
Category: #FasterThanNight

May 2, 2014


Major Tom gets all the airtime, but there’s a whole lot more going on backstage in Houston. Meet some of the amazing tech crew on Faster than Night.


Dynamixyz Performer specialist Solène Morvan tests custom iPhone head-motion-tracking developed by SIRT Centre


Maziar Ghaderi (Live VFX Switcher, Visual Media Manager, and new OCAD Master of Design)


Ryan Webber (Interactive AV Design, System Integration, Isadora Pro User)


LaLaine Ulit-Destajo is the show’s Sound Operator, but also programmed two custom Twitter API interfaces in Processing


Heather Gilroy is our Assistant Interactive Writer and Assistant Live Story Editor, but does not register on camera (that’s her in the fifth row)

April 30, 2014

Melee Hutton (left, in Toronto rehearsal room) with Pascal Langdale (on laptop, as animated Caleb Smith, Skyping in from Stuttgart)

by Melee Hutton

Working on Faster Than Night has been a literal education for me. Not just in the field of social media, where my knowledge hovers somewhere around my own Facebook page and not much more, but also in the world of artificial intelligence.

Playing a quantum A.I. is flattering but daunting. Along with it come actor questions I’ve never asked before, but perhaps will ask more often in the future.

“Can I feel?” I’ve asked our director, Alison Humphrey. “Is guilt something I know?” “Do I have a sense of humour?” Questions I take for granted when playing a human have become charged for me. “How much can I feel it?” “How do I get to feel it?” “Can I do anything that Caleb hasn’t programmed me to do?”

And so I have spent the last few weeks contemplating: “What ultimately makes us human?” I’ve written down words as they come to me in rehearsal, such as Humility, Humour, Love, Guilt, Regret, Defiance, Rebellion, Trust, Imagination, and Grace. If an animal can feel them, is it possible that in time computers will too, or will some things remain impossible to create outside of the human condition?

Faster Than Night is set fifty years into our future, and ISMEE stands for Interactive Socially-Mediated Empathy Engine. Once Caleb invented me, my empathetic abilities made him a multi-billionaire. I am many things for him: the source of all answers, the predictor of odds, a surrogate mother figure, the connector of humanity to one another.

But who is ISMEE to herself? Alison asked me one day in rehearsal, “What does ISMEE want?” In a thirty-year career as an actor, that question has never stumped me before. “Wow,” I thought, “this isn’t going to be simple.”

When we look at the world through artificial intelligence, what are we hoping to see? That we are different, or that we are the same? This led me to think about theatre and our contribution. Perhaps our interest in A.I.s is driven by the need to see ourselves in relation to the universe – we need to know that we are not alone, we need to know that we are capable of creation that is so imaginative that we can’t tell the difference between it and reality. That we, as a species, can recreate ourselves even as we destroy ourselves, and that our imaginary friends can exist in 3D into our adulthood.

That, in a nutshell, is why I work in theatre. I’m grateful to ISMEE for making me rethink things to which I was sure I already knew the answers.

April 28, 2014

by Alison Humphrey

Andy Serkis as Captain Haddock, Gollum and King Kong (GQ)

“Naturalism is a good word for a bad idea.
Art is to do with transformation”
– Ariane Mnouchkine

In our first and latest posts, we explored how motion capture and real-time animation work. But we haven’t really talked about why one would want to use them in live performance, or what stories they tell best.

These are key questions. Live animation takes a lot of extra work and costs a bomb. It makes it hard to describe the show to theatre audiences (other parts make it hard to explain to game designers and 3D animators). And it affects the story in a fundamental way. Or at least, it should. Otherwise you’re just sprinkling digital pixie dust on top of a play and hoping no one will notice the story would be better told in television or videogame or non-mocap-theatre form.

Throughout the script development process we’ve asked again and again: why are we telling this story with this technology?

Performance capture has traditionally been used for movies with supernatural or fantastical characters: Gollum in The Lord of the Rings, the Na’vi in Avatar, Caesar in Rise of the Planet of the Apes, Davy Jones in Pirates of the Caribbean, and the Hulk in The Avengers.

But for a naturalistic human character, filmmakers still usually prefer to shoot a real human actor. This is partly because it’s cheaper, and partly because of the peril of the uncanny valley, wherein the closer a computer-generated model gets to photorealism, the more disturbing it looks:

[Graph: the uncanny valley]

Human-looking digital doubles are more common in videogames, where the nature of interactive narrative makes it unfeasible to shoot every possible branching story variation:

Ethan Mars (played by Pascal Langdale) in Heavy Rain

So how does motion capture fit into theatre? The short answer is, not easily. Modern theatre tends to stick to certain kinds of stories. Kitchen-sink realism has owned the modern stage for generations. I’m not sure why. Maybe because it’s cheaper, maybe because it’s more grown-up and respectable.

But it wasn’t always thus. Before celluloid split drama into two solitudes, stage and screen, theatre was teeming with the supernatural, the fantastical, the mythological, the magical.

Ancient Greek theatre had its satyrs in the satyr plays. The Erinyes (the original “Avengers”) in The Eumenides. The god Dionysus in The Bacchae.

Shakespeare put fairies and an animal-headed man into A Midsummer Night’s Dream; witches into Macbeth and Henry VI parts 1 & 2; ghosts into Hamlet, Richard III and Julius Caesar; and spirits into The Tempest.

Barong mask (Rautenstrauch-Joest-Museum)

South and East Asian traditions have kabuki ghosts, the Monkey King, and the Balinese witch Rangda battling the lion-spirit Barong.

In fact, The Lion King director Julie Taymor drew on her early experiences in Bali, and her fascination with Japanese bunraku theatre, when creating the stage version of Disney’s big-cat Hamlet. Her staging feeds the audience’s joy at watching a puppeteer and a puppet at the same time, a phenomenon she calls the “double event”.

Handspring Puppet Company similarly designed its War Horse to reveal the puppeteers within. For my money, that made the stage version infinitely more fun than the Spielberg movie. We know it’s not a real horse, but like Fox Mulder, we want to believe. The same team has created human puppets for plays like Tooth and Nail and Or You Could Kiss Me, but like their animals, these are still stylized rather than naturalistic.

Motion capture has been used on stage by Disney theme parks (Stitch Live!) and Dreamworks musicals (Shrek the Musical’s Magic Mirror), but both of these seek to reproduce characters from animated movies in a live performance setting.

Dance companies have been far more inventive with mocap technology. Two of the earliest experiments were Bill T. Jones’s Ghostcatching (1999), and Merce Cunningham’s Loops (2000), a hands-only dance that brings to mind Samuel Beckett’s waist-up drama Happy Days, neck-up Play, and disembodied mouth monologue Not I.

Faster than Night is similar to these Beckett body-parts in that the real-time animation shows only the head of astronaut Caleb Smith, as he banters with his spaceship’s artificial intelligence and his Earth audience from inside his hibernation pod. But we hope it shares even more with Krapp’s Last Tape – a story that is inextricably enmeshed with the technology used to tell it.

Beckett wrote that play in 1958 after seeing his first reel-to-reel tape recorder at the BBC. He became fascinated, like Atom Egoyan, by “human interaction with technology… the contrast between memory and recorded memory.”

We hope Faster than Night also tells a story about the human interaction with technology. About art and transformation. About escaping the gravity of realism.

What story is that?

Please join us in the theatre on May 3rd to find out… then tell us whether you think the what fit the how. And why.

Melee Hutton (left, in Toronto rehearsal room) with Pascal Langdale (on laptop, as animated Caleb Smith, Skyping in from Stuttgart)

April 18, 2014

by Alison Humphrey


Balinese topeng masks (photo: Gunawan Kartapranata)

As far as we’re aware, Faster than Night is one of the first handful of theatre productions in the world to use facial performance capture live on stage. But while from one perspective it is cutting-edge technology, it is also just the latest mutation of an artform that has been used in theatre for millennia: the mask.


Kwakwaka’wakw mask “Face of Dzunuk’wa” (photo: Leoboudv/UBC Museum of Anthropology)

A mask is simply non-living material sculpted into the shape of a face. It can allow a human performer to transform into a different human, or an animal, or a supernatural being.


“Ko-jo” (old man) Noh theatre mask (Children’s Museum of Indianapolis)

It can allow a young person to play an old person, or a man to play a woman.

It can define a character by a single facial expression (or, if the mask-maker is very skilled, several expressions depending on the viewing angle).

In the Balinese tradition of topeng pajegan, a single dancer portrays a succession of masked characters with different personalities: the old man, the king, the messenger, the warrior, the villager.  A whole epic story can be told by one skilled performer, simply by switching masks and physicalities.

Just as animation is a series of still drawings, or film is a series of still photos projected fast enough to create the illusion of movement, real-time facial capture is fundamentally a process of switching and morphing between dozens of digital “masks” – at rates of up to 120 frames per second.

But how does real-time facial capture actually work?

We’re working with the Dynamixyz Performer suite of software, which takes live video from a head-mounted camera, and analyzes the video frames of the actor’s face. In the image below, you can see it tracking elements such as eyes, eyebrows and lips.

[Screenshot: Dynamixyz Performer tracking eyes, eyebrows and lips on the actor’s face]

For each frame of video, the software finds the closest match between the expression on the live actor’s face, and a keyframe in a pre-recorded library of that actor’s expressions, called a “range of motion”. That library keyframe of the actor corresponds to another keyframe (also called a blendshape) of the 3D CGI character making the same expression. By analyzing these similarities, the software can “retarget” the performance frame by frame from live actor to virtual character, morphing between blendshapes in seamless motion. This process is called morph target animation.
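As a rough illustration of what that retargeting step means computationally, here is a minimal sketch in Python. Dynamixyz Performer is proprietary and far more sophisticated; the feature vectors, expression names and simple inverse-distance weighting below are stand-in assumptions, not its actual algorithm.

```python
import numpy as np

# Neutral character mesh: N vertices in 3D (placeholder geometry).
neutral = np.zeros((1000, 3))

rng = np.random.default_rng(0)
# Blendshapes stored as per-vertex offsets ("deltas") from the neutral
# mesh, one per expression in the character's "range of motion" library.
deltas = {
    "smile":    rng.normal(size=(1000, 3)) * 0.01,
    "brows_up": rng.normal(size=(1000, 3)) * 0.01,
    "jaw_open": rng.normal(size=(1000, 3)) * 0.01,
}

# Library of the *actor's* expressions: feature vectors extracted from
# the head-cam video (eye, brow, lip positions), one per keyframe.
library = {
    "smile":    np.array([0.9, 0.1, 0.2]),
    "brows_up": np.array([0.1, 0.9, 0.1]),
    "jaw_open": np.array([0.2, 0.1, 0.9]),
}

def retarget(frame_features):
    """Map one video frame's tracked features to a posed character mesh."""
    # Weight each library expression by its closeness to the live frame.
    # (The production software solves this far more carefully; simple
    # inverse-distance weighting stands in for that solve here.)
    dists = {name: np.linalg.norm(frame_features - feat) + 1e-6
             for name, feat in library.items()}
    weights = {name: 1.0 / d for name, d in dists.items()}
    total = sum(weights.values())

    # Morph target animation proper: neutral mesh plus a weighted sum of
    # blendshape deltas, recomputed every frame (at up to 120 fps).
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += (w / total) * deltas[name]
    return mesh

# One frame of tracking data in -> one posed digital "mask" out.
posed = retarget(np.array([0.8, 0.15, 0.25]))
```

Every frame, in other words, the character’s face is rebuilt as a blend of pre-sculpted masks, which is exactly the topeng analogy above.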

Here are a few rough first-draft keyframes for Faster than Night. On the right is a head-cam video frame of Pascal Langdale, and on the left is an animation keyframe by Lino Stephen of Centaur Digital:

[Five pairs of first-draft keyframes: Pascal Langdale’s head-cam frames alongside Lino Stephen’s matching character keyframes]

There are dozens more expressions in the “range of motion” library for this particular actor / character pair. Some of them are “fundamental expressions”, drawing on the Facial Action Coding System developed by behavioural psychologists Paul Ekman and Wallace V. Friesen in 1978:


Tim Roth played a character inspired by Paul Ekman in the 2009 TV series Lie To Me

Other facial “poses” convey more subtle or secondary expressions…

…while still other expressions represent phonemes, the building blocks of lip-sync, as found in traditional animation:


Disney animator Preston Blair (1948)

This image of Aardman Animation’s stop-motion character Morph gives a tangible metaphor for what’s going on inside the computer during the real-time animation process:

[Photo: Aardman’s Morph with his library of replacement heads]

With each frame, a new head is taken out of its box and put on the character, just like the topeng performer switching masks.

The cumulative effect creates the illusion of speech, of motion, of emotion… of life.

April 17, 2014

A longer post is coming soon with details on the 3D facial model work being done by Centaur Digital and Dynamixyz, but to tide you over, here are a few more elements from our talented creative team.

First up, the design of the starship Envoy by Mike Nesbitt and Caroline Stephenson of Capture Scratch:

[Two exterior concept sketches of the starship Envoy]

Next, “Like Clockwork” by composer and audio designer Will Mountain, via Vapor Music:

And our latest piece of concept art was developed by Clementine Konarzewski, who is playing the voice of astronaut Caleb Smith’s daughter, Katy, age 6:

[Concept art: an octopus alien family]

April 8, 2014

by Pascal Langdale

In the UK, I’d be called a “jobbing actor”.  That means that I work across all sorts of media, picking up a wide variety of acting work that provides a steady income.  I have worked in corporates and commercials, stage, film and TV, and interactive games. Going a little further back, I have published poetry, written a radio play, co-produced movement-based theatre, headed a flamenco company, and once danced with Cyd Charisse.  In my early thirties I studied nonverbal behavior and re-appraised my acting techniques.  When I say I’m a RADA graduate, I sometimes think people expect a more traditional actor – a style or a working process which I can also embrace when necessary.

Every form I’ve worked in has certain established rules and conventions, developed over years (or centuries) to serve the best interests of storytelling within that medium. Each has its own artifice, relies on a shared experience, and requires adaptation of the core craft of acting.

So it was with a jobbing actor’s irreligious approach that I made friends with the newcomer on the storytelling scene: performance capture.  In this medium, the rules and conventions are still being established, found wanting, re-established, changed, superseded and hotly debated.

In Faster than Night, my character Caleb Smith, a cross between Tony Stark (Iron Man) and Chris Hadfield (THE man), relies on social media to help decide his fate in a life-or-death situation.  Animated in real time.


With this show, we’re exploring whether performance capture can play nice with traditional theatre to create a new form of interactive storytelling for live (or live-streamed) entertainment, and a new role for an engaged audience.

Now, no individual element of our production is entirely new. Faster than Night‘s constituent parts have a broad history.  Our live-animation technology is cutting edge, the result of an explosion of development in the field of facial capture and analysis over the past twenty years. In the past three, the range of capture systems available has expanded, offering greater options for quality at differing price-points. Facial capture tech is becoming democratised, and this key component of human-driven animation is finally reaching the hands of a new generation of artists and producers.

As soon as facial capture became advanced enough to animate in real time with some quality, the fusion of video-game and film technology with theatrical storytelling became inevitable. Theatre has always grabbed whatever innovative tech could help tell the story – from Greek masks that allowed a character to be heard and seen from the back of a large open-air amphitheatre, to painted backdrops and gaslight.

Why should we stop with digital capture? In recent years video games have created a desire for direct influence over a narrative,  and social media has provided a platform to share your feelings about it. Where earlier artists would have needed a show of hands to decide a voted ending, Twitter allows us to canvass the opinions of countless viewers.  Tweets have been used as source material in theatre productions (#legacy, one of our fellow HATCH productions, being the most recent – so recent it hasn’t even opened yet!). The viewer-poll competition format, used most famously on American Idol, is spreading to shows like Opposite Worlds or even the scripted TV drama Continuum.
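Mechanically, the counting side of such a poll can be tiny. Our show’s actual Twitter interfaces were written in Processing by our crew; the Python fragment below is purely illustrative, and the option hashtags, the (user, text) input format and the one-vote-per-user rule are assumptions, not the production design.

```python
from collections import Counter

OPTIONS = {"#stay", "#go"}  # hypothetical voting hashtags

def tally(tweets):
    """tweets: iterable of (user, text) pairs already fetched elsewhere."""
    votes = {}
    for user, text in tweets:
        cast = {tag for tag in OPTIONS if tag in text.lower()}
        if len(cast) == 1:              # ignore ambiguous double-votes
            votes[user] = cast.pop()    # a user's latest tweet wins
    return Counter(votes.values())

sample = [
    ("@fan1", "He has to #STAY!"),
    ("@fan2", "#go #go #go"),
    ("@fan1", "changed my mind... #go"),
]
print(tally(sample))  # Counter({'#go': 2})
```

The hard part is not the arithmetic; it is everything this sketch ignores, such as pacing the vote inside a live scene and deciding what the result does to the story.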

Interactive stories as created by video-game developers and writers of choose-your-own-adventure books must fix their narratives in stone. However beautifully executed, they can only give a finely-crafted illusion of unlimited freedom. An ancestor of the interactive game can be found in theatre, which has a longer heritage of improvisation and audience participation. The British have a long history of music hall and panto, where a rougher but no-less-organised form of audience participation is part of the entertainment. (Did I mention my first job at the age of 17 was in a pantomime?)

[Photo: the author in Dick Whittington, 1992]

The first vote-based multiple-ending play was Ayn Rand’s Night of January 16th (1935), a courtroom drama in which the jury was drawn from the audience. In the 1970s, Augusto Boal anticipated the internet-enabled art of flash mobs with Invisible Theatre, “in which an event is planned and scripted but does not allow the spectators to know that the event is happening. Actors perform out-of-the-ordinary roles which invite spectators to join in or sit back and watch.”

Despite the use of Twitter interaction or facial capture animation, our core goal remains primal: to tell a good story. If we fail at that, all the cutting-edge tools we might use become a mere distraction.

Yet wherever linear narrative is challenged, sharing a satisfying story becomes notoriously difficult.  I will be learning forty pages of a script that occasionally leaps into improvisation with the audience through their ambassador, @ISMEEtheAI, voiced by Melee Hutton.  Learning a script is a challenge, but it’s one I at least know the measure of. Playing an interactively-led character presents a number of far less familiar challenges, which (even more than the performance capture) is why this show is particularly experimental. The interactive aspects demand our greatest attention, and our boldest moves.

Here’s an example – a scan of my own re-typed script from Heavy Rain, an interactive game or movie with multiple narrative paths that led to one of twenty or so differing endings.

 

[Scan: a three-option page from Pascal Langdale’s re-typed Heavy Rain script]

As an actor, and as a person, understanding behaviour relies on a certain level of causality. For example, a mood: “I’m in a bad mood, so I snap at my partner.” Or a learnt precondition: “I had a violent father, so I struggle with authority.” Or a hardwired precondition: “I am genetically predisposed to bouts of euphoria.” All these are examples of what can cause behaviour.

In the script above, the player had three choices for how my character Ethan Mars could interact with an unknown “helper”, Madison.  Going from left to right, Ethan (1) seeks basic info, (2) wonders at her selflessness, and (3) suspects her motives.  These are quite different (although there are more examples of more extreme differences elsewhere), and demand that the acting choice in the moment before the player choice be appropriate for all three options. Moreover, each acting choice must also finish off in a way that is consistent with Madison’s response.


Ethan Mars and Madison Paige, Heavy Rain
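To make that constraint concrete, here is a minimal data sketch in Python. It is not Heavy Rain’s engine format or script notation; the names, lines and structure are hypothetical stand-ins for the shape of the problem: one shared lead-in, three branches, one shared continuation.

```python
from dataclasses import dataclass

@dataclass
class ChoiceBeat:
    lead_in: str   # acted *before* the choice: must suit every option
    options: dict  # player prompt -> the character's line on that branch
    rejoin: str    # the response: every branch must land here plausibly

# Invented illustration of the three-way beat described above.
beat = ChoiceBeat(
    lead_in="ETHAN studies the stranger tending his wounds.",
    options={
        "ASK":     "Who are you?",               # seeks basic info
        "WONDER":  "Why are you helping me?",    # wonders at her selflessness
        "SUSPECT": "What do you really want?",   # suspects her motives
    },
    rejoin="MADISON: Right now you need rest, not answers.",
)

def play(beat, player_pick):
    # Whichever branch the player picks, the scene enters and exits the
    # same way: the choice is a short detour, not a new storyline.
    return [beat.lead_in, "ETHAN: " + beat.options[player_pick], beat.rejoin]

print("\n".join(play(beat, "WONDER")))
```

The actor’s problem lives in the first and last fields: one physical and emotional state that must credibly launch all three lines, and one that must credibly receive the reply.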

The lack of pre-decision required in this situation is not as foreign to an actor as one might think.  Many actors strive to be “in the moment,” to imitate life itself.  Not knowing a character’s full behavioral palette is also not uncommon. Playing the character Karl Marsten in Bitten, I did not receive scripts for all thirteen episodes in advance. This is par for the course for a TV series, but even though this one was based on a series of novels I could pick up and read anytime, some characters’ fates departed radically from those in the books, in order to better serve the unique needs of television storytelling.

When a pre-scripted, pre-recorded game story with multiple endings is developed, the creative team tries to balance the player’s possible choices from a “neutral” starting point, making sure that each of them is equally plausible and possible. The nature of live theatre means we don’t need an astronomical budget to shoot every possible outcome. This lets us open up to more variety of audience input, more freedom, more chaos.

Live theatre also means the audience’s final choices can no longer have guaranteed neutral preconditions, because they may have been biased by an unexpected experience that night, something that didn’t happen any other night. A comment, a look, a pause, even a cellphone ring, could pull the audience’s attention away from a vital piece of balancing information, or push them towards a particular relationship with Caleb, ISMEE, Xiao or Dmitri. Every show changes, because every audience changes.

So we need you. Yes, we need people with smartphones who, if not already familiar with Twitter, are willing to give it a try. But more than that we need an audience willing to engage. Become an active participant, and if the experiment is successful, you’ll come away feeling emotions that are harder to come by with passive entertainment: guilt, endorsement, responsibility, vindication, shame, or triumph.

If you’re up for that, start following @ISMEEtheAI on Twitter, and bring your phone along to Harbourfront on May 3rd, ready to participate in your own unique experience of our show.


Pascal Langdale is an actor, producer and writer on Faster than Night.

April 5, 2014

by Alison Humphrey

“Interactivity” is one of those words.

What does it mean in theatre?

What does it mean in storytelling?

What does it mean to you? 

As we develop Faster than Night, which was conceived with the slippery and baffling ambition to involve the audience in the story, I’ve been thinking a lot about what it means to me.

When I was sixteen, there was a comic convention in downtown Toronto. This was way back before comics were cool, a good decade before the invention of the web let geeks find each other en masse, and longer before the San Diego Comic-Con went media-tastic and started attracting 130,000 paying attendees. We’re talking the basement of the Hilton, a bunch of folding tables with dealers selling back-issues from boxes, and a special guest or three.

I was a huge fan of Marvel Comics’s Uncanny X-Men, which was ahead of the curve in introducing strong female superheroes. (One of those heroines, the teenage Kitty Pryde, was the original protagonist of one of my favorite X-Men storylines, “Days of Future Past“. Yet when its blockbuster movie adaptation is released next month, the very male and very box office-friendly character Wolverine will be taking her role in the narrative. That’s a different blog post…)

I’d heard that legendary comics inker Terry Austin was going to be at the convention, and decided I would make a Fimo figurine of one of his characters as a gift.

He had left the X-Men a couple of years prior, so I sculpted a character from his new gig, Alien Legion. I wasn’t as much of a fan of that book, but Sarigar was a half-snake alien – an awesome challenge that involved a coat-hanger armature and a lot of finger-crossing in the firing process.

I snapped some photos before I took it downtown and gave it to one of my real-life heroes. He was gracious and appreciative, giving me some original art in return.

I started thinking about that formative early moment last June, when the Royal Shakespeare Company partnered with Google+ on an ambitious, interactive theatre/social media hybrid called Midsummer Night’s Dreaming (a followup to 2010’s Such Tweet Sorrow, remixing Romeo and Juliet via Twitter). I’d heard about the concept from the RSC’s forward-thinking digital producer Sarah Ellis, and was instantly fascinated.

Google’s Tom Uglow (co-director with the RSC’s Geraldine Collinge) wrote eloquently beforehand about Why we’re doing it:

We are inviting everyone on the internet to take part. We’d rather like 10,000 contributors extending the RSC across the world, commenting, captioning, or penning a lonely heart column for Helena. Maybe people will invent their own characters. Or make fairy cupcakes; share photos of their dearest darlings as changelings; send schoolboy marginalia about “wooing with your sword”; compose florid poetry to Lysander’s sister; or debate with Mrs Quince on declamation. Or just watch online….

I followed the project as it unfolded online over the course of midsummer weekend, enjoying the original material commissioned from professional writers and artists (“2000 pieces of material for 30 new characters to be shared online non-stop for 72 hours”), and even Photoshopping a few memes and lolcats of my own.


It was fun, and it certainly taught me a lot about the mechanics of Google+ as a social media platform. There were lots of thoughtful analyses after it was over, but for me the most interesting aspect was the strange mix of emotions I was feeling around having participated.

How does the audience feel crossing the line that usually separates them from the professionals providing their entertainment? Passive observation is safe. Active creativity is not. As kids, we all start out as unselfconscious artists, writers, musicians and dancers. But in adolescence, when social pressures and fears kick in, most of us transition into observers and “consumers” of culture made by other people.

For me, the Midsummer Night’s Dreaming project provoked the complicated mix of reactions such invitations always do – the thrill of inclusion or transgression, yes, but also the fear that my contribution won’t be “good enough” or that too much enthusiasm will make me uncool.

More than anything, it brought me back to that kid sitting in her room, creating something to give to an artist whose work she admired. Yes, part of that was the fan hoping to garner the attention of the idol, however briefly. And part was the desire to immerse in the fictional world. But it was also just wanting to step into the creative sandbox and play.

Many people think of interactive story in terms of the classic ’80s Choose Your Own Adventure books. I certainly flipped around their pages and beat my head against the Infocom computer game adaptation of Douglas Adams’s Hitchhiker’s Guide to the Galaxy long enough to pick up the basic mechanics of branching narrative.

Adams took his next step with interactive fiction in 1998 with the Myst-like Starship Titanic, for which I had the dream job of producing an interlocking set of fictional websites to promote the CD-ROM game in the year leading up to its release. These websites were an early version of what would later be dubbed “alternate reality games”, and a hidden forum on one of them unexpectedly became home to a community that was still active in 2011.

Alternate reality games have blossomed as the web has evolved tools that encourage more participatory culture (two insightful analyses: The Art of Immersion and Spreadable Media). Videogames have evolved too. Some of them (like Heavy Rain) provide almost cinematic interactive stories that appear to offer freedom from the author’s control, but actually run on narrative “rails” that branch with the player’s choice, then eventually re-join. At the other end of the spectrum, “sandbox” or “open-world” games give players far more freedom, sometimes at the expense of emotional engagement or satisfying dramatic structure.
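That “rails” structure is easy to picture as a small graph. Here is a minimal sketch, with invented scene names, showing how branches fan out and re-join so that many playthroughs funnel into only a few endings:

```python
# Branch-and-rejoin narrative "rails": a directed graph, not an
# ever-widening tree. Scene names are illustrative only.
RAILS = {
    "crash":     ["fight", "hide", "negotiate"],  # player choice: 3 branches
    "fight":     ["aftermath"],
    "hide":      ["aftermath"],                   # ...which all re-join
    "negotiate": ["aftermath"],
    "aftermath": ["ending_a", "ending_b"],        # a second, smaller fork
    "ending_a":  [],
    "ending_b":  [],
}

def paths(node="crash", trail=()):
    """Enumerate every playable path through the graph."""
    trail = trail + (node,)
    if not RAILS[node]:
        yield trail
    for nxt in RAILS[node]:
        yield from paths(nxt, trail)

for p in paths():
    print(" -> ".join(p))
# Six distinct playthroughs, but only two endings: the middle choice
# always returns to the rails.
```

Sandbox games sit at the other extreme: the graph is so large and loosely connected that no author can guarantee a dramatic shape to any one path through it.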

Theatre itself has had an uneasy relationship with audience participation for as long as anyone can remember. In recent years, “immersive theatre” companies like Punchdrunk and Secret Cinema have grabbed the spotlight by setting audiences loose to literally roam through their fictional worlds. But decades before them, forum theatre creator Augusto Boal engaged the audience in a very different spirit, with techniques like invisible theatre, “a play (not a mere improvisation) that is played in a public space without informing anyone that it is a piece of theatre”.

And last year, in real life, Commander Chris Hadfield engaged with his audience across a fourth wall some 400 km thick.

In Faster than Night, we’re inviting our own audience to interact with fictional astronaut Caleb Smith via his artificial intelligence ISMEE. You can find her on Twitter as @ISMEEtheAI. In the weeks leading up to the performance, as people tweet questions about his mission to travel faster than light, Caleb will answer. And during the show itself, he will need the audience’s help to make the ultimate choice.

But first a question for you.

What does interactive story look like?

Is it this?

[Diagram: the branching map of a Choose Your Own Adventure book]

Or is it this?

Or does it mean something entirely different to you?

Interact with us in the comments!

Alison Humphrey is directing and co-writing Faster than Night. She has a master’s in interactive multimedia, but that doesn’t mean she’s figured out the damn thing yet.

March 26, 2014

Hawking radiation and the “black hole firewall” paradox (Scientific American)

by Alison Humphrey

When you’re writing a science fiction story about wormholes and time travel, sooner or sooner you’re going to bump into Stephen Hawking. The world-famous physicist and Brief History of Time author even has some theoretical black hole quantum-effect radiation named after him, for heaven’s sake. He’s just. that. good.

But there’s another resonance with Faster than Night that we didn’t anticipate. Hawking, like us, uses facial-capture technology.

Our first blog post described the head-mounted camera that will track every movement of actor Pascal Langdale’s mouth, cheek, eye and brow, and pass the data to a computer to animate his astronaut avatar.

Stephen Hawking’s “headcam” is a tiny infrared sensor mounted on the corner of his eyeglasses, but it is infinitely more powerful than its size suggests.

As a result of the motor neuron disease that has wasted his body, only the shortest neural pathways, such as that between his brain and his cheek muscle, are still under his precise control. The infrared sensor on his glasses detects changes in light as he twitches his cheek, and this almost indiscernible bit of motion-capture is his sole means to control everything he does on his computer.

Glasses-mounted infrared sensor (Photo by T. Micke)

On the screen, a cursor constantly scans across a software keyboard, until his signal stops it to select a letter. The software has a word-prediction algorithm, so he often only has to type a few letters before he can select the whole word. When he has built up a sentence, he can then send it to his iconic voice synthesizer, a piece of technology which is now almost 30 years old.
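The loop itself is simple, even if the engineering around it is not. Here is a minimal, illustrative Python sketch of a single-switch scanning keyboard with word prediction; the software Hawking actually uses is far more sophisticated, and the tiny lexicon and selection logic below are assumptions for demonstration only.

```python
import itertools

# A toy lexicon standing in for a real word-prediction model.
LEXICON = ["the", "then", "there", "time", "universe", "black", "hole"]
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def predict(prefix, k=3):
    """Offer up to k whole-word completions for the letters typed so far."""
    return [w for w in LEXICON if w.startswith(prefix)][:k]

def scan(accept, typed=""):
    """Cycle choices until the user's single switch fires.

    Each pass offers predicted words first, then single letters, so a
    whole word can often be selected after typing only a few letters.
    """
    choices = predict(typed) + list(LETTERS)
    for choice in itertools.cycle(choices):
        # In the real system the cursor dwells briefly on each choice;
        # here we just ask the (simulated) switch whether to select it.
        if accept(choice):
            return choice

# Simulated user who wants "the": fires the switch when it is offered.
picked = scan(lambda c: c == "the")
print(picked)  # -> "the"
```

Everything about the user’s speed hinges on how good `predict` is and how fast the cursor can safely sweep, which is exactly the race described below.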

A new documentary released last year, titled simply Hawking, reveals the race against time to keep his communications technology ahead of the progression of his disease. (This sequence begins at the 19-minute mark.)

Jonathan Wood, Stephen’s graduate assistant: “Stephen’s speed of communication has very gradually slowed down. A few years ago, he was still able to use his hand-switch and able to communicate by clicking this switch on his wheelchair. When he wasn’t able to do that anymore, we switched over to a switch that he’d mounted on his cheek. But with him slowing down with that, we’ve approached his sponsors, so they’ve been looking into facial recognition.”

Intel technician: “This is a high-speed camera which will allow us to see verifying details on the facial expressions, and this will help us to improve the rate of your speech and input.”

Stephen Hawking: “I have had to learn to live with my slow rate of communication. I can only write by flinching my cheek muscle to move the cursor on my computer. One day I fear this muscle will fail. But I would like to be able to speak more quickly…. I am hoping this current generation of software experts can harness what little movement I have left in my face, and turn it into faster communication.”

This intriguing and dramatic arms race (face race?) has Intel’s best and brightest looking for ever more sensitive sensors and new techniques to give the physicist better ways to control his computer.

Cathy Hutchinson drinks from a bottle using the DLR robotic arm (Photos: Nature)

It even has one American scientist investigating a means by which Hawking may one day be able to “write” directly from his brain, bypassing the facial muscle altogether. As the BBC reports, “In 2011, he allowed Prof Philip Low to scan his brain using the iBrain device… a headset that records brain waves through EEG (electroencephalograph) readings – electrical activity recorded from the user’s scalp.”

Last month’s issue of National Geographic described similar explorations at Brown University, where Cathy Hutchinson maneuvered a neurally-controlled robot hand to drink a cinnamon latte.

Next stop: hoisting a brewski with the Canadarm?

You think we’re joking, but the Hawking documentary ends with a delightful sequence in which Richard Branson of Virgin Galactic suggests that the best astronaut might be a man whose mind has never been bound by the gravity that holds his body:

“I just couldn’t think of anybody in the world that we’d rather send to space than Stephen Hawking. And, you know, we haven’t offered anybody a free ticket, but it was the one person in the world that we felt, ‘We’d love to invite you to space.’ And it was incredible when he accepted. I went up and saw him that day, and he told me to hurry up and get the spaceship built because he wasn’t going to live forever. Hopefully next year, we’ll take him up. I think that he feels that if he goes into space personally, he can lead the way.”

March 21, 2014


Earlier this week, ISMEE, the artificial intelligence on board the starship Envoy, issued an invitation and a challenge:

And you know there’s nothing people like better than tweeting their #foodporn.

Craving some freeze-dried deep-fried beef? Camembert squeeze cheese? Soup in a tube?

Share a taste with @ISMEEtheAI and join the #Fastronauts today!

Astronauts Thomas P. Stafford and Donald K. “Deke” Slayton hold containers of Soviet space food in the Soyuz Orbital Module. The containers hold borsch (beet soup) over which vodka labels have been pasted. This was the crews’ way of toasting each other. (Source: Wikimedia Commons)

March 19, 2014


By now, if you live anywhere on the planet but under a rock, you’re probably aware that billionaire Caleb Smith and his faster-than-light spaceship the Envoy will be blasting off on May 3rd.

You can watch the launch live from the shores of Lake Ontario… Or you can watch Earth shrink in your rear-view mirror as one of two lucky winners of a new competition announced today!

ISMEE, the ship’s artificial intelligence (or more accurately, its Interactive Socially-Mediated Empathy Engine) has just tweeted the news:

Would space suit you? One way to find out is to take this astronaut test, created by the British series Live from Space based on criteria set by the European Space Agency.

Or you could measure yourself against the selection criteria for applicants to the Mars One settler program. 75 Canadians are among the 1,058 short-listed for the one-way trip.

But perhaps the most important thing to consider is that the Envoy’s voyage will be a journey in time as well as space: due to the time dilation of faster-than-light travel, when our #Fastronauts return, they’ll be a year older, but ten years will have passed on Earth.
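(For the physics-minded: setting aside the fiction of faster-than-light flight, that one-to-ten ratio is exactly a Lorentz factor of ten, which for an ordinary sublight ship would mean travelling at about 99.5% of light speed.)

```latex
\gamma \;=\; \frac{t_{\text{Earth}}}{t_{\text{ship}}}
        \;=\; \frac{10~\text{years}}{1~\text{year}} \;=\; 10,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\;\Longrightarrow\;
v = c\sqrt{1 - \tfrac{1}{\gamma^2}} = c\sqrt{0.99} \approx 0.995\,c .
```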

Still think you’ve got the right stuff? Strut it in your first challenge:

 

Or, if you’re more the down-to-Earth type, follow @ISMEEtheAI on Twitter to watch our wannabe rocketjockeys angle for one of two spaces in history!