As the Humane Technologies research team first began contemplating the 2016-2017 "Livable Futures" theme in Autumn semester, we held a series of sandbox sessions in the ACCAD labs and studios, each led by a different team member. The purpose of these sandboxes was to engage in a "doing thinking" process together with various humane technology frameworks in order to explore potential lines of inquiry, develop research questions, and build relationships. What follows are notes developed in conjunction with this particular sandbox session.
Sandbox: Motion Capture with Vita Berezina-Blackburn
Wednesday, November 30, 9:30-11:30am in the ACCAD Motion Lab
Attendees: Vita Berezina-Blackburn, Alex Oliszewski, Norah Zuniga Shaw, Peter Chan, Scott Swearingen, Scott Denison, Alan Price, Mindi Rhoades, Hannah Kosstrin, Isla Hansen
Sandbox Framework for Collaboration:
Investigation of approaches for presenting narratives in full-body, room-scale VR scenarios driven by practices in theater production and acting. The Sandbox will include demos of ACCAD's current state of available technologies and existing VR experiences from the Marcel Marceau project, as well as related creative practices. Tech: Vicon Motion Capture System, MotionBuilder, Oculus.
Anticipation / Expectation:
• VR, motion capture, and training performers, live storytelling in physical and virtual worlds, theater artists driving VR creation
Disposition / Experience:
Thoughts gleaned from participants during and after the sandbox:
• Two characters were having a conversation in a science fiction future and I was able to walk around as an invisible third-party (fly on the wall) and observe.
• The conversation was secondary as I was exploring the view and props from this high-rise virtual set design. But I could have easily replayed the scene, taken a seat beside them and listened more intently the second time.
• Is this a significantly more entertaining means of experiencing narrative?
• The thought of 'stepping' into someone's experience was very interesting, as was the question of whether I would be more likely to follow his mesh or his shadow.
• When doing 180-degree turns in VR I need some sort of reflection so I can see his movement when he goes off-screen.
• Having multiple instances works well pedagogically or as a learning environment, but not so much from the perspective of "appreciate this historic performance."
• Having a CG hand that can interact with the environment would be useful and engaging. Placing an invisible trigger-box around it could easily test for collision.
• Using headphones would connect with the experience better because the audio would be more contextually sensitive. For instance, the MOCAP lab walls bounce sound differently than the tight quarters I was experiencing in VR. Scale is always an issue.
• In some ways this reminded me of 'manual cinema', but the audience would also need headsets to approach parity with actors.
• The concept of 'priming for the meta-aesthetic' was very interesting.
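The invisible trigger-box idea mentioned above can be sketched as a simple axis-aligned bounding-box (AABB) test against the tracked hand position. This is a hypothetical, engine-agnostic Python sketch, not ACCAD's actual implementation; all names and coordinates are invented for illustration.

```python
# Hypothetical sketch of an invisible trigger box around a prop:
# an axis-aligned bounding box tested against the CG hand's world
# position each frame. Names and values are assumptions.

from dataclasses import dataclass


@dataclass
class TriggerBox:
    # Opposite corners of the invisible box, in world coordinates (meters).
    min_corner: tuple
    max_corner: tuple

    def contains(self, point):
        """True if the point lies inside the box on every axis."""
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point, self.max_corner))


# A 20 cm trigger volume around a prop at roughly hand height.
prop_trigger = TriggerBox(min_corner=(0.9, 1.0, 0.4), max_corner=(1.1, 1.2, 0.6))


def on_frame(hand_position):
    # Called once per tracked frame with the CG hand's world position.
    if prop_trigger.contains(hand_position):
        return "hand touching prop"  # fire the interaction event here
    return "no collision"
```

In a real engine (Unity, Unreal, MotionBuilder) this per-frame test would be handled by the built-in physics trigger system rather than hand-rolled; the sketch just shows why the check is cheap and easy to wire up.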
Reflection / Opportunity:
• The technical aspects of this are way over my head, but I wonder if this could be done with multiple Google Cardboard to avoid the tethering requirement of Oculus?
• As in the Marcel Marceau experiment, are we able to learn faster/more through embodied experiences, i.e. could someone practice an interview or social etiquette this way?
• Could the viewer/reader/player use something like this to inspect props/evidence within the scene to help solve the crime? With the addition of more sophisticated facial detail and scanning at the input stage might we also have been able to study character behaviors?
• Could designers use a similar approach to experience thought problems and test critical thinking?
• Could we build a scene or environment with all the trappings of the “problem space”, especially one that is remote or in a faraway place, in which designers can immerse themselves for study?
• I wonder if MOCAP style labs will replace some studio spaces, i.e., desks and laptops, with untethered headsets and communal, embodied experiences/learning?
• What could we accomplish with scale? Could we either 'watch' or 'follow' and gain a full understanding of the entire body and weight distribution throughout the performance, without having to piece together anatomy that is off-screen?
• Matt Lewis suggested the podcast 'Voices of VR' - interviews with the movers and shakers of virtual reality... sounded awesome.
• Why did the character that we embodied during this exercise assume we were 'physical' (why not a droid/ghost/spectre, like Sally was)? That could help explain some of the physical/VR inconsistencies related to navigating the space.