Humane Object Agency: Implementation

In a previous blog post I described my introduction to the Humane Technologies project and my intentions for the pop-up week: exploring the use of interactive virtual reality to simulate an Internet of Things (IoT) filled space, with participants embodying the roles of the communicating smart objects inhabiting the environment. Leading up to the big week, I met with several of the participating faculty who gave me invaluable suggestions for additional readings, relevant pop culture references, and other perspectives on possible "motivations" for the IoT devices to be simulated in the project.

During the pop-up week Professor Michelle Wibbelsman and I met with Professor Hannah Kosstrin's dance class and explained the basic idea of the project. Michelle and I had come up with a few exercises/scores with different emphases for the students to try out. For example, we initially split the students into two groups, requesting that one group take a dystopian perspective on IoT devices while the other imagined a more utopian viewpoint. While the devices in the latter group focused on keeping the apartment inhabitant happy and comfortable, the former group embodied more of an overbearing nanny/salesperson role. For the initial round, we asked the performers to communicate primarily via motion. There was a strong tendency, however, to speak primarily to the person in VR and to communicate in general via anthropocentric means. For the next round we requested that communication happen only through movement, and primarily between the IoT devices, rather than focusing on communicating with the apartment's inhabitant. Additionally, we asked some performers to take on the roles of aspects of the communications infrastructure: one dancer was "Wi-Fi" and others were "messages" traveling through the network between the devices.

There was very little time for planning between each performance/simulation, so most of the resulting systems and processes were improvised during each performance. As a result, very little successful motion-based communication actually took place (though many attempts were made). However, these sorts of initial no-technology experiments in the classroom gave us a great deal of information and discussion points for our technology-based experiences a couple of days later.

Several people were involved in implementing the quickly assembled technological system. I initially specified the desired system features and set up the physical system components. Skylar Wurster (Computer Science undergrad) and Dr. J Eisenmann (ACCAD alumnus / Adobe research) implemented the interaction and control scripts in the Unity real-time 3D environment. Kien Hoang (ACCAD Design grad) assembled a 3D virtual apartment for the VR environment.

Professor Kosstrin participated in the role of the inhabitant of the VR apartment. At Professor Wibbelsman's suggestion we avoided naming this character, so as to avoid too strongly biasing our notions of their role (e.g. "owner", "user", "person", "human", "human object", etc.). We ended up frequently making a stick-figure gesture mid-sentence to refer to them during our discussions. The intent was that as the physical performers communicated outside of VR, there would be some indication inside VR that the virtual smart objects were talking to one another. A few visual options were implemented in the system: the objects could move (e.g. briefly "hopping" a small amount), they could glow, or they could transmit spheres between one another, like throwing a ball. Given the motion-based communications we were attempting with the dancers, I chose primarily the movement method to show the VR appliances communicating. This was implemented with a slight delay: if the smart chair was going to send a message to the smart TV, first the chair would move, then the TV would move, as if in response. I imagined this being perceived like someone waving or signaling, followed by the message recipient waving back.
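The call-and-response cue described above can be sketched in a few lines. The actual system was implemented as Unity scripts; this minimal Python sketch (all names hypothetical) only illustrates the sender-then-receiver timing:

```python
import time

RESPONSE_DELAY = 0.1  # seconds between the sender's and receiver's hops

events = []  # record of hops, in order, for illustration


def hop(appliance):
    """Stand-in for triggering the brief hopping animation in the VR scene."""
    events.append(appliance)


def send_message(sender, receiver, delay=RESPONSE_DELAY):
    hop(sender)        # the sender signals first, like someone waving
    time.sleep(delay)  # a slight pause so the exchange reads as call-and-response
    hop(receiver)      # then the receiver "waves back"


send_message("smart chair", "smart TV")
```

The delay is what makes the exchange read as a signal followed by a response, rather than as two unrelated movements.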

We investigated two methods for connecting communications in the physical and virtual worlds. In our first trials, we simply relied on an indirect puppetry approach. A student at a workstation (Skylar) watched the dancers, and when one started communicating with another, he would press an appropriate key to trigger the communication animation in the virtual world. For one of the later runs, Ben Schroeder (ACCAD alumnus / Google research), Jonathan Welch (ACCAD Design grad), and Isla Hansen (ACCAD Art faculty) all contributed solutions enabling the dancers to touch a wire to trigger a communication. While this had the advantage of giving the performers direct control of their virtual counterparts, the downside was that it limited their movement possibilities. Regardless, inside VR the movement of the appliances did not read as communication for our VR participant: "Why is the refrigerator hopping?" The brief session didn't allow time for experimentation with the other communication animation approaches, but I suspect some of the other modes might have fared better.
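The indirect-puppetry approach amounts to a simple mapping from operator key presses to communication events. Here is a minimal sketch with made-up key bindings (the real system's bindings and its Unity-side triggering are not documented here):

```python
# Hypothetical key bindings: an operator watches the dancers and presses a
# key to fire the matching communication animation in the virtual apartment.
KEY_BINDINGS = {
    "1": ("smart chair", "smart TV"),
    "2": ("smart refrigerator", "smart couch"),
    "3": ("smart TV", "smart refrigerator"),
}


def on_key_press(key):
    """Return the (sender, receiver) pair to animate, or None if the key is unbound."""
    return KEY_BINDINGS.get(key)
```

In the actual setup, the returned pair would drive one of the communication animations (movement, glow, or sphere transmission) in the Unity scene.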

Professor Wibbelsman led the group in discussion, and we quickly discovered that our goal of eliciting new ideas about future possibilities for these emerging technologies seemed to be a success: everyone had many strong opinions about what might emerge and big questions about what they might be more or less comfortable with. One further practical consideration that emerged was the need for dancers to use a separate "narration" voice to communicate with the person in VR, to tell them things they needed to pretend were happening in VR as the improvisation ran its course (e.g. a refrigerator door opening and giving them access to ice cream). Despite the pop-up providing an invaluable week for everyone to focus on prototyping projects such as these, one of the more surprising challenges was getting access to people's time. Many of the details of the project were the result not of well-considered design decisions but of what the person who popped in to work for an hour or two could accomplish before jumping back out to a different project.

Humane Object Agency: Part One

I arrived at the humane technologies project and group later than most of the participants. I was invited to participate in the pop-up week which would focus on virtual reality this semester. I've been curious about using VR technologies for interface prototyping, and this seemed like a great opportunity. As with all pop-up participants, I was encouraged to consider either joining existing project groups, or to bring my own ideas to the table.

Not having been part of the earlier discussions, I came with unbiased ideas about "humane technologies" that primarily involved evaluating people's interactions with the technology emerging around them, both positive and negative. In particular, I've been reading nearly daily newspaper articles about the "Internet of Things" (IoT). Usually these discussions center on trade-offs between convenience and privacy: e.g. your internet-connected devices are controllable via your smartphone, but they also report your engagement to advertisers for marketing purposes.

Discussions of the Internet of Things tend to predict that smart objects will increasingly communicate in complex webs of systems which may or may not have our best interests in mind. Just as it is often said that "you are not the consumer but rather the product" for companies like Facebook, networked smart objects like your TV might be "free" to use as well, in exchange for your allowing an infrared camera to monitor your apartment and track your eyes as you watch TV.

With this context in mind at my first humane tech meeting, I heard Professor Michelle Wibbelsman (Spanish & Portuguese) mention two things that resonated for me: indigenous peoples' beliefs about objects having agency, and also "Object Oriented Ontologies" (OOO). I was curious about the idea that some cultures may have already thought a great deal about how to live surrounded by objects that have agency. Additionally, "Object Oriented Ontology" is a relatively recent perspective on metaphysics that's attracted some attention from computer scientists working at the intersection of philosophy and human-computer interaction. OOO involves a de-centering of humans that considers physical objects, ideas, their relationships, and agencies all as equally valid objects of philosophical consideration.

At this same initial meeting, Professor Hannah Kosstrin (Dance) mentioned that her motion analysis class's graduate students would be available to participate in projects during the pop-up week. Years ago I was fascinated by a presentation I’d seen on "service prototyping" which used actors as participants for interactive system design. I proposed that Hannah's students could embody the roles of communicating IoT devices, exploring the possibility space of system agency. Many IoT species will converse primarily with other smart objects and networked systems, rather than interacting directly with people in their space. What might such devices be "talking" about? What could their awareness and motivations encompass in different future scenarios?  

Additionally, I envisioned another participant immersed in a VR apartment environment, experiencing representations of these devices communicating around them. For example, there might be an indication that a smart TV, smart refrigerator, and smart couch were all observing aspects of their environment and "doing their job," whatever that might be. What would it be like to live in such a space?

I suspected that embodying this simulation/performance might lead to thought-provoking discussion, helping us contemplate aspects of such emerging technologies and trends in ways we might not otherwise have considered from mere thought experiments. I also hoped we might gain insight into the humane-technology aspects of IoT, beyond the current discussions of privacy vs. convenience. Lastly, I hoped to gain experience with the usefulness of VR for interaction design prototyping. In a follow-up post, I'll discuss the implementation and outcomes of the pop-up.

Popping In, Popping Out: Reflections on the Humane Technologies Pop-Up Week

From the outset of the Humane Technologies: Livable Futures Pop-Up Collaboration, I wondered what my role would be in it. As a specialist in rhetoric whose interest lies in exploring how technologies have shaped our communication practices throughout history, I’ve been trained to explore these questions from a position that’s somewhat outside and above the immediate action. To an extent, I set out to maintain this stance, intending to watch from the sidelines as an impressive group of technologists, designers, and artists came together in the spirit of play, exploration, and creativity to question how contemporary technologies can be utilized to promote a more compassionate, socially engaged future. But as the week unfolded, I found myself caught up in the gravitational pull, eventually diving in and joining the fray. 

As part of the reporting team (Peter Chan, Michelle Wibbelsman, and myself), our goal was to observe and document the processes of collaboration as they unfolded throughout the ACCAD space: the brainstorming, the concept building, the rapid prototyping, the problem solving, the play-testing, the refining. Initially, I found myself focused on the technologies themselves, the instruments that facilitated these processes. The space was populated by a heap of impressive gee-whiz tech, from VR rigs and 3-D printers to interactive touch displays and projectors. Surrounded by this technological infrastructure, it’s tempting (and perhaps even understandable) to forget about the actants, the human agents, who use that infrastructure. I mentally checked myself and popped out of the activity to observe from a different perspective.

I found myself watching how bodies circulated during the week: frenetic, chaotic, playful, eventually leading to patterns… leading to purpose. The open layout of the ACCAD studios facilitated this movement, where people working diligently on one project would be pulled into another for some quick feedback, then to another to help with a demo. Classes would move in and out of the space, students contributing to the tasks at hand. 

I popped back in. I played with data visualizations on a large touch screen, contributed family photos to help Maria build a patchwork landscape for the Fly Like a Bird heron flight simulator, offered feedback as Scott and his team developed their Digital + Physical Games project. I also worked with Alan as he developed his Method of Loci VR and multi-touch display environment (for this project, I contributed the idea of the classical/medieval rhetorical technique called the memory palace, a method of remembering parts of an oration by mentally placing key points in an imaginary building). This project explores the possibilities of externalizing our individual memories and experiences in a shared, interactive virtual space. I think of this project as a microcosm of what the entire week was about: connecting, creating spaces for empathy and understanding.

I popped back out. As Peter, Michelle, and I talked about what we were observing as ideas took shape, as process yielded product, we leaned on metaphors, symbols, and imagery that reflected this dynamic: the double helix structure of DNA, Chinese ideographs depicting “tree” and “forest,” pictures of a copse of trees, an individual tree with serpentine root structure, imagery of tornados, and Robert Smithson’s earthworks sculpture Spiral Jetty, among others.

At the time, I wrote in response to Peter as he shared a collage of these images:

I’m struck by the resemblances evoked by these different image groupings: curvilinear, evoking a sense of motion/process, "natural." In the sense of conveying a visual identity for whatever it is that humane technologies want to become (despite what we might *want* them to become), these images collectively suggest a common ethos or spirit.

Additionally, these visual metaphors all work together if we consider how systems and ecologies operate, and, more to the point, how we as subjects observe them in operation: from a certain distance, perhaps they appear orderly and unified, but zoom in and you might see frenetic noise or even chaos; zoom in even further, and you might realize there's actually an elegance (perhaps even design?) to that chaos... 

Popping back in. I’ve come to realize that all of this spiraling imagery is not just a metaphor, but a way of mapping the week-long activity of the Pop-Up. In other words, this movement of bodies not only reflects on a symbolic level how ideas emerge, change, and lead to creation; it is *literally* a key mechanism by which they are formed. Hands type and push buttons to change code, arms wave in the midst of gameplay, whole bodies undulate in the service of performing a dance routine. Witnessing firsthand (and even participating in) this whirlwind-in-a-snowglobe, I realize that this dynamic is at play when we scale up to consider culture at large. The problem is, we don’t always recognize that; perhaps the solution lies in deliberately attempting to bring about those moments of recognition more clearly and more often.


Contemplating the future.

How often do we contemplate the future? In this sense, I'm not referring to our shopping list, or our student loans, getting promoted at work, starting a family, finding a soulmate, or preparing for retirement. I exclude these types of future-oriented concerns because most of us feel that we have some modicum of control over their outcomes. We can see a path by which our individual agency can make an impact. So, when I ask the question about contemplating the future, I have in mind the things that are progressing in labs and research institutions all over the world, things like autonomous vehicles, robotics, bioengineering, the Internet of Things, human augmentation, and artificial intelligence. I don't believe we think much about these futures, but in many ways they are potentially the most transformative and could affect our lives as much as the futures with which we do contend.

As a professor of design, my objective is to prepare students to be conscientious problem solvers and creators of the physical and informational environments that surround us. This week my students in Collaborative Studio 4650 provided a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. I will unpack some terms. The term guerrilla future stems from what Stuart Candy (2010) calls guerrilla interventions.

augmented reality glasses


“Its aim as a practice is to introduce… possibilities to publics that otherwise may not be exposed to them, or that, while perhaps aware of the possibilities in question, are unable or unwilling to give them proper consideration. It is about enabling people to become aware of and to examine their assumptions about futures -- possible, probable or preferable -- by rendering one or more potentials concrete in the present, whether or not they have asked for it.” [Emphasis added].

Our goal was to present a believable future—in 2024—when ubiquitous augmented reality (AR) glasses are part of our mundane everyday. We made the presentation in Sullivant Hall's Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students from fields ranging from the humanities to business.

In contrast to a gallery show or an academic or corporate workshop, which attract voluntary participants, guerrilla futures are uninvited. The objective is to bring awareness of futures thinking to a wider audience and perhaps to engage them in social debate. The second term to unpack is design fiction, a research methodology for designers whereby we create believable artifacts from the future, as well as other media, to craft fictional futures. If these props are experiential, it is possible that the audience could become cooperative participants and agents in the intervention.

the artificial intelligence


The presentation lasted about 30 minutes, after which we pulled out rolls of white paper and markers and divided into groups for a more detailed deconstruction of what had transpired. The discussions were lively and thought-provoking. Though it is too early to have completed an exhaustive analysis of the event, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver and his co-authors wrote in The Critical Engineering Manifesto (2011):

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user's dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in discussions we quickly identified the ways in which our current technological devices engineer us. At the same time, we expressed, to varying degrees, our powerlessness to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent or adaptive in a positive way. En masse, they are neither. Kurzweil stated that,

“We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.”

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. One idea raised in a discussion group we labeled effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it will already be too late, either because the technology is already ubiquitous or because our habits and procedures solidly support that behavior.

the design team

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful.

This is necessary research for designers as well as the rest of us. The things that are going on in laboratories somewhere will, in gradual steps, affect us. In this context, designers must be cognizant of these more elusive futures. They may wish to leverage them, but they must also be wary of them. Why wary? Because we make things. Design affects culture and culture, in turn, affects what we design. It has always been this way. It is always the designer's responsibility to be certain that there are no errors in the software or the material, to ask “What could go wrong?” But how often do we ask what the ramifications of our design would be should it go right? What are the entailments of scalability and ubiquity, and of the systems, often complex and gnarled, that result from successful designs? Because successful creation has the tendency to become ubiquitous, to influence behavior, and to transform society, this becomes a design responsibility, but also one that pertains to virtually every other discipline. We need to pay attention.

Hopefully, design fiction and guerrilla futures can become a more widespread methodology to provoke discussion and debate and make us more active participants in what our future should be. Special thanks to the Humane Technologies Collaboration for allowing us to create this future provocation.

E. Scott Denison | Assistant Professor | Department of Design | The Ohio State University

Better Futures

With funding from the OSU Discovery Themes, ACCAD will be the site of a Humane Technologies Pop-up Collaboration the week before spring break. With humane technologies as our foundation, we will be focusing on the theme of Livable Futures.

Join us in creating artworks that fill ACCAD, the campus, Columbus, and beyond with messages of compassion, social justice, livability for diverse human and non-human life, and multi-sensory technologies for better futures! 

Humane Technology Pop-Up Collaboration: Livable Futures

March 6-10
All of our faculty, staff, and GAs, and many of our classes, will be working on creative projects throughout the week. Like a hack-a-thon or a charrette, the purpose of this week is to create a focused time outside our busy lives for creative collaborative action. Students from the environmental humanities and human rights research groups will join us, as well as alumni guests who are taking time out from their work at Google, Adobe, and their own design firms; they will enjoy connecting with you all.

Throughout the week we will be sharing and documenting the prototypes, artworks, and advancements made. All of our working spaces (the open collaboration rooms, SIM Lab, Motion Lab, conference room...) will be busy, and there will be more people around than usual.

Rosalie Yu's visit kicked off the events, and we have Ohad Fishof and Noa Zuk in residence next week as a warm-up for our Pop-Up Collaboration March 6-10; we will have more visitors later in the semester.

Here's where you come in.

Let’s create visions of better futures and take creative action together. 
What creative humane technology interventions can you imagine and what solution stories can you create for better, more livable futures? All mediums and methods welcome.

We want to see your videos, poems, essays, performances, animations, posters, drawings, games, prototypes, sculptures, virtual environments and beyond. 

Pitch us your ideas throughout the week. Strong ideas that best capture or comment on livable futures will be supported with funds for supplies, exhibition online and in future gallery events, and input, ideas, and troubleshooting support.

And if you'd like to add your efforts to our projects, we will be demo-ing and discussing them Monday morning 3/6, 9:30-Noon, and you can stop by any time to see what we're working on.

More info:

Humane Tech Pop-Up: Livable Futures
March 6-10, 2017
Humane technologies do no harm; they are creatively open-ended, socially connected, and access the full multi-sensory capacities of human intelligence. Humane tech creates compassion and well-being, embraces complexity, enhances collaboration, and is radically inclusive. With these humane working assumptions, we will focus this Pop-up on livability in the 21st century.

Posthuman not Anti-Human
In her book on posthumanism, scholar N. Katherine Hayles critiques the fact that many visions of the future “point to the anti-human and the apocalyptic” and calls us to action, showing that “we can craft other visions that will be conducive to the long-range survival of humans and of the other life-forms, biological and artificial, with whom we share the planet and ourselves.”

Solution Stories
Activist Frances Moore Lappé calls us to create solution stories: “Facing unprecedented challenges, we can choose to remain open to possibility and creativity—not mired in despair. Surely, the latter is a luxury that none can afford. We can create and enthusiastically share a solutions story today, every day. It is a revolutionary act.”

Monday-Thursday Collaborative working sessions 9am-6pm

  • Monday 9:30-Noon Demos to kick off the projects and afternoon working sessions
  • Weds 12:45 in the Motion Lab: object oriented ontologies embodied exploration
  • Thursday 9:35am in 320: Design Futures provocation (design and humanities students, others welcome)
  • Friday all day demo-ing and documenting results

Email zuniga-shaw.1 for more info

Sandbox Sessions Summary

Professor Norah Zuniga-Shaw facilitated a series of Sandbox sessions during the Autumn 2016 semester for the Humane Technologies team to start collaborating and asking research questions together. The expectations, experiences, and reflections stimulated by these Sandbox sessions are presented in the blog posts for each Sandbox. These open-ended collaborative sessions resulted in the key research frameworks and humane technology definitions that we will take into the Pop-Up session March 6-10, 2017.



Sandbox: Motion Capture with Vita Berezina-Blackburn

Wednesday, November 30, 9:30-11:30am in the ACCAD Motion Lab

Attendees: Vita Berezina-Blackburn, Alex Oliszewski, Norah Zuniga Shaw, Peter Chan, Scott Swearingen, Scott Denison, Alan Price, Mindi Rhoades, Hannah Kosstrin, Isla Hansen

Sandbox Framework for Collaboration:

Investigation of approaches for presenting narratives in full-body, room-scale VR scenarios driven by practices in theater production and acting. The Sandbox will include demos of ACCAD's current state of available technologies and existing VR experiences from the Marcel Marceau project, as well as related creative practices. Tech: Vicon motion capture system, MotionBuilder, Oculus.

Anticipation / Expectation:

• VR, motion capture, and training performers, live storytelling in physical and virtual worlds, theater artists driving VR creation

Disposition / Experience: 

Thoughts gleaned from participants during and after the sandbox: 

• Two characters were having a conversation in a science fiction future and I was able to walk around as an invisible third-party (fly on the wall) and observe.

• The conversation was secondary as I was exploring the view and props from this high-rise virtual set design. But I could have easily replayed the scene, taken a seat beside them and listened more intently the second time.

• Is this a significantly more entertaining means of experiencing narrative?

• The thought of 'stepping' into someone's experience was very interesting, as was the question of whether I would be more likely to follow his mesh or his shadow.

• When doing 180-degree turns in VR I need some sort of reflection so I can see his movement when he goes off-screen.

• Having multiple instances works well pedagogically or as a learning environment, but not so much from the perspective of "appreciate this historic performance."

• Having a CG hand that can interact with the environment would be useful and engaging. Placing an invisible trigger-box around it could easily test for collision.

• Using headphones would connect with the experience better because audio would be more contextually sensitive. For instance, MOCAP lab walls bounce sound differently than the tight quarters I was experiencing in VR. Scale is always an issue.

• In some ways this reminded me of 'manual cinema', but the audience would also need headsets to approach parity with actors.

• The concept of 'priming for the meta-aesthetic' was very interesting.

Reflection / Opportunity:

• The technical aspects of this are way over my head, but I wonder if this could be done with multiple Google Cardboard headsets to avoid the tethering requirement of Oculus?

• As in the Marcel Marceau experiment, are we able to learn faster/more through embodied experiences, i.e. could someone practice an interview or social etiquette this way? 

• Could the viewer/reader/player use something like this to inspect props/evidence within the scene to help solve the crime? With the addition of more sophisticated facial detail and scanning at the input stage might we also have been able to study character behaviors?

• Could designers use a similar approach to experience thought problems and test critical thinking?

• Could we build a scene or environment with all the trappings of the “problem space”, especially one that is remote or in a faraway place, in which designers can immerse themselves for study?

• I wonder if MOCAP-style labs will replace some studio spaces (i.e., desks and laptops) with untethered headsets and communal, embodied experiences/learning?

• What could we accomplish with scale? We could either 'watch' or 'follow' and have a full understanding of the entire body and weight distribution throughout the performance, without having to piece together anatomy that's off-screen.

• Matt Lewis suggested the podcast 'Voices of VR' - interviews with the movers and shakers of virtual reality... sounded awesome.

• Why did the character that we embodied during this exercise assume we were 'physical' (why not a droid/ghost/spectre like Sally was)? That could help explain some of the physical/VR inconsistencies related to navigating the space.

Sandbox: Whitebox with Scott Swearingen

Wednesday, November 16, 9:30-11:30am at ACCAD

Attendees: Scott Swearingen, Kyoung Swearingen, Norah Zuniga Shaw, Alice Grishchenko, Stephen Turk, Mindi Rhoades, Alan Price, Peter Chan

Anticipation / Expectation:

• Connecting virtual and physical experience…

• Digitizing the physical world using photogrammetry has become part of our common vernacular in the creation of digital characters, assets, and, more recently, full environments. However, this technology is often employed from a production-oriented perspective that is more design-agnostic than design-centric. By incorporating 3D-printing into the process, our new pipeline seeks to preserve design intent and to help maximize the value that designers as well as artists contribute to the creation of virtual environments.

• The point at which we deviate from typical production pipelines is after the creation of the white-box. The white-box is a low-resolution collision model that serves as the foundation for all interactions between the 'player' and the 3D world in terms of mechanics, collision, layout and flow. Because 'player' interactions within virtual spaces are so inextricably tied to the collision model of the white-box, using a 3D printer would ensure that the collision model's integrity would also be preserved as it was converted to a physical format. With a physical print of the white-box in hand, sculptors and painters can now create artwork for it, and focus their efforts in a more design-oriented approach. Once the physical sculpture is complete, it is digitized using photogrammetry and integrated with the original white-box.

• This workshop aims to discover opportunities that broaden collaborations between physical and digital artists in computer graphics production. It also seeks presenters who are interested in utilizing existing technologies (such as 3D-printing and photogrammetry) in new and innovative ways. In addition, our pipeline is visually very flexible, and should be of great interest to a wide spectrum of artists, educators, and studios.

• Can we make the physical component more ‘player-facing’ rather than only ‘developer-facing’, as dictated by the process?

• What can we discover about other prototyping models that could benefit from our process?

• What alternative digital-physical methodologies could help to steer our research?

• What are the best ways to develop our shared understanding and collaborative relationships?

Disposition / Experience:

• The Whitebox is mechanics (verbs)-driven in its employment of metrics, but more narrative-driven in terms of layout and flow.

• Build in a modular fashion to help reach visual parity with concept.

• How adaptable is the process to varying skill sets, how easily can it be experienced front-to-back?

• The process of alignment is the 'grayest' and loosest step, and a beginner could find it difficult to succeed here.

• Are there opportunities to receive (or design with!) other sensory input, especially considering the physical<->digital pipeline?

• Much potential to evolve (and expand) into other domains.

• Desire exists within group to make the player-facing components more physical, not just the developer-facing ones.

• Plan to make an analog prototype.

• Very curious about applications (from games to augmentation with masked animation for narrative and atmospherics).

• Has potential to draw on multiple disciplines.

Reflection / Opportunity:

• Opportunities and interest overlaps with architecture. This is the future of architectural presentation.

• Narrative design is at its best when complemented by mechanics (and vice versa).

• Terminology can be an obstacle when communicating process across disciplines.

Sandbox: Kinect/Oculus Playdate with Alan Price

Wednesday, September 28, 9:30-11:30am in the ACCAD SIMLAB

Attendees: Stephen Turk, Candace Stout, Peter Chan, Scott Swearingen, Scott Denison, Alan Price, Norah Zuniga Shaw, Isla Hansen, John Welch

Anticipation / Expectation:

• To promote discussion and questions about full body engagement and motion in VR, capturing action with playback and real time drawing, and representation in VR spaces...

• To pose the question “what is this for?”

• To explore the VR format (presumably a current interest in use of HMDs with head tracking).

• To explore embodiment in virtual space; multi-sensory compared with full-body engagement and representation (point-of-view/gaze).

• To explore the recording of motion (playback, reflection, analysis, of how participants move and engage over time).

• To explore the internal development (starting the process of developing tools for portable templates and future sandboxes created in-house).

• To focus on the user reflecting upon his/her own body as the active element in the space, independent of any encumbrances such as hand-held wands or game controllers.

Disposition / Experience:

• How people are able to physically engage in a virtual space in interesting, new, creative and/or healthful ways.

• What makes the VR Player do things that are fun to watch as well as fun for them?

• How desired motions could drive the game mechanics such as a desire for people to extend the range of motion, to change levels, to make cross lateral patterns and balance?

• Could additional bodies in space in the VR experience (either inside or outside) create a more interesting learning environment for a viewer / user / player?

• Could you create a dance score with moving objects in the virtual realm?

If so -- what are these objects?

• Who is our intended / ideal Audience? ... How do we want our experience to relate to and possibly change who they are or how they think?

• How can we enhance the experience to make evaluative design decisions within the virtual space?

• How to teach game design through new technologies that are not yet fully realized. (SS)

• How might we navigate the world better than with handheld devices?

• Could it be that games are real, and toys are not? ... The context is fiction, but the decisions are real - and lasting.

Reflection / Opportunity:

• VR player as performer...

• We felt that interacting with our own recorded motion and the traced forms made us more aware of our bodies (for better or worse).

• Obviously modeling of any kind is a richer experience in 3D, if I can build in layers and then dimensionally look through them.

• Recording motion was a hit. ... I want to go back in now and try to choreograph those figures.

• We were toying with the idea of a human Tetris-style game that did not require a lot of space to play, where the environment could scale to your available real-world play space.

• It was very interesting for me when I began to think about physical motions as ‘player mechanics’ in a game-related environment.

• The third person perspective and omniscient high viewpoint were of interest.

• I really, really wanted my avatar to be an ‘it’.

• We are interested in play spaces that are physically, socially and creatively engaged.

• I’d like a humanist to help think about narrative and ethical contexts of some of this work and the relationship to post-humanism.

• VR work in conversation with Ghostcatching, a kind of partial reconstruction, would be fun.

• I’d like to make a 3d drawing experience that takes IMPROV TECHNOLOGIES into VR.

• I’d like to make something that invites cross lateral motion.

• The big thing I am thinking about is the place of movement qualities in a VR environment and how training a user to engage movement qualities could lead to more empathetic interactions with the world from a renewed understanding of one’s own movement proclivities which inevitably connect to emotions (how do humane technologies work toward that end). I am thinking specifically from the vocabulary associated with the Laban systems for movement qualities.

• I’m considering this balance as to how each medium [movement improvisations and VR generated environments] retains its integrity, but enhances the best traits about the other.... perhaps this ties into the discussion of empathy and self/group awareness.

• I am thinking about the following: the relationship between avatar and player; player-driven goals; connections between environments; visual themes; activities; and the external world.

Sandbox: VR Playdate with Alex Oliszewski

Friday, September 23, 1-4pm in the ACCAD collaborative space (aka the living room)

Attendees: Ben McCorkle, Norah Zuniga Shaw, Alan Price, Peter Chan, Alex Oliszewski, John Welch

Anticipation / Expectation:

• Getting started by experiencing a wide range of VR games that invite full body motion, allow creative open-ended play, explore space and the brain's sense of motion and ask how they might be re-performed or hacked for artistic creation.

• Connection and creativity in VR, and pushing at what they can do.

Disposition / Experience:

• Tension between my body’s sense of space and the actual range in which I have to move (players “backing up” in order to see something better in certain game environments). Issue of scale. Teleporting is dissatisfying. 

• Certain actions in the game inspire level changes, and Tilt Brush is amazing as inspiration for motion. It is great to watch people draw in 3D space.

• Play between what is happening virtually and in physical space/time and learning the etiquette of VR takes time. 

• Play between what the brain understands as actual experience is an on-going question including the potential for manipulation, illness, changing experience forever (matrix dystopias). 

Reflection / Opportunity:

• What inspires motion in these VR environments and what kinds of motions do we want to encourage if any?

• If post-human is not anti-human then how indeed might we want these technologies to evolve?

• How do locomotion and teleporting in VR impact the sensation of space? What would be better?

• What are the inspired desires for multiple sensors on the body in the VR environment?

• How might knowledge in the performing arts be used to enhance embodied creativity in virtual spaces?

• What about experiential process in dance and things that are paced for exploration and self-discovery?

• How can we use the potential for manipulation in VR (particularly the brain’s sense of motion) as a space for play and well-being?

• What about world creation in virtual environments and using dance improvisation scores as world builders?