Student Fellows Statements


The Collaboration for Humane Technologies seeks to foster interdisciplinary awareness and skills and to conduct arts-driven research into technology in service of well-being on the planet. An important branch of this research is the inclusion of student research fellows. Fellows propose a relevant project or indicate interest in contributing to one of our active projects, and they become part of our research community by participating in events and engaging with faculty throughout the year. Events include attending and participating in sandbox collaborations and guest workshops and engaging meaningfully in our spring Pop-Up Collaboration. Below are some selected research statements and questions from our 2018 Well-Being fellows.

Dreams Cleaver

My first thought about research on dementia was to make a documentary about my experiences as a caregiver to my mother. But I was also interested in showing her side of the story. How did she feel? What could I show an audience that would help them understand what it was like to have dementia or to care for someone with dementia? Could that understanding help create better care for the patient and thereby improve their quality of life? Realizing a live-action documentary would pose many problems, I turned to animation. Animation's ability to juxtapose reality with the confused state of dementia showed potential to me, and I created a snippet story that set real audio against animated footage representing the repetitive symptoms of memory loss.

The main issue with this approach was that the audience remained disconnected from the experience. How much prior knowledge of dementia did they need to understand what was going on in the film?

I thought about an interactive tool, such as a website with game-like qualities, that could play with the user's sense of reality. For example, the user is asked to find the banana; they select an image of a banana, but then they are told they are wrong, and the image they selected has changed when they see it again. This would continue through levels (stages of dementia). My intention was to work toward using this with actual caregivers, as a training tool within care facilities or the home, in order to increase their knowledge of patients' needs.

When I entered Alex and Vita's VR class, I saw the opportunity to develop this idea through VR. What appealed to me in this class was the idea of a live person interacting with the user. This created a personal approach, like a documentary with a story, and a guide through the experience who could help with understanding. It had the potential to add an element of personality by letting users see objects and images they could relate to their own lives. It also easily gave the user the ability to experience altered realities without having to "suit up," as in the examples of Alzheimer's simulations I have seen in the past (see links below), and it offered an easy "out" if they felt scared or uncomfortable with the situation. In those previous Alzheimer's simulations, the user's hands are bound, they are blindfolded, and they walk around in a real space, which has the potential to cause injury if they run into an object. My Dementia Experience in VR proposes to create a safer space for the user to explore.

After witnessing users in the prototype for the Dementia Experience as the patient, I saw potential to extend the experience to caregivers or friends of those with dementia in order to explore other perspectives. I would like to review the original script and pull out more ideas that might be added to this prototype (due to time constraints and technical difficulties, some ideas were left out). It would also be beneficial to research which symptoms those with dementia most commonly experience. Are there symptoms across the stages that could be addressed in this experience? How can the Dementia Experience benefit those unfamiliar with dementia who are facing a diagnosis for themselves or a loved one?


Kevin Bruggeman

Roughly 44% of Americans suffer from chronic stress, according to the American Psychological Association. This means that nearly half of Americans are increasing their risk of heart disease, depression, weight gain, early aging, and more every day. When considering ways to address chronic stress, I looked at how it is being approached in the world today, which brought me to research on mindfulness and meditation. The goal for my project up to this point was to create a virtual experience in which an individual could be taught how to meditate, in an environment that promotes meditation. The project took form as a guided meditation session in a virtual reality simulation set in a Japanese Zen temple; these temples were designed for meditation and mindful thinking.

The end of my first-year project was seemingly a success, but I began thinking, "What can the VR world provide that the real world can't?" I wanted to utilize the seemingly endless possibilities of VR to help individuals become more mindful and aware.

The direction I am going right now is divided into three experimental projects. First, with the help of Skyler Wurster, we have developed a way for the virtual world to detect and react to the heartbeat of the player. The intention, at this point, is to let users focus on their heartbeat by having it control the adaptation of the environment around them. The senses involved in this effort are sight, hearing, and feeling your heartbeat all around you in the virtual simulation.
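The idea of the environment adapting to the player's heartbeat can be sketched in code. The following is a hypothetical, simplified illustration in Python, not the project's actual Unity implementation; the resting/maximum heart-rate bounds and the "pulse intensity" parameter are assumptions for the sake of the sketch:

```python
def beats_to_bpm(beat_timestamps):
    """Estimate beats per minute from a list of detected beat times (seconds)."""
    if len(beat_timestamps) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(beat_timestamps, beat_timestamps[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def environment_pulse(bpm, resting_bpm=60.0, max_bpm=120.0):
    """Map heart rate onto a 0..1 intensity that could drive the environment
    (for example, light pulsing or ambient sound level), so the user sees
    and hears their own heartbeat reflected around them."""
    t = (bpm - resting_bpm) / (max_bpm - resting_bpm)
    return max(0.0, min(1.0, t))  # clamp to the usable range
```

In a real engine, something like `environment_pulse` would be evaluated every frame and fed into lighting or audio parameters; the clamping keeps sensor noise from producing jarring jumps in the scene.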

The second project will work with the Emotiv EEG headset. The intent with this technology is to visualize emotion. The Emotiv headset can detect different brainwaves and output that data in a way that can be transferred into the virtual world. I intend to develop a way for individuals not only to see the visual effects of their emotions but also to recognize and control those emotions.

Finally, the last project I want to explore is a way to create a user driven experience using the player's breath. The concept for how I wish to utilize this method is still under development. One way that this method is being utilized right now is by the creator of Deep VR. Please follow the link for an understanding of how they use breath as a mechanic in their gaming experience:

The purpose behind this last experiment is to get the user to focus on their breath. Being able to look inward and focus on your breath is one of the most important concepts in meditation.


Claire Melbourne

I am excited to be a contributing student fellow to the Humane Tech Well-Being project this year. Having participated in some sandbox collaborations in Norah's Multidisciplinary class last spring, I will continue to explore and brainstorm creating situations, strategies, and environments that facilitate creative collaboration in an interdisciplinary format with students and faculty in the Humane Tech Group. I want to consider who speaks, who acts, and why, and to experiment with and discuss what kind of scaffolding supports greater collaboration, risk-taking, and follow-through with ideas of parallel and mutually supportive process and product.

A project that I would like to connect to the Well-Being theme this year uses fort building with household materials, dance-making, and some Isadora patches as entry points for investigating a sense of interiority, boundaries, and permeability in the body and in the material and digital world. I'm considering how boundaries are an important element of care for self and others, and also how a desire for structure, fit, and a sense of place can manifest in blocking or keeping out otherness. In response, I'm curious about other ways this desire can be nourished that are playful, expansive, evolving, and inclusive.

Right now, I am working with four other dance students, two MFAs, Brianna Johnson and Kat Sauma, and two BFAs, Lauren Garrett and Emily Kilroy, developing movement and building scores stimulated by arrangements of objects as well as interactive sound and video elements affected by human movement choices through the use of a Kinect and other computer sensors. I am pulled between two focuses: our individual bodies and the sense of limitations and boundaries there, both physical and psychical, and the sort of force fields or insider environments we create and delimit to share with others, with their potential to create an expanded and shared sense of interiority. What might that extend into as far as greater mutual care, empathy, and, by extension, critical thinking and action? An essential follow-up to what I'm working on right now is extending this practice to a larger public, theorizing and playing with limits of the body beyond the context of contemporary dancers here at OSU.


Diego Arellano

During the autumn of 2015, as an incoming freshman, I became the student curator of OSU's Andean and Amazonian Cultural Artifact Collection, under Dr. Michelle Wibbelsman's direction. The collection was acquired by the Center for Latin American Studies with Title VI federal funds and is permanently housed in the Department of Spanish and Portuguese in Hagerty Hall. In addition to skills learned from my Arts Management major and History of Art minor, I brought to the collection cultural sensitivity from my Ecuadorian heritage and, as a child of the tech boom, an understanding of a new generation of students, their interests, and their learning orientations.

Professor Wibbelsman and I have worked together to catalogue and organize the physical display of the artifacts. In addition, we developed interactive digital features, including a digital story map and digital narratives, to support the curated exhibit. Beyond providing contextual information for the items displayed, one of our main concerns was how to give access to artifacts that were too fragile to bring out of the display cases for hands-on workshops. We were also focused on platforms that allow students to interact in meaningful ways with the collection.

In spring of 2017, ACCAD invited our participation and enabled the use of photogrammetry to create digital models of select artifacts. Through this unique collaboration I was able to familiarize myself with new software and the process of creating 3D models. With the help of Jonathan Welch, a graduate student at ACCAD, we created 3D digital models of two items in the collection. I presented the outcomes of the project at the ACCAD Open House on April 7th, 2017.

The invitation to join the Humane Technologies Discovery Theme formally as a Research Fellow this year will allow me to continue working on 3D models of our artifacts. In addition to uses for the exhibit itself (including a pop-up traveling exhibit we’re working on), we’ve been invited to include these models in an experimental virtual reality environment called Method of Loci.

For the fields of Arts Management and Policy, a crucial question is whether and how audience members interact with a particular piece of art or interactive exhibit feature. I am curious to see how the features I work on in the context of the Humane Tech Theme engage the audience. Is their experience positive? How do we measure that? Do these interactive features provide an added experience with the exhibit that other features do not? Another research question is whether these features make our audience come back again, and whether they share these features with their friends through social media.

With these questions in mind, I plan to use the fellowship time to create two or three more models of select artifacts by the end of the autumn semester. Dr. Wibbelsman and I will also be working with OSU Libraries' Knowledge Repository to permanently house URLs for the interactive features in order to preserve them for future uses. We are using tiny URLs and generating QR codes that make the features easy to access with cell phones and other personal devices and that can track the number of viewers.
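The view-tracking side of the short links could work roughly as sketched below. This is a hypothetical, stdlib-only Python illustration (the class name, codes, and URLs are invented for the example); in practice the counting would presumably be handled by the URL shortener's own analytics:

```python
class ShortLinkTracker:
    """Map short codes (printed as QR codes) to feature URLs and
    count how often each code is resolved by a visitor."""

    def __init__(self):
        self.links = {}  # short code -> full feature URL
        self.views = {}  # short code -> number of resolutions

    def register(self, code, url):
        """Create a short code for an exhibit feature."""
        self.links[code] = url
        self.views[code] = 0

    def resolve(self, code):
        """Called when a visitor follows the short link; returns the
        target URL and increments that feature's view count."""
        self.views[code] += 1
        return self.links[code]
```

Each scan of a QR code then doubles as a data point about which interactive features audiences actually use.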


Laura Rodriguez 

My thoughts have been overstimulated with possibilities that keep expanding. I am currently interested in the information surrounding VR, projection, interactive spaces, environments, movement-based VR, and 3D renderings. I am not sure which question I will fully surrender to just yet, or whether the subject relates to choreographic research, interactive spaces, or creating a personal experience, or all of them. More brainstorming for now.

In another work, I made an interactive installation that audiences could add to over the course of a month. The audiences that came to the museum could contribute, change, or take away something. These examples, while related to the dance field, have left me curious for continued information on the merger of space, collaboration, and the technological-human subject.

Back to the ideas for this project. I am going to flesh some out here, since I do not know what is possible at this stage.

  • A virtual scrapbook that can be expanded into the VR space as a full 3D image or recreation. This would also allow a movement experience to happen from user to program during the interaction.
  • Drawing on memories: a memory tool that allows you to walk back through a VR-rendered world of your memories. Interactable: change colors, people, and happenings.
  • A collaborative installation with the Intergenerational Center. Together we design, draw, paint, capture, interact, and document the work through social media, live stream, Kinect, and projection.
  • A screendance film made with the community of the Intergenerational Center. Maybe this also ties into the documentation of the collaborative installation.
  • A choreographic VR tool that visualizes pathways, articulations, and temperaments to establish a visual. This would also create a personalized environment to work in as a choreographer, possibly incorporating a drawing, painting, and writing space for ideas.

Well-being overall, for me, means a partnership between human and machine that enhances life for all people. So much of technology today feels defined by the amount of time it takes from the day. Creating something with technology that gives back, supports, or challenges the conventional knowledge surrounding this void seems like a radical act.

Interview with Chris Landreth


Ohio State's College of Arts and Sciences presented a fantastic interview with award-winning animator Chris Landreth and professor and principal investigator Norah Zuniga Shaw ahead of his workshop and public screening here at OSU. Find a snippet below and the entire article here.

Landreth's master class


"A recipient of multiple honors — including an Oscar for his breakthrough 2004 animated short Ryan — and the progenitor of Making Faces, a groundbreaking course on facial animation, Landreth is one of the most influential animators working today. In advance of his visit, Norah Zuniga-Shaw, professor and director of Dance and Technology and the principal investigator for Humane Technologies research, spoke with Landreth about his work.

Norah Zuniga-Shaw (NZS): I’m excited to talk with you about your upcoming residency at OSU/ACCAD and what “humane” might mean in relation to your work. First and foremost you are a storyteller, and the stories you tell are humane in the way you use computer graphics to reveal people’s inner lives. You address issues of addiction, marital dysfunction and grief in your work in nonjudgmental ways.

Chris Landreth (CL):  You’re talking about The Spine and Ryan, probably. My ultimate goal is to tell a really great story, and the best and easiest way to tell a really great story is to tell it with empathy. In order for the story to succeed, I have to create a sense of empathy for the audience to have a way into the stakes of the story. The nice thing that comes out of that — at least I hope — and I’m glad that you picked that up, is that there is a sense of compassion and caring. "

Feel better with Michelle Ellsworth in the New York Times


Jan. 2 saw a fabulous article about a fabulous artist, Michelle Ellsworth, who was here at OSU for a week this fall as one of our guest artists for Humane Technologies performance research. I suggest this article as a great read for anyone needing to feel better in this new year and to find ways, as Seth Godin says, to "stretch in whatever we do to be artists, to create in ways that matter to other people." Michelle's work makes me feel like there really might be something I CAN do as a dancer, an improviser, an artist, a teacher, a human: "Like many people over the past year or so, Michelle Ellsworth has often felt disoriented, as if the world had been turned upside down. But she is probably the only person who responded to that feeling by putting herself in a wooden wheel so that she can be rotated 360 degrees around the axis of her nose." Michelle was an incredibly galvanizing presence, cutting through all the mishmash with humor and her own electric energy. We are still enjoying the catalyzing effects, and her directing changed the pathway of our performance research projects in powerful ways. Because of her directing, the piece we are creating with the laptop orchestra has become an overt expression of the precarity we are experiencing with the impacts of climate change on our lives. Stay tuned for more on that piece, "A Lecture on Climate Change."

Read the full New York Times article



Writing Jam: Beautiful Complexity of Wellbeing


On October 27, 2017, Norah Zuniga Shaw hosted a writing jam, entitled Beautiful Complexity, in relation to this year's theme of Wellbeing. The jam included a movement storm, a score for generating ideas, developing language, and getting into a shared space of embodied discovery. Below are selected reflections from participants Candace Stout and Ben McCorkle.

Candace Stout, Department of Arts Administration, Education, and Policy

Last week Norah invited me to a sandbox, along with our good colleagues Peter Chan, Ben McCorkle, and Rick Livingston. I envisioned a think-tank affair: four of us seated around a table, Norah flipping a tablet, noting concept gems, diagrams, and ah-ha moments. Things, however, weren't that way, and at the outset, I was uneasy. Maybe it was the Starbucks venti espresso. Maybe it was the walk through the ACCAD supply closet culminating in a rubberized room, or the almost audible monastic soundtrack infusing that space. Whatever that initial instinct was [over-caffeination or sedimented institutional expectations], it dissipated in finding, in Toni Morrison's words, "the friends of my mind." Among the piles of yellow paper scattered on the floor, the incidental flip charts at the edge of the room, and in the layers of journal articles that Norah placed on tables, I found compelling relevance. There were key words, incisive phrases, and insistent commentaries, expressions resonating with and informing my own humanizing epistemology and practice and, importantly, that of my grad students in the most consequential ways. Humanizing research works toward connection and disruption; it is relational, dialogic, consciousness-raising for self and others. It is a mind-set, strategies animating and probing performance and experience, activating sympathetic and empathic awareness. Graduate researchers in my writing seminar use the virtual to examine, understand, and impact the material, the real. They are committed to the work of meditation for well-being, healing, and coping with the human condition; understanding the nature and import of embodied knowing; using arts performance as connection in public spaces; and using narrative ways of knowing for the researcher-self and those with whom meaning is created. Multimodality in knowing and relating is primary in what they do. Research toward connection, disruption, and resolve. Thank you, Rick, Peter, and Ben, for this collaboration. Thank you, Norah, for sharing this inspiring box of sand.

Ben McCorkle, Department of English

I’d like to play with a scaled-down version of a much larger cloud of ideas that I have percolating in my head that deals with generating creative ideas by exploring connections between binaries, closed systems, things that seem irreparably divisive, unconnected, or incapable of change.

The jam activity today was designed with a particular purpose in mind: to generate ideas that will guide the HT collective's thinking about the theme of well-being and might eventually help give shape to some future artifact or text, such as a journal article, blog post, etc. After debriefing, we first engaged in a 20-minute period of "graze, gather, raise." Afterward, we shifted to the atomizing phase, exploding a concept out into multiple areas, then shared our results with one another, and then left to produce things like this piece I'm writing now.

Yes, I know, we were all there—your recollection is probably a lot more granular than this account is. My point, though, is that everyone talked about the *process* of this activity—we acknowledged it *as its own thing* rather than the thing that leads to the thing (i.e., that piece of writing, etc. that we might generate down the line). Process and product—this is how we typically butcher the meat. But in this case, the process itself can be seen as a type of product, a thing in and of itself, or at least the anticipatory echo of products-to-be. Here, I find myself contemplating what that means in terms of the HT project and how it relates to this theme of well-being. This process-as-product, which engaged the entire sensorium, our sense of proprioception, our sense of care as we moved through the space and manipulated its contents, opens up a space of possibility as a potential product that addresses our idea of well-being.

For example, I can imagine this activity as an immersive VR program designed to help users generate their own creative ideas that allows them to map, move, (re)place text, images, etc., and especially allow for positive collaborative interactivity. Or maybe some sort of meditation training app that, by moving words rapidly, playfully, and constantly through a virtual or augmented space, creates that mantra-like phenomenon of semantic satiation that often accompanies transcendent states (“care, care, care, care, care, …”). Or maybe an application that would help with conflict mediation in some way by creating a dynamically manipulable, shared virtual space where users would work with material in a way that would bring about resolution through cooperative play (okay, this one is a little half-baked, but I think there might be something there). In these cases, the underlying idea is the same: focusing on today’s process itself as the wireframe around which we build possible products aimed at designing technological solutions to the problems impeding our well-being…

Jennifer Monson: iLAND Lecture and Dawn Walk


As part of the ongoing Humane Technologies investigation at ACCAD at Ohio State, guest artist Jennifer Monson presented a lecture the evening of April 11th, 2017. Monson also led an experiential "dawn walk" the same morning on the OSU Oval for thirty interdisciplinary participants. 


Monson’s attention to environmental phenomena reflects Humane Technologies' greater mission of sustainability. Her work is also deeply embedded in interdisciplinarity, as her research “upholds a fundamental commitment to environmental sustainability as it relates to art and the urban context and cultivates cross-disciplinary research among the arts, environmental science, urban design, and other related fields.” Monson’s studies also look to reimagine humans' relationship to the environment and the places they inhabit.

Photo by Valerie Oliveiro


Jennifer Monson is a choreographer, performer, and teacher. Since 1983, she has explored strategies in choreography, improvisation, and collaboration in experimental dance. Through multiyear creation processes, her works have investigated animal navigation and migration (BIRD BRAIN, 2000-2005), human impact on natural sites (iMAP/Ridgewood Reservoir, 2007), and communities in east-central Illinois dependent on the aquifer (Mahomet Aquifer Project, 2008-10). In 2004, Monson incorporated under the name iLAND (Interdisciplinary Laboratory for Art, Nature, and Dance), which explores choreographic, improvisational, and collaborative strategies in experimental dance. Monson is currently a professor of dance at the University of Illinois at Urbana-Champaign and Marsh Professor at Large at the University of Vermont. Her current work-in-development is in tow, which investigates the nature of collaboration and experimentation across geographies and disciplines.


Rosalie Yu: Humane Photogrammetry

Rosalie Yu's work engages in the creation of meaningful social connections through 3D scanning and photogrammetry. In her artist talk, Yu spoke about projects such as Embrace in Progress and her research into creating lasting artifacts from fleeting moments of intimacy. The use of this technology and its compassionate creation methods is integral to the Humane Technologies project.

Yu came to ACCAD as a visiting artist and scholar to conduct a hands-on workshop on depth photography and photogrammetry and on how capturing the depth axis can further unfold the real world and create new perspectives. She posed the following questions at the beginning of her talk: How do machines capture emotion and time? How can an artist capture intimacy? In what ways can we represent organic human qualities in digital mediums? Yu shared her past research, Embrace in Progress as well as Skin Deep, which investigates these questions.

In the workshop, Yu demonstrated how to use the 3D scanning tool Skanect to create models, using an Xbox 360 Kinect as the scanning sensor. Scanning reconstructs a point cloud of the object, creates a mesh to surround it, and applies textures, similar to other 3D scanning software. The act of scanning is a physical task, since the person needs to move slowly around the body while keeping the sensor horizontal and moving it up and down in space to capture the entire body. The resulting scans were uploaded to the website Sketchfab, and Yu suggested extra resources for future endeavors in this work, such as 8i, MeshLab, Meshmixer, Netfabb, and itSeez3D.


Rosalie Yu is a creative technologist from the Brown Institute for Media Innovation at the Columbia Graduate School of Journalism. She works with emerging photo technologies (depth photography, photogrammetry) and 3D technology to capture and transfigure everyday experiences. Ms. Yu's visit was sponsored by the Humane Technologies Project of the Humanities and Arts Discovery Themes at The Ohio State University, Advanced Computing Center for the Arts & Design (ACCAD).

Liveable Futures: A collaboration for Humane Technologies Pop-Up


Humane technologies do no harm; they are creatively open-ended, socially connected, and access the full multi-sensory capacities of human intelligence. Humane tech creates compassion and well-being, embraces complexity, enhances collaboration, and is radically inclusive. With these humane working assumptions, the 2017 Humane Technologies Pop-Up focuses on livability in the 21st century.


From March 6-10, 2017, ACCAD faculty, staff, GAs, and many of our classes worked on creative projects in the style of a hack-a-thon or charrette. The purpose of this week was to create focused time outside our busy lives for creative, collaborative action. Students from the environmental humanities and human rights research groups joined us, as well as alumni guests taking time out from their work at Google, Adobe, and their own design firms. All of our working spaces (the open collaboration rooms, SIM Lab, Motion Lab, conference room...) were used in sharing and documenting the prototypes, artworks, and advancements made together.

Virtual Devising and Acting for Developing Experience, Story and Social Interaction Simulation.


This ongoing project investigates the application of immersive theatre and improvisation-based devising methods in the development of room-scale virtual reality experiences.

The project allows a participant to put on a Vive head-mounted display and interact in real time with a virtual environment and a virtual avatar performed by a live actor. Each environment is associated with a rough story idea. The participant can improvise interactions and dialogue with the live actor. Some variations of this setup also introduce an additional character pre-recorded/captured by the same or another actor. Most environments include physical props that match the locations of some virtual objects, creating a possibility of haptic feedback. With attached optical markers, some of the props are also physically manipulable. Through this work we seek to gain a better understanding of how to develop innovative VR experiences that involve co-presence and cooperation among multiple participants and haptics based on real objects, with foreseeable applications in the arts as well as education, various types of training, and multi-player simulation.

The technical setup takes place inside a 40x40’ volume with a 20x20’ trackable area and a projection screen for the audience and the actor. Physical furniture and props provide haptic feedback for the participant. Besides the furniture and the screen, spike-tape marks on the floor guide the actor. We combine optical tracking of the live actor and physical props with tracking of the HTC Vive headset and controllers via the lighthouses. Vicon Blade, in combination with Unity 3D or MotionBuilder, is used for prototyping and developing the experience. Immersive sound is optionally used, allowing the participant to hear the actor's voice through headphones as the actor speaks through a wireless microphone.
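Combining two tracking systems implies bringing their coordinate frames into registration so that a prop tracked by one system lands in the right place for the other. As a hypothetical, heavily simplified Python sketch (assuming the two frames are already rotationally aligned, so only a translation offset needs estimating from markers visible to both systems; the function names are invented for illustration):

```python
def estimate_offset(points_a, points_b):
    """Estimate the translation taking frame-A coordinates to frame B,
    averaged over corresponding marker positions given as (x, y, z) tuples."""
    n = len(points_a)
    return tuple(
        sum(b[i] - a[i] for a, b in zip(points_a, points_b)) / n
        for i in range(3)
    )

def to_frame_b(point_a, offset):
    """Re-express a single frame-A point in frame B using the offset."""
    return tuple(p + o for p, o in zip(point_a, offset))
```

Real calibration between an optical mocap system and lighthouse tracking would also solve for rotation (e.g., a rigid-body fit over many samples), but the averaging step above conveys the basic idea of reconciling the two spaces.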

Noa Zuk and Ohad Fishof Residency



Ohad Fishof and Noa Zuk, interdisciplinary artists based in Tel Aviv, joined the Humane Technologies team of collaborators at OSU in arts-driven research investigating 21st-century life and livability. Zuk & Fishof have collaborated for over a decade and work in a diverse range of fields, including dance, sound performance, video, and installation. In February 2017, they participated in group discussions, and Fishof gave an artist's talk.

Andean and Amazonian concepts of livability and wellbeing


By Michelle Wibbelsman

Humane Technologies Discovery Theme

In Spring of 2017 I was invited to join the Humane Technologies Discovery Theme as a research fellow from the Humanities. My area of expertise lies with Latin American indigenous cultures, epistemologies, and performance practices, particularly those of the Andes and Amazonia, which have unique perspectives on technology, humanity, livability and wellbeing.

In Ecuadorian Quichua, sumac kawsay (also spelled sumak kawsay or sumaq causai) captures the essence of meaningful, beautiful, proper living and connotes a sense of “livability.” In indigenous worldview, making things knowledgeably and beautifully is conducive to meaningful and proper living, as is personal and collective reflection by way of oral traditions, participatory practices and indigenous art. Another key aspect of sumac kawsay is the practice of sustained dialogue, mutual nurturing, and exchange based on relations of respect and cariño (affect).


Aside from the ethics of good, proper living, people often frame meaningful, beautiful living in terms of an aesthetic defined by a zigzagging, back-and-forth movement (quingushpa), evident in weaving, music, poetic linguistic patterns, dancing, pottery designs…and pretty much everything else as a recurring and reiterating pattern. At the height of its expression, this aesthetic reflects mastery of a sense of playfulness between symmetry and asymmetry. In contrast, being too direct or going straight to the point is considered yanga puringashpa (following an ugly and aimless path), be it in artistic expression or cultural habits such as speaking too directly, not using enough suffixes, ending a conversation too abruptly, conducting a financial transaction without engaging in pleasantries, or doing or making things without embellishment, knowledge and sensitivity…


When people state that they live “always talking, always conversing with one another” in this quingushpa sort of way as part of sumac kawsay, more than signaling a cultural practice, they are underscoring a rhythm or style of going about life. Moreover, they are not only referring to a human community, but to a sustained conversation with beings in other time-spaces or pachas as well. The Andean indigenous world has four pachas: the world above (hawa pacha) where all kinds of spirits and syncretic divinities exist, “this world” (kay pacha) which is the world of nature, “the fourth world” or “the other world” (chusku pacha or chayshuk pacha) where the ancestors live, and uku pacha which is the world below or more precisely the world within where people live. I outline these pachas in my book Ritual Encounters: Otavalan Modern and Mythic Community (2009) and argue that they are not theoretical or folklorized notions, but instead a fundamental part of people’s daily reality.


Obligations to other people in terms of respect, dialogue, exchange, mutual nurturing, and conviviality are similar to those with beings from other time-spaces. Animals, plants and things are often referred to as runa (literally, fully human being—the self-designator of Quichua ethnic communities). The Earth, for instance, is pachamama, Mother Earth, and is treated with the respect and sensitivity due to a mother. The animated landscape, with gendered qualities, embodies ancestors referred to as dear great grandparents. Saints as well as animals, including plague animals and insects, are treated with cariño and brought into a relation of compadres or fictive kin. Many agricultural techniques rely on dialoguing with the insects and listening to the signs of nature. The souls of the deceased are kept alive through frequent visits to the cemetery. Without food and conversation, indigenous people say, the souls would die for real (like the mestizo souls, since no one visits them)…


All of this to signal that notions of “humanity” are much more inclusive in the Andes. This in turn changes the way people relate to their environments from the hierarchical arrangement that puts people on top in the Western conception (and justifies exploitation of the environment as a resource for people) to a less hierarchical, more reciprocal system where people are on par with other beings that share their humanity.


Andean technologies work with nature rather than trying to go against it or dominate it. We see this in the impressive Inka stonework where each piece is tailor-made to work with elements in its environment.



Similarly, people say that they do not try to eradicate pests or cut out diseases, but rather to dialogue with them, understand their needs, find a compromise that allows everyone to live well—a radical redefinition of the “common good.” They do not try to force production, but rather respect a healthy pace of production, including rest periods. Value is not defined by maximizing profit and accumulation but instead guided by principles of redistribution. This marks a significant difference from Western practices in agricultural techniques, medicine and healing.


This attention to relations of mutual nurturing with the environment also points to a vision of long-term commitment with nature, and with humanity. The future is defined not in terms of immediate gain, but rather sustainability along a millenarian timeframe. It seems that time and sustainability must necessarily be a factor in defining humane technologies and livable futures. As I traveled through the Sacred Valley in southern Peru in summer of 2017, I was impressed by the fact that Inka and pre-Inka constructions and technologies endure—structures are intact, the Great Inka road continues in use, aqueducts are functional, agricultural terraces are in production. In the meantime, earthquakes have devastated colonial and modern constructions. Remnants of electric and gas-driven mechanisms, pipes, modern roads lie abandoned and in disrepair alongside technologies that are more than 500 years old and perfectly functional. Modern agriculture has turned once fertile fields into deserts due to overuse of fertilizers, herbicides and gm crops not endemic to the region. The flower industry in northern Ecuador is one such example where as one glances across the landscape one can clearly see how corporations simply move to another plot of land once they’ve exhausted the previous one. 20th and 21st century technologies do not appear to have improved on Inka and pre-Inka engineering.


The idea of a shared humanity with other living and also nonliving things opens a space for thinking about nonhuman ontologies (or perhaps now that we’ve defined human more broadly, ontologies not exclusive to people). My sense is that the recognition of and engagement with other ontologies is where the rigorous theoretical work begins to decenter people in a conceptualization of humane technologies.


I hope the synopsis above does not present an overly simplified impression of concepts and practices that carry important depth. The notion of sumac kawsay, for instance, has made it into the 2008 Ecuadorian constitution as a recognition of indigenous values and principles.  At the same time, its complexity has been reduced in the Spanish translation buen vivir (good living) and by way of this simplification the concept has been coopted by the State to reference the common good in terms of the duties of the Welfare State within a capitalist economy. This is quite different from the understanding, lived practice and context of sumac kawsay, the common good, and well-being in indigenous communities.


As we collectively turn to a renewed attention to topics of livable futures, humane technologies, and well-being in the midst of growing inequality in our societies and escalating concerns about our global environment, indigenous cultures may have something important to contribute to the discussion by way of the radical alternatives they put forward.


Digital & Physical Games

Scott Swearingen (Design); Scott Denison (Design); Ben Schroeder (CSE); J Eisenmann (CSE); Kyoung Swearingen (Design); Matt Lewis (Design); Norah Zuniga Shaw (Dance); Chris Summers (Dance); Alan Price (Design); Isla Hansen (Art); Alex Oliszewski (Theater); Oded Huberman (Dance); Sarah Lawler (Design). Demo Location: Motion Lab (room 350).


This project involves development of a framework that explores, discovers, and questions the intersection of physical and virtual presence within the context of games. While ‘play’ offers the individual an opportunity to learn about themselves and others, ‘games’ provide the necessary structure to make our choices meaningful and give weight to our capacity for empathy. Furthermore, by integrating physical and virtual presence in this framework, we can streamline our ability to abstract relationships within a given system, and hence, one another.


The outcome is a playable prototype of a two-player game built with Kinect, Processing, and Unity. Players cooperate to navigate a scrolling landscape, dodging or otherwise moving around emerging obstacles, barriers, and projectiles.
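As a rough illustration of the cooperative dodge mechanic (a hypothetical sketch, not the team's actual Processing/Unity code), the core loop can be thought of as obstacles scrolling toward the players each frame, with the run continuing only while both players avoid collisions:

```python
# Hypothetical sketch of a cooperative scrolling-dodge loop
# (not the team's actual Processing/Unity implementation).

def step(players, obstacles, scroll_speed=1.0):
    """Advance one frame: obstacles scroll toward the players (decreasing x);
    a player is hit when an obstacle overlaps their position. The cooperative
    run continues only while neither player is hit."""
    moved = [(x - scroll_speed, y) for (x, y) in obstacles]
    # Drop obstacles that have scrolled off-screen (x < 0).
    moved = [(x, y) for (x, y) in moved if x >= 0]
    hits = [p for p in players
            if any(abs(p[0] - x) < 0.5 and abs(p[1] - y) < 0.5
                   for (x, y) in moved)]
    return moved, len(hits) == 0  # (remaining obstacles, still alive?)

# Two players side by side; one obstacle heading toward the first player.
players = [(2.0, 0.0), (2.0, 2.0)]
obstacles = [(3.0, 0.0)]
obstacles, alive = step(players, obstacles)  # obstacle reaches player 1
```

In a real build, the dodge would come from Kinect-tracked body positions replacing the static player coordinates each frame.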


We created this in a week during our Humane Technologies Pop-Up intensive. There was never a time throughout the week that we were worried about having a deliverable. Keeping the mechanics simple and having a really small design footprint helped us stay agile, and made development easy to pick up and put down. Investment was also key. We wrangled faculty, students, and staff for their ideas, and bounced our own off them for hourly sanity-checks.

Method of Loci: Multi-scaled Integrated VR for Collaborative Meaning Making

Method of Loci (a mnemonic system in which items are mentally associated with specific physical locations): Alan Price (Design); Isla Hansen (Art); Scott Swearingen (Design); Norah Zuniga Shaw (Dance); Michelle Wibbelsman (Latin American Indigenous Cultures); Ben McCorkle (English). Demo Location: SIM Lab.


We set out to explore modes of interaction between users immersed in VR with a Head Mounted Display, and users with an external, third-person perspective using a multi-touch display. The design intent was to draw awareness to the differences in scale and perspective, engaging users in a process of collaboration that requires navigation and communication across the two modalities and encourages awareness of both digital and physical experience.


The current outcome is a networked multi-user VR collaboration space that encourages experimental making and play through collective creation, assembly, and recording. A mobile web app is used to upload images, sound, and video, as well as 3d models, in real time, to contribute to a growing and malleable virtual world. Inside this world, users can move, combine, and attribute physical properties to objects, videos, and sounds. Recording these movements, users can create animations, drawings, and spatial soundscapes. Objects take on meaning through the users’ intent, creating associations through composition and movement in the virtual space. The system can be used for staging games, collective sense-making, storytelling, or other purposes to be discovered.
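The "record movements to create animations" idea described above can be sketched as a simple keyframe recorder: store timestamped positions while the user moves an object, then interpolate between them on playback. This is a hypothetical structure (the actual system runs inside Unity), but it shows the essential mechanism:

```python
# Minimal sketch of recording object movements and playing them back
# as animations (hypothetical; the real system is built in Unity).
import bisect

class Recorder:
    def __init__(self):
        self.keys = []  # list of (time, position) keyframes, in time order

    def record(self, t, position):
        self.keys.append((t, position))

    def sample(self, t):
        """Linearly interpolate the recorded positions at time t."""
        times = [k[0] for k in self.keys]
        i = bisect.bisect_right(times, t)
        if i == 0:
            return self.keys[0][1]          # before the first keyframe
        if i == len(self.keys):
            return self.keys[-1][1]         # after the last keyframe
        (t0, p0), (t1, p1) = self.keys[i - 1], self.keys[i]
        a = (t - t0) / (t1 - t0)
        return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))

rec = Recorder()
rec.record(0.0, (0.0, 0.0, 0.0))
rec.record(2.0, (4.0, 0.0, 0.0))
mid = rec.sample(1.0)  # halfway between the two keyframes
```

The same pattern extends naturally to rotations, sound-event timestamps, or drawing strokes, which is how a single recording system can cover animation, drawing, and spatial soundscapes.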


Critical thinking and research in the domain of humane technology can include ongoing study of the design of interfaces; the design of modes of interaction; and the design of technology that enables us to converse freely between physical and digital constructs. Developing systems that prompt users to reflect on how we understand our engagement with systems, and how we engage with one another through a system, benefits from focusing on the attributes that support or expose a deeper dialogue about the mechanisms enabling that engagement.


Birdbot: Encouraging Full-bodied Play in VR Fantasy World

Birdbot flyover: flap your arms to drift over various compassionate landscapes as conceived and created by students in Design. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design), and students in Palazzi's Design 6400 class: Breanne Butters, Stacey Sherrick, Sarah Lawler, Zachary Winegardner, Kevin Bruggeman, Devin Ensz, Bruce Evans, Dreama Cleaver, Kien Hong. Demo Location: SIM Lab @

Birdbot balance: Rise through virtual worlds and make music with your wings as you achieve balance challenges in VR. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design). Demo Location: SIM Lab @


Get moving in VR! BirdBot grew out of an early Sandbox Collaboration in which we used the Kinect to get good full-body interaction in virtual reality (rather than just being able to move or play with things using controllers). It is also a response to one of our core research interests in this project, which is to create more physically active and stimulating virtual reality experiences.

The resulting prototype is what we call a "movement toy," and there are a few movements we targeted specifically, including balance, level changes, and gross motor action (in this case, flapping the arms). But really, any desired movement could become a mechanic of this "toy."
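One way to turn a gross motor action like arm flapping into a game mechanic is to watch the tracked wrist height over time and count up-then-down strokes. The sketch below is a hypothetical illustration of that idea (the actual project reads Kinect joint data inside Unity); the threshold filters out small sensor jitter:

```python
# Hypothetical sketch of detecting arm "flaps" from a stream of tracked
# wrist heights (the real project reads Kinect joints inside Unity).

def count_flaps(wrist_heights, threshold=0.1):
    """Count completed up-then-down strokes in a sequence of wrist
    heights. Changes smaller than `threshold` are ignored as jitter."""
    flaps, direction, last = 0, 0, wrist_heights[0]
    for h in wrist_heights[1:]:
        delta = h - last
        if abs(delta) < threshold:
            continue  # too small to count as real movement
        new_dir = 1 if delta > 0 else -1
        if direction == 1 and new_dir == -1:
            flaps += 1  # the arm rose and has now started falling
        direction, last = new_dir, h

    return flaps

# Two full flaps (up, down, up, down), with a little jitter in between.
heights = [0.0, 0.5, 0.52, 0.1, 0.6, 0.05]
flaps = count_flaps(heights)
```

A per-frame flap count (or rate) can then drive a mechanic directly, such as the lift that makes the participant drift upward.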


We created a series of virtual environments for the Oculus Rift using a Kinect as our sensor. One of our creative interests was to see what happens when we start with a movement idea and let the virtual world grow from there. A movement creates a story and the story creates the world. So it was a very intuitive, emergent process and evolved through many iterations that existed in the collaborative space between our minds/bodies. We had some fantastic brainstorming sessions with visual artist Isla Hansen about making a physical installation to experience while in VR and will continue that going forward. The nature imagery and heron came from our discussions about de-centering the human and making non-mirrored interfaces. When you put the headset on and enter the world of Birdbot you are in a peaceful room with grids on the walls but it is filled with trees and your shadow is a heron. If you flap your arms, a hidden world is revealed and as you balance on one foot (a challenge in VR) you rise up into a bright pink tunnel where you can make music with light-up chimes. Finally you enter a flyover world where you soar over a collage of compassionate landscapes that were created by students in our Teaching Clusters, including a tapestry made up of family photographs compiled from our research team.


As always in the iterative design process, some of the things we tried out but didn’t use provided us with fun learning experiences and made the work stronger. The challenge of computer recognition of particular motions is a long-standing issue, but the Kinect has made things easier, and it is fantastic to see people moving and laughing and feeling good in VR.

Further reflection by Alice Grishchenko at

Bird Thoughts

Alice Grishchenko, MFA student in design and a key collaborator on the Humane Technologies team writes:

I loved making Birdbot. It is a virtual world that encourages certain motions; it isn't really a VR game, and sometimes we call it a toy. Creating it was a non-linear process; Norah calls that emergent. We started from a place of abstract interactions, prototypes, and a jumble of 3D models and textures pulled from many different sources, and somehow we ended with a surreal three-part experience, of which the most visible linking themes are birds and shadows. Personally, I started with some questions like:

  • What is achievable with this cross section of technology?
  • Which types of movement are engaging? 
  • Which environmental designs encourage engaging movements and provide discernible and satisfying feedback to the user? 

Each of these questions is actually a cascade of many other questions that should ultimately be answered by players interacting with the system, but before having those answers you have to create a system by anticipating them. Hypothetical answers are tricky, so I started with some low-investment prototypes.

The goal was to get the player moving in a fun way by creating an interactive environment. I tested many interactions with physics simulations, flying, floating, and rhythmic movement. I would run these by Norah and we'd talk about the intention compared to the feeling of the environment, and repeat this playtesting process with other collaborators to see what perspectives they could bring. Through this process we created three different interactive environments, then connected them with visual transitions and common themes. Slowly we started to solidify which features we wanted to develop in each scene. The three stages became surreal, calm spaces with strange gravitational properties and shadowy avatars that represent otherness. The levels remain separate in the mechanics of their interactions: balancing, reaching out for virtual contact, and flapping the arms in a way that imitates flight.

We connected the three levels in a way that creates the experience of moving from an enclosed space, upwards to a vast open space and then forwards into a tunnel that leads back to the beginning of the experience. The content of the levels changes dramatically once the player ascends to the open space. This is because we enlisted the help of Maria Palazzi's class to create compassionate landscapes for the player to soar over. The class's work is combined to generate a procedural world that changes over time. Collaborating with an entire class of people for a week was a really unique experience for me, and Maria's class delivered some great insights and beautiful assets to the work that made it much richer. I worked with Skylar Wurster to develop the spherical procedural landscapes and we used some custom shaders to fade between ground and (sideways) sky textures. 
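Two pieces of the spherical procedural landscape can be sketched in a few lines: placing content on a sphere by converting latitude/longitude to xyz, and a latitude-based blend factor of the kind a fade shader between ground and sky textures might use. This is a hypothetical illustration of the approach, not the actual Unity shader code Skylar and I wrote:

```python
# Hypothetical sketch of the spherical-landscape idea: positioning
# content on a sphere, and a latitude-based ground/sky texture blend
# (the real version lives in Unity with custom shaders).
import math

def on_sphere(lat_deg, lon_deg, radius=1.0):
    """Convert latitude/longitude (degrees) to an xyz point on a sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.sin(lat),
            radius * math.cos(lat) * math.sin(lon))

def ground_sky_blend(lat_deg, horizon_deg=30.0):
    """Blend factor for texturing: 0.0 = pure ground texture,
    1.0 = pure (sideways) sky texture, fading linearly across
    a horizon band `horizon_deg` wide."""
    t = lat_deg / horizon_deg
    return min(1.0, max(0.0, t))  # clamp to [0, 1]

pole = on_sphere(90.0, 0.0)          # "up" on the sphere
blend_low = ground_sky_blend(0.0)    # at the equator: all ground
blend_high = ground_sky_blend(45.0)  # above the horizon band: all sky
```

In a shader, the clamped linear blend would typically feed a `lerp` between the two texture samples per fragment.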

A compassionate landscape/interpretation of how birds may view an urban environment

All these changes between the three levels and all their components provide players with many unusual experiences and sensations one after another. I think the piece really leads the player to think about identity and the journey of connection they just experienced.

Humane Object Agency: Part Two, Implementation

Collaborating faculty Matthew Lewis writes: In a previous blog post I described my introduction to the Humane Technologies project and my intentions for the pop-up week: exploring the use of interactive virtual reality to simulate an Internet of Things (IoT) filled space, with participants embodying the roles of the communicating smart objects inhabiting the environment. Leading up to the big week, I met with several of the participating faculty who gave me invaluable suggestions for additional readings, relevant pop culture references, and other perspectives on possible "motivations" for the IoT devices to be simulated in the project.

During the pop-up week, Professor Michelle Wibbelsman and I met with Professor Hannah Kosstrin's dance class and explained the basic idea of the project. Michelle and I had come up with a few exercises/scores with different emphases for the students to try out. For example, we initially split the students into two groups and requested that one group take a dystopian perspective of IoT devices, while the other group imagine a more utopian viewpoint. While the devices in the latter group focused on keeping the apartment inhabitant happy and comfortable, the former group embodied more of an overbearing nanny/salesperson space. For the initial round, we had requested that the performers communicate primarily via motion. There was a strong tendency, however, to want to speak primarily to the person in VR and to communicate in general via anthropocentric means. For the next round, we requested that communication happen only through movement, and primarily between the IoT devices, rather than focusing on communicating with the apartment's inhabitant. Additionally, we asked some performers to take on the roles of aspects of the communications infrastructure: one dancer was "Wi-Fi" and others were "messages" traveling through the network between the devices.

There was very little time for planning between each performance/simulation, so most of the resulting systems and processes were improvised during each performance. As a result, very little successful motion-based communication actually took place (though many attempts were made). However, these sorts of initial no-technology experiments in the classroom gave us a great deal of information and discussion points for our technology-based experiences a couple of days later.

Several people were involved in implementing the quickly assembled technological system. I initially specified the desired system features and set up the physical system components. Skylar Wurster (Computer Science undergrad) and Dr. J Eisenmann (ACCAD alumnus / Adobe research) implemented the interaction and control scripts in the Unity realtime 3D environment. Kien Hoang (ACCAD Design grad) assembled a 3D virtual apartment for the VR environment.

Professor Kosstrin participated in the role of the inhabitant of the VR apartment. At Professor Wibbelsman's suggestion, we avoided naming this character so as not to bias our notions of their role too strongly (e.g., "owner," "user," "person," "human," "human object"). We ended up frequently making a stick-figure gesture mid-sentence to refer to them during our discussions. The intent was that as the physical performers communicated outside of VR, there would be some indication inside VR that the virtual smart objects were talking to one another. A few visual options were implemented in the system: the objects could move (e.g., briefly "hopping" a small amount), they could glow, or they could transmit spheres between one another, like throwing a ball. Given the motion-based communications we were attempting with the dancers, I chose to use primarily the movement method to show the VR appliances communicating. This was implemented with a slight delay: if the smart chair was going to send a message to the smart TV, first the chair would move, then the TV would move, as if in response. I imagined this being perceived like someone waving or signaling, followed by the message recipient responding by waving back.
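The delayed movement scheme amounts to scheduling two animation events per message: a "hop" for the sender now, and a "hop" for the receiver a moment later. A hypothetical sketch (the real version was scripted in Unity; the names and delay value are illustrative):

```python
# Hypothetical sketch of the delayed "hop" message animation: the
# sending appliance moves first, and the receiver moves after a short
# delay, as if waving back (the real version was scripted in Unity).

def schedule_message(sender, receiver, now, delay=0.5):
    """Return (time, object, action) animation events for one message."""
    return [(now, sender, "hop"),
            (now + delay, receiver, "hop")]

# One message from the smart chair to the smart TV at t = 10 s.
events = schedule_message("smart_chair", "smart_tv", now=10.0)
events.sort(key=lambda e: e[0])  # play back in time order
```

Swapping `"hop"` for `"glow"` or a thrown-sphere animation would reuse the same scheduling, which is why trying the other visual modes later would have been cheap.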

We investigated two methods for connecting communications in the physical and virtual worlds. In our first trials, we simply relied on an indirect puppetry approach: a student at a workstation (Skylar) watched the dancers, and when one started communicating with another, he would press an appropriate keyboard button to trigger the communication animation in the virtual world. For one of the later runs, Ben Schroeder (ACCAD alumnus / Google research), Jonathan Welch (ACCAD Design grad), and Isla Hansen (ACCAD Art faculty) all contributed solutions to enable the dancers to touch a wire to trigger a communication. While this had the advantage of giving the performers direct control of their virtual counterparts, the downside was that it placed limitations on their movement possibilities. Regardless, inside VR, the movement of the appliances did not read for our VR participant as communication: "Why is the refrigerator hopping?" Time during the brief session didn't allow for experimentation with the other communication animation approaches, but I suspect some of the other modes might have fared better.

Professor Wibbelsman led the group in discussion, and we quickly discovered that our goal of eliciting new ideas about future possibilities for these emerging technologies seemed to be a success: everyone had many strong opinions about what might emerge and big questions about what they might be more or less comfortable with. One further practical consideration that emerged was the need for dancers to use a separate "narration" voice to communicate with the person in VR, to tell them things they needed to pretend were happening in VR as the improvisation ran its course (e.g., a refrigerator door opening and giving them access to ice cream). Despite the pop-up providing an invaluable week of time for everyone to focus on prototyping projects such as these, one of the more surprising challenges was having access to people's time. Many of the details of the project were not the result of well-considered design decisions but rather reflected what the person who popped in to work for an hour or two could accomplish before jumping back out to a different project.

Humane Object Agency: Part One

Collaborating faculty member Matthew Lewis writes:  I arrived at the humane technologies project and group later than most of the participants. I was invited to participate in the pop-up week which would focus on virtual reality this semester. I've been curious about using VR technologies for interface prototyping, and this seemed like a great opportunity. As with all pop-up participants, I was encouraged to consider either joining existing project groups, or to bring my own ideas to the table.

Not having been part of the earlier discussions, my unbiased ideas about "humane technologies" primarily involved evaluating people's interactions with the technology emerging around them in positive and negative ways. In particular, I've been reading newspaper articles almost daily about the "internet of things" (IoT). Usually these discussions center on the trade-off between convenience and privacy: e.g., your internet-connected devices are controllable via your smartphone, but they also report your engagement to advertisers for marketing purposes.

Discussions of the Internet of Things tend to predict that smart objects will be increasingly communicating in complex webs of systems which may or may not have our best interests in mind. In the same vein as it is often said that, "you are not the consumer but rather the product" for companies like Facebook, networked smart objects like your TV might be "free" to use as well, in exchange for you allowing an infrared camera to monitor your apartment and track your eyes as you watch TV.

With this content in mind at my first humane tech meeting, I heard Professor Michelle Wibbelsman (Spanish & Portuguese) mention two things that resonated for me: indigenous peoples' beliefs about objects having agency, and also "Object Oriented Ontologies" (OOO). I was curious about the idea that some cultures may have already thought a great deal about how to live surrounded with objects that have agency. Additionally, "Object Oriented Ontology" is a relatively recent perspective on metaphysics that’s attracted some attention from computer scientists working at the intersection of philosophy and human computer interaction. OOO involves a de-centering of humans that considers physical objects, ideas, their relationships, and agencies all as equally valid objects of philosophical consideration.

At this same initial meeting, Professor Hannah Kosstrin (Dance) mentioned that her motion analysis class's graduate students would be available to participate in projects during the pop-up week. Years ago I was fascinated by a presentation I’d seen on "service prototyping" which used actors as participants for interactive system design. I proposed that Hannah's students could embody the roles of communicating IoT devices, exploring the possibility space of system agency. Many IoT species will converse primarily with other smart objects and networked systems, rather than interacting directly with people in their space. What might such devices be "talking" about? What could their awareness and motivations encompass in different future scenarios?  

Additionally, I envisioned another participant immersed in a VR apartment environment, experiencing representations of these devices communicating around them. For example, there might be an indication that a smart TV, smart refrigerator, and smart couch were all observing aspects of their environment and "doing their job," whatever that might be. What would it be like to live in such a space?

I suspected that embodying this simulation/performance might lead to thought-provoking discussion, helping us to contemplate aspects of such emerging technologies and trends in ways we might not otherwise have considered through mere thought experiments. I also hoped we might gain insight into the humane-technology aspects of IoT, beyond the current discussions of privacy vs. convenience. Last, I hoped to gain experience with the usefulness of VR for interaction design prototyping. In a followup post, I'll discuss the implementation and outcomes of the pop-up.

Popping In, Popping Out: Reflections on the Humane Technologies Pop-Up Week

Collaborating Faculty member Ben McCorkle writes: From the outset of the Humane Technologies: Livable Futures Pop-Up Collaboration, I wondered what my role would be in it. As a specialist in rhetoric whose interest lies in exploring how technologies have shaped our communication practices throughout history, I’ve been trained to explore these questions from a position that’s somewhat outside and above the immediate action. To an extent, I set out to maintain this stance, intending to watch from the sidelines as an impressive group of technologists, designers, and artists came together in the spirit of play, exploration, and creativity to question how contemporary technologies can be utilized to promote a more compassionate, socially engaged future. But as the week unfolded, I found myself caught up in the gravitational pull, eventually diving in and joining the fray. 

As part of the reporting team (with Peter Chan and Michelle Wibbelsman), my goal was to observe and document the processes of collaboration as they unfolded throughout the ACCAD space: the brainstorming, the concept building, the rapid prototyping, the problem solving, the play-testing, the refining. Initially, I found myself focused on the technologies themselves, the instruments that facilitated these processes. The space was populated by a whole heap of impressive gee-whiz tech, from VR rigs and 3-D printers to interactive touch displays and projectors. Surrounded by this technological infrastructure, it’s tempting (and perhaps even understandable) to forget about the actants, the human agents, that use that infrastructure. I mentally checked myself and popped out of the activity to observe from a different perspective.

I found myself watching how bodies circulated during the week: frenetic, chaotic, playful, eventually leading to patterns… leading to purpose. The open layout of the ACCAD studios facilitated this movement, where people working diligently on one project would be pulled into another for some quick feedback, then to another to help with a demo. Classes would move in and out of the space, students contributing to the tasks at hand. 

I popped back in. I played with data visualizations on a large touch screen, contributed family photos to help Maria build a patchwork landscape for the Fly Like a Bird heron flight simulator, offered feedback as Scott and his team developed his Digital + Physical Games project. I also worked with Alan as he developed his Method of Loci VR and multi-touch display environment (for this project, I contributed the idea of the classical/medieval rhetorical technique called the memory palace, a method of remembering parts of an oration by mentally placing key points in an imaginary building). This project explores the possibilities of externalizing our individual memories and experiences in a shared, interactive virtual space. I think of this project as a microcosm of what the entire week was about: connecting, creating spaces for empathy and understanding.

I popped back out. As Peter, Michelle, and I talked about what we were observing as ideas took shape, as process yielded product, we leaned on metaphors, symbols, and imagery that reflected this dynamic: the double helix structure of DNA, Chinese ideographs depicting “tree” and “forest,” pictures of a copse of trees, an individual tree with serpentine root structure, imagery of tornados, and Robert Smithson’s earthwork sculpture Spiral Jetty, among others.

At the time, I wrote in response to Peter as he shared a collage of these images:

I’m struck by the resemblances evoked by these different image groupings: curvilinear, evoking a sense of motion/process, "natural." In the sense of conveying a visual identity for whatever it is that humane technologies want to become (despite what we might *want* them to become), these images collectively suggest a common ethos or spirit.

Additionally, these visual metaphors all work together if we consider how systems and ecologies operate, and, more to the point, how we as subjects observe them in operation: from a certain distance, perhaps they appear orderly and unified, but zoom in and you might see frenetic noise or even chaos; zoom in even further, and you might realize there's actually an elegance (perhaps even design?) to that chaos... 

Popping back in. I’ve come to the realization that all of this spiraling imagery is not just a metaphor, but a way of mapping the week-long activity of the Pop-Up. In other words, this movement of bodies not only reflects on a symbolic level how ideas emerge, change, and lead to creation, it is *literally* a key mechanism by which they are formed. Hands type and push buttons to change code, arms wave in the midst of gameplay, whole bodies undulate in the service of performing a dance routine. Witnessing firsthand (and even participating in) this whirlwind-in-a-snowglobe, I realize that this dynamic is at play when we scale up to consider culture at large. The problem is, we don’t always recognize that; perhaps the solution lies in deliberately attempting to bring about those moments of recognition more clearly and more often.


Contemplating the future.

How often do we contemplate the future? In this sense, I'm not referring to our shopping list, or our student loans, getting promoted at work, starting a family, finding a soulmate, or preparing for retirement. I exclude these types of future-oriented concerns because most of us feel that we have some modicum of control over their outcomes. We can see a path by which our individual agency can make an impact. So, when I ask the question about contemplating the future, I have in mind the things that are progressing in labs and research institutions all over the world, things like autonomous vehicles, robotics, bioengineering, the Internet of Things, human augmentation, and artificial intelligence. I don't believe we think much about these futures, but in many ways, they are potentially the most transformative and could affect our lives as much as the futures with which we do contend. As a professor of design, my objective is to prepare students to be conscientious problem solvers and creators of the physical and informational environments that surround us. This week my students in Collaborative Studio 4650 provided a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. I will unpack some terms. The term guerrilla future stems from what Stuart Candy (2010) calls guerrilla interventions.

augmented reality glasses


“Its aim as a practice is to introduce… possibilities to publics that otherwise may not be exposed to them, or that, while perhaps aware of the possibilities in question, are unable or unwilling to give them proper consideration. It is about enabling people to become aware of and to examine their assumptions about futures -- possible, probable or preferable -- by rendering one or more potentials concrete in the present, whether or not they have asked for it.” [Emphasis added].

Our goal was to present a believable future—in 2024—when ubiquitous augmented reality (AR) glasses are part of our mundane everyday. We made the presentation in Sullivant Hall's Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students ranging from the humanities to business.

In contrast to the gallery show or the academic or corporate workshop, which attract voluntary participants, guerrilla futures are uninvited. The objective is to bring awareness of future thinking to a wider audience and perhaps to engage them in social debate. The second term to unpack is design fiction, a research methodology for designers whereby we create believable artifacts from the future, as well as other media, to craft fictional futures. If these props are experiential, it is possible that the audience could become cooperative participants and agents in the intervention.

the artificial intelligence


The presentation lasted about 30 minutes, after which we pulled out rolls of white paper and markers and divided up into groups for a more detailed deconstruction of what transpired. The discussions were lively and thought-provoking. Though it is too early to have completed an exhaustive analysis of the event, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver, Gordan Savičić, and Danja Vasiliev wrote in The Critical Engineering Manifesto (2011):

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user's dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in discussions, we quickly identified the ways in which our current technological devices engineer us. At the same time, we expressed, to varying degrees, our powerlessness to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent or adaptive in a positive way. En masse, they are neither. Kurzweil stated:

“We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.”

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. One discussion group raised an idea we labeled effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it is already too late, either because the technology is already ubiquitous or because our habits and procedures solidly support that behavior.

the design team

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful.

This is necessary research for designers as well as the rest of us. The things that are going on in laboratories somewhere will, in gradual steps, affect us. In this context, designers must be cognizant of these more elusive futures. They may wish to leverage them, but they must also be wary of them. Why wary? Because we make things. Design affects culture and culture, in turn, affects what we design. It has always been this way. It is always the designer's responsibility to be certain that there are no errors in the software or the material, to ask “what could go wrong?” But how often do we ask, what are the ramifications of our design, should it go right? What are the entailments of scalability and ubiquity and the systems, often complex and gnarled, that result from successful designs? Because successful creation has the tendency to become ubiquitous, to influence behavior, and to transform society, it becomes a design responsibility, but also one that pertains to virtually every other discipline. We need to pay attention.

Hopefully, design fiction and guerrilla futures can become a more widespread methodology to provoke discussion and debate and make us more active participants in what our future should be. Special thanks to the Humane Technologies Collaboration for allowing us to create this future provocation.

E. Scott Denison | Assistant Professor | Department of Design | The Ohio State University

Better Futures

With funding from the OSU Discovery Themes, ACCAD will be the site of a Humane Technologies Pop-Up Collaboration the week before spring break. With humane technologies as our foundation, we will be focusing on the theme of Livable Futures.

Join us in creating artworks that fill ACCAD, the campus, Columbus, and beyond with messages of compassion, social justice, livability for diverse human and non-human life, and multi-sensory technologies for better futures! 

Humane Technology Pop-Up Collaboration: Livable Futures

March 6-10
All of our faculty, staff, and GAs and many of our classes will be working on creative projects throughout the week. Like a hack-a-thon or a charrette, the purpose of this week is to create a focused time outside our busy lives for creative collaborative action. Students from the environmental humanities and human rights research groups will join us, as well as alumni guests who are taking time out from their work at Google, Adobe, and their own design firms; they will enjoy connecting with you all.

Throughout the week we will be sharing and documenting the prototypes and artworks and advancements made. All of our working spaces (the open collaboration rooms, SIM lab, Motion Lab, conference room...) will be busy and there will be more people around than usual. 

Rosalie Yu's visit kicked off the events, and we have Ohad Fishof and Noa Zuk in residence next week as a warm-up for our Pop-Up collaboration March 6-10; we will have more visitors later in the semester.

Here's where you come in.

Let’s create visions of better futures and take creative action together. 
What creative humane technology interventions can you imagine and what solution stories can you create for better, more livable futures? All mediums and methods welcome.

We want to see your videos, poems, essays, performances, animations, posters, drawings, games, prototypes, sculptures, virtual environments and beyond. 

Pitch us your ideas throughout the week. Strong ideas that best capture or comment on livable futures will be supported with funds for supplies, exhibition online and in future gallery events, input, ideas, and troubleshooting support.

And if you'd like to add your efforts to our projects, we will be demo-ing and discussing them Monday morning 3/6, 9:30-Noon, and you can stop by any time and see what we're working on.

More info:

Humane Tech Pop-Up: Livable Futures
March 6-10, 2017
Humane technologies do no harm; they are creatively open-ended and socially connected, and they access the full multi-sensory capacities of human intelligence. Humane tech creates compassion and well-being, embraces complexity, enhances collaboration, and is radically inclusive. With these humane working assumptions, we will focus this Pop-Up on livability in the 21st century.

Posthuman not Anti-Human
In her book on posthumanism, scholar N. Katherine Hayles critiques the fact that many visions of the future “point to the anti-human and the apocalyptic” and calls us to action, showing that “we can craft other visions that will be conducive to the long-range survival of humans and of the other life-forms, biological and artificial, with whom we share the planet and ourselves.”

Solution Stories
Activist Frances Moore Lappé calls us to create solution stories: “Facing unprecedented challenges, we can choose to remain open to possibility and creativity—not mired in despair. Surely, the latter is a luxury that none can afford. We can create and enthusiastically share a solutions story today, every day. It is a revolutionary act.”

Monday-Thursday Collaborative working sessions 9am-6pm

  • Monday 9:30-Noon Demos to kick off the projects and afternoon working sessions
  • Wednesday 12:45 in the Motion Lab: object-oriented ontologies embodied exploration
  • Thursday 9:35am in 320 Design Futures provocation (design and humanities students, others welcome)
  • Friday all day demo-ing and documenting results

Email zuniga-shaw.1 for more info

Sandbox Sessions Summary

Professor Norah Zuniga-Shaw facilitated a series of Sandbox sessions during the Autumn 2016 semester for the Humane Technologies team to get started collaborating and asking research questions together. The expectations, experiences, and reflections stimulated by these Sandbox sessions are presented in the blog posts for each Sandbox. These open-ended collaborative sessions resulted in the key research frameworks and humane technology definitions that we will take into the Pop-Up session March 6-10, 2017.