Normal for Norman?

Introduction

    “Normal for Norman?” is our attempt to build a short playable narrative experience that uses the unique qualities of Virtual Reality to explore the fragility of human memory, told from the perspective of an elderly man trying to remember the song he used to play on his trumpet as a child. We did this by creating an interactive environment that uses environmental storytelling through objects, audio, and visual effects, and that allows the user to “move” through different eras of their avatar’s life by focusing on specific objects to solve a simple puzzle, evoking the embodied perception of recalling a memory.

Development

    We began our development on “Normal for Norman?” by trying to answer a question we posed for ourselves: “How can we build a short, narrative experience exploring memory (and its fragility) that utilizes the uniqueness of Virtual Reality?” We knew that we wanted to tell an interesting story, and we all felt that Virtual Reality provides a framework for creating incredible experiences, but at first we struggled to decide how to make something that could only exist in VR.

    We started by thinking about the ways in which Virtual Reality could be used to tell a story, and realized that traditional forms of storytelling wouldn’t take full advantage of the tools we had available. First, we made the connection that Virtual Reality itself is a way of experiencing false memories. When we put on a headset and are fully immersed in VR, we are essentially experiencing things that aren’t really happening, even though we can clearly remember performing tasks or activities. This sort of embodied cognition is powerful, and it is at the core of how we use game mechanics and narrative to explore the nature of human memory.

    Our initial thoughts were inspired by our own experiences with friends and family members who have struggled with memory problems, and because of this we wanted to make sure that we did our research to treat the subject with care. Dementia, for example, “is an umbrella term used to refer to a collection of symptoms that can result from a number of different diseases of the brain” 1 such as Alzheimer’s and vascular disorders that affect specific parts of the brain. While we weren’t designing a story that depicts a specific disorder, our hope was that we could raise public awareness and empathy by allowing players to experience something similar themselves.

Audience

    With that said, our potential audience is still quite small. We selected the HTC Vive headset for its ability to create room-scale virtual reality experiences that people can move through naturally, without needing any navigation method other than their own body. Exact sales figures are unknown, but it’s estimated that there are approximately 2.5 million headset owners, compared to approximately 2.5 billion smartphone owners2. And since even a popular Vive game in a similar genre has a comparatively low sell-through rate, our potential reach is very small. How, then, can we hope to raise awareness about memory and disability through our experience?

    Our hope is that as VR technologies improve, low-cost platforms capable of producing the level of immersion necessary to create Place Illusion (defined as the “strong illusion of being in a place in spite of the sure knowledge that you are not there3”) will become more prevalent, allowing more people the opportunity to experience what we have built. Because this is not currently the case, and because we have the luxury of making something without needing to turn a profit, we decided it was more important to tell the story we wanted to tell than to spread a message of awareness to a wide audience. Those who do have access to the technology (now and in the future) should then have a more focused experience, one better able to convey what it is like to have difficulty recalling a memory.

Process

    Because we were still learning the methods for creating virtual environments throughout the term, we focused on building rapid prototypes in Unity and playtesting often, so that we could iterate on our design as quickly as possible. Our initial design, in which an entity living inside someone’s brain solved memory-based puzzles across several rooms tied to deteriorating brain functions, quickly proved to be far beyond the scope of what we felt we could accomplish during the term. However, our physical prototype of the experience (where we acted out what we imagined it would be like) was a good exercise for determining the kinds of natural interactions we hoped to include.

    After we decided to change the narrative to the story of a man trying to recall his experiences playing the trumpet throughout his life, we realized we would need help implementing the VR interactions in our prototypes. We decided to build the prototypes from low-poly objects and primitive shapes, and to rely primarily on 3D and sound assets from the Unity Asset Store and free resources online (with the exception of some unique assets we built ourselves). We initially included both NewtonVR and SteamVR in our first prototypes because we were unsure how each would feel. And because we didn’t all have continuous access to a VR headset, we initially split development into three branches: one that could be tested in virtual reality, one that could be played with a mouse and keyboard (because NewtonVR does not natively support a player controller), and one where we could start collecting and building our art assets for the final build.

Prototyping

    Our first technical prototype was an example room in which an object appeared and disappeared depending on when and where you were looking at it (defined in our code as “gaze”), similar to the effect seen in “Sightline VR.”4 It was our very first attempt at playing with people’s actual memory and perception of their virtual environment, and it would later become the foundation for how the player navigates the narrative.
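
The gaze check at the heart of this prototype is simple geometry: an object counts as "gazed at" when the direction from the camera to the object falls within a small cone around the camera's view axis. In our Unity build this lived in a C# script; the sketch below is an engine-agnostic Python model of the same test, with all names and the cone angle chosen for illustration.

```python
import math

def is_gazed_at(cam_pos, cam_forward, obj_pos, max_angle_deg=15.0):
    """True if obj_pos lies within max_angle_deg of the camera's view axis.

    cam_forward is assumed to be a unit-length direction vector.
    """
    to_obj = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_obj))
    if dist == 0:
        return True  # camera is on top of the object
    # Cosine of the angle between the view direction and the object direction.
    cos_angle = sum(f * v for f, v in zip(cam_forward, to_obj)) / dist
    return cos_angle >= math.cos(math.radians(max_angle_deg))

# An object dead ahead is gazed at; one far off to the side is not.
print(is_gazed_at((0, 0, 0), (0, 0, 1), (0, 0, 5)))   # True
print(is_gazed_at((0, 0, 0), (0, 0, 1), (5, 0, 1)))   # False
```

A per-frame check like this (or an equivalent raycast) is all that is needed to toggle an object's visibility based on whether the player is currently looking at it.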

    In our first playable prototype, we built a simple white room with four primitive objects and a box-like structure meant to test our basic puzzle design. We had significant difficulties with colliders when we introduced our own scripts for combining objects (and thus assembling a trumpet and a memory at once), though we were able to solve this in later builds by reducing the number of combinable objects and changing the way they are constructed.
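
Stripped of the collider physics, the assembly logic we converged on reduces to tracking which required pieces have been attached. The Python sketch below models that state, engine-agnostically; the piece names are hypothetical stand-ins for our trumpet parts.

```python
class Assembly:
    """Tracks which named pieces have been attached to a base object."""

    def __init__(self, required):
        self.required = set(required)
        self.attached = set()

    def try_attach(self, piece):
        # Only pieces that belong to this assembly snap on; others are ignored,
        # which is how reducing the set of combinable objects tamed our bugs.
        if piece in self.required and piece not in self.attached:
            self.attached.add(piece)
            return True
        return False

    @property
    def complete(self):
        return self.attached == self.required

trumpet = Assembly(["mouthpiece", "valves", "bell"])
trumpet.try_attach("mouthpiece")
trumpet.try_attach("bell")
print(trumpet.complete)   # False: the valves are still missing
trumpet.try_attach("valves")
print(trumpet.complete)   # True
```

In the engine, `try_attach` would be called from a trigger-collision handler, with the snap position handled by parenting the piece to the base object.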

    We then playtested this build in class, which gave us valuable feedback on whether our interactions were functioning the way we hoped. And despite the objective not being immediately clear, the act of trying to build the trumpet worked. We even had someone try to take it apart! This told us we were on the right track, but we soon learned that we had to make some significant changes.

Changes

    At this point, we had a solid foundation for a single puzzle, but also a narrative that spanned at least four distinct rooms with their own puzzles and sequences, and multiple distinct branches of code. With only half of our development time remaining, we needed to refocus. So when we sat down to discuss our second playable prototype, we massively restructured how the game was going to work.

    First, we decided that writing our own scripts for controller input was unnecessary when SteamVR ships with its own interaction system. Second, we removed NewtonVR from the project, because it seemed to be fundamentally incompatible with running SteamVR concurrently, and NewtonVR did not have a non-VR mode. Next, we had to settle on a universal structure for assets in our GitHub repository, deciding that each team member would work in a scene named after themselves, to be merged later.

    Lastly, and perhaps most importantly, we decided to change how the game was going to work. We still wanted the player to experience aging and the process of “moving” between different eras of their avatar’s life, but we needed a way to put this all together in a single scene. We discussed ways of making the player the catalyst of change, triggering events tied to specific objects that held significance for the character. We would then make our single puzzle (assembling a trumpet) the entire game by hiding its pieces in different parts of Norman’s memory.

    Our second playable prototype was our attempt to get this “movement” working by triggering a change (initially with a key command, later by using gaze) that replaced the objects in the room with those from another environment in the same scene. It worked surprisingly well as a proof of concept, but it didn’t do much to let the player interact with the environment. The scene that our art team had been working on did, and we needed to find a way to combine the two.
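
The swap mechanic itself amounts to a small state machine: each era owns a set of objects, and jumping to an era deactivates the current set and activates the target set, all within one scene. A minimal Python sketch of that logic (era and object names are hypothetical; in Unity the activation flag would be `GameObject.SetActive`):

```python
class TimelineManager:
    """Swaps which era's objects are active, keeping everything in one scene."""

    def __init__(self, eras):
        # eras maps an era name to the list of objects belonging to it.
        self.eras = eras
        self.current = None

    def _set_active(self, objects, active):
        for obj in objects:
            obj["active"] = active

    def jump_to(self, era):
        if era == self.current:
            return
        if self.current is not None:
            self._set_active(self.eras[self.current], False)
        self._set_active(self.eras[era], True)
        self.current = era

rooms = {
    "childhood": [{"name": "toy trumpet", "active": False}],
    "adulthood": [{"name": "sheet music", "active": False}],
    "old age":   [{"name": "armchair", "active": False}],
}
tm = TimelineManager(rooms)
tm.jump_to("childhood")
tm.jump_to("old age")
print(rooms["childhood"][0]["active"], rooms["old age"][0]["active"])  # False True
```

Whether the trigger is a key press or a completed gaze check, both paths end in the same `jump_to` call, which is what made swapping the input method easy later on.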

Putting it all together

    Our breakthrough moment came when we merged our different branches together before our public playtest. In previous weeks, we had been working within the same GitHub project while building individual scenes that shared assets with one another. We spent nearly twelve hours building a “Main” scene in our Unity project, intended to be the scene we would all work in for the rest of development.

    We started by cloning the environment that our art team had built alongside the playable puzzle prototypes. We then used a series of empty game objects, renamed as “managers” of specific functions (i.e. Game, Audio, and Lighting), to collect the script functions from our previous prototypes and apply them properly to the prefabs, game objects, and elements of our new scene. We also adapted the structure of our second prototype (three rooms whose objects are swapped based on triggers) to the more complicated environment we now had, duplicating the room and beginning to fill each copy with more diverse objects and assets. These would become the “timelines” the player moves through.
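
The "manager" pattern here is simply one named object per responsibility, reachable from anywhere in the scene. Sketched engine-agnostically in Python (class and clip names are hypothetical; in Unity each manager was an empty GameObject holding the relevant scripts):

```python
class Manager:
    """Base for a named, singleton-style manager (Game, Audio, Lighting)."""

    registry = {}

    def __init__(self, name):
        self.name = name
        Manager.registry[name] = self

    @classmethod
    def get(cls, name):
        # Any script in the scene can look a manager up by name.
        return cls.registry[name]

class AudioManager(Manager):
    def __init__(self):
        super().__init__("Audio")
        self.playing = []

    def play(self, clip):
        self.playing.append(clip)

AudioManager()
Manager.get("Audio").play("trumpet_theme")
print(Manager.get("Audio").playing)  # ['trumpet_theme']
```

Centralizing each concern this way is what let us lift scripts out of three separate prototype scenes and wire them into one "Main" scene without hunting down scattered references.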

    Seeing the script that Tommy had created (which had previously only worked with a few simple objects) move entire rooms full of objects seamlessly and subtly was a revelation. Following that, we incorporated Ben’s scripts for haptic feedback and for focusing on specific objects to “move” the player character backwards and forwards in time. This was intended as a diegetic interface that lets the player select game states naturally, using narratively significant objects encountered during the playthrough. We had made significant progress towards our original goal, and we couldn’t wait to get people to play it.
