Campfire Tales Development Diary – Part 2


“Campfire Tales” is my upcoming project that utilizes procedural generation, room-scale environmental effects, and performance elements to make a unique storytelling experience that can’t be found anywhere else. This week: I detail the first prototypes of the game experience and fire effects!


Implementation and Early Technical Prototypes

Twitter Bot @CampfireTale


Screenshot of the landing page for the @CampfireTale Twitter bot account.

     As I’m fairly new to JavaScript development, I decided to make a Twitter bot both to learn how to build a Node.js project that interacts with an external API and to have a place to test generative text in tweet-sized chunks, along with user interactions. I followed Daniel Shiffman’s excellent video tutorial from “The Coding Train” to set up basic tweet interactions while the program is running [source]. To do so, I had to apply for a Twitter developer account. For now, @CampfireTale only has a few basic test posts, including a generated image and an automatic response. Going forward, I plan to use the account with “Cheap Bots, Done Quick!”, the Tracery-based Twitter bot generator, to quickly iterate on my grammar design before attempting to integrate the Tracery library directly into my existing code. This way, I can keep iterating without spending valuable development time implementing my own solution in code that doesn’t necessarily fit into the core product.
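
To give a sense of the setup, here is a minimal sketch of posting a test tweet with the “twit” package used in the Coding Train tutorial. The credentials are placeholders that would come from the Twitter developer account, and the status text is just an example rather than the bot’s actual output.

```javascript
// Minimal test-post sketch using the "twit" npm package.
// All four credentials are placeholders from the Twitter developer account.
const Twit = require('twit');

const T = new Twit({
  consumer_key: 'CONSUMER_KEY',
  consumer_secret: 'CONSUMER_SECRET',
  access_token: 'ACCESS_TOKEN',
  access_token_secret: 'ACCESS_TOKEN_SECRET'
});

// Post a single test tweet, similar to the early @CampfireTale test posts.
T.post('statuses/update', { status: 'Gather round the fire...' }, (err, data) => {
  if (err) {
    console.error('Tweet failed:', err);
  } else {
    console.log('Tweeted:', data.text);
  }
});
```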

Processing, Tracery, and Node Prototype with User Interaction


Demonstrating text entry in an early prototype: string values are added to the Tracery grammar so they can be reincorporated into future generations.

     My first completed technical prototype started as an exercise in getting Improv working with p5.js, so that I could output the text Improv generated to the screen. I began by following the example in the Improv tutorial to generate text in a terminal window, and then wrote a sample program that generated a variable from an Improv object in a JavaScript program. I got it running using a simple local HTTP server in Python to check the results, but setting that up every time I wanted to make a change and view the results in the browser console became very inefficient.

     By using the node package “budō” I was able to bundle the local files and serve them locally for testing, and it should also handle packaging them for deployment in a live environment [source]. I used p5.js to display text on the screen and to add a text input box for players. For usability, I didn’t want more than one point of entry on the screen at any given time. This is when I realized that Improv may not be the right tool for the job, given the need to tag objects that are added to the grammar’s structure.
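
For reference, this is roughly how budō can be used programmatically to serve a sketch locally with live reload; the entry file name and port here are placeholders rather than the prototype’s actual values.

```javascript
// Serve the prototype locally with live reload via budo's programmatic API.
const budo = require('budo');

budo('sketch.js', {
  live: true, // reload the browser when source files change
  port: 8000
}).on('connect', (ev) => {
  console.log('Dev server running at %s', ev.uri);
});
```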

     I then decided to test the node package for Tracery (tracery-grammar), which also requires including jQuery in order to function. Here, I was able to push inputted values into the array of character values for the story, increasing the possibility space for potential characters with each new entry. Each time the page is refreshed, the short story is regenerated from all of the values contained within the grammar.
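
The core of that interaction can be sketched roughly as below. The rule names (“origin”, “character”) and sample phrases are illustrative only, not the project’s actual grammar, and this sketch simply rebuilds the grammar from the raw rules on each generation rather than mirroring my exact code.

```javascript
// Sketch of the core loop: player input grows the "character" rule set,
// and each refresh regenerates the story from the expanded grammar.
const tracery = require('tracery-grammar');

const rawGrammar = {
  origin: ['#character# wandered up to the fire and told a tale.'],
  character: ['a tired traveler', 'an old ranger']
};

// Push an inputted value into the array of possible characters.
function addCharacter(input) {
  rawGrammar.character.push(input);
}

// Rebuild the grammar so new entries are included in the next generation.
function tellStory() {
  const grammar = tracery.createGrammar(rawGrammar);
  grammar.addModifiers(tracery.baseEngModifiers);
  return grammar.flatten('#origin#');
}

addCharacter('a kid with a flashlight under their chin');
console.log(tellStory());
```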

     It was a very basic interaction, but it demonstrated the functionality that I hope will sit at the core of the experience. However, I still have some major problems to solve in future iterations. I can rely on the performer to filter entries at the exhibition, but there is no easy way to filter language automatically (though this may be a good use case for RiTaJS), and I’m not certain that I should restrict the language in a personal copy of the game at all. I also need to find a way to connect the generated grammars so that I can create a narrative, and to track inputted variables over the course of an individual story.

Web Hosting with GitHub Pages


     My eventual goal for “Campfire Tales” is to have it widely available on the web. Once I had completed my initial technical prototype, I needed a way to publish it for people to access. My first test was a p5.js sketch that drew text along a curve over a static image (credit to oksmith on Openclipart), which I pushed to GitHub and published as a GitHub Pages site. The sketch was visible, but had issues displaying across a wide variety of devices. So I took a screenshot of the sketch and used it to deploy a static teaser website at campfiretales.info, where the final version of the game will eventually take the place of the screenshot in the canvas.
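
For anyone curious, the text-on-a-curve idea looks roughly like the p5.js sketch below. The image path, arc values, and title string are placeholders; the real sketch used the oksmith clip art and different numbers.

```javascript
// Rough p5.js sketch: draw a title along an arc over a static background image.
let bg;
const title = 'Campfire Tales';

function preload() {
  bg = loadImage('assets/campfire.png'); // placeholder path
}

function setup() {
  createCanvas(640, 480);
  textSize(32);
  textAlign(CENTER, CENTER);
  noLoop(); // static image, so only draw once
}

function draw() {
  image(bg, 0, 0, width, height);

  // Rotate each character around a shared center point to bend the text.
  const radius = 180;
  const arcSpan = PI / 2; // total angle the title occupies
  translate(width / 2, height / 2 + radius);
  for (let i = 0; i < title.length; i++) {
    const angle = -arcSpan / 2 + (arcSpan * i) / (title.length - 1);
    push();
    rotate(angle);
    translate(0, -radius);
    text(title.charAt(i), 0, 0);
    pop();
  }
}
```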

Tuya Smart Light Integration

     My original intention was to use Processing to send data over a serial port to an Arduino or other microcontroller for environmental lighting or other effects. While I may still use both for mechanical interactions, I decided to explore smart products for environmental lighting. This was both a chance to learn something new and a forward-thinking move, since the work could be kept and implemented in future iterations to allow users with smart lightbulbs to experience some of the designed lighting effects from the exhibition version of “Campfire Tales.”

     To keep costs down, my first test used a generic Tuya-compatible smart light [source]. One major advantage was the availability of a node package for controlling Tuya devices locally [source]. Getting the API to function properly within my main program was significantly more difficult. Once I had decoded HTTPS traffic to retrieve the local key, local IP address, and device ID for my test bulb, I did manage to get the test program to turn the light on and off when a key was pressed. Unfortunately, there appears to be no documentation for writing color values to the bulb (there is a dataset that appears to contain the values, but they refer to a state locked behind developer access to the API, which I haven’t received at the time of this writing).
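
My on/off test looked something like the sketch below, using the tuyapi package. The device ID, local key, and IP stand in for the values recovered from the HTTPS traffic, and the exact option names may differ slightly between tuyapi versions.

```javascript
// Toggle a Tuya bulb on/off from the terminal with a keypress.
const TuyAPI = require('tuyapi');

const bulb = new TuyAPI({
  id: 'DEVICE_ID',   // placeholder: device ID recovered from HTTPS traffic
  key: 'LOCAL_KEY',  // placeholder: local key for the bulb
  ip: '192.168.1.50' // placeholder: bulb's local IP address
});

async function setPower(on) {
  await bulb.find();    // locate the device on the local network
  await bulb.connect();
  await bulb.set({ set: on }); // on/off switch (dps 1 on most bulbs)
  bulb.disconnect();
}

// Flip the bulb's state on any keypress.
let state = false;
process.stdin.setRawMode(true);
process.stdin.resume();
process.stdin.on('data', () => {
  state = !state;
  setPower(state).catch(console.error);
});
```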

     Color changing appears to be possible by setting up an open-source controller called “Homebridge,” but it makes less sense to keep developing for a platform that far fewer people have access to, and one that requires a significant amount of user setup, which would greatly reduce my ability to share these features with others in the future. Additionally, I likely won’t be able to access the external internet from the venue where this will be installed, so a platform that requires an external connection for setup and control isn’t suitable. I still plan to use colored lighting for the room, and specifically for the centerpiece campfire, which I would like the program to control.

Fire Effect Prototype

The original inspiration for this effect was a YouTube video by the Daniels Wood Land Show detailing how to make an “Insane Fake Fire Special Effect” using water vapor created by a sonic ionizer, halogen lights, a Tupperware container, and an old paint can. Because I’ll be setting “Campfire Tales” up in an indoor environment with limited ventilation, it’s critical that I build something that is both safe and not actually fire.

I need to build something similar to the effect shown in that video and scale it up into a centerpiece for the exhibition display. My first prototype includes an ultrasonic mist maker [source], a plastic shoe box, a plastic bin that I cut holes into, and two portable fans. The mist maker is essentially a piezo element vibrating at a specific frequency, which splits the water into microscopic droplets that form a mist.

Initial tests show that the mist maker works well at generating a large volume of airborne particles, but they tend to sit in a layer about 2 to 3 inches above the water level. To introduce movement, I added the two portable fans and held the light above the lip of the bin. The effect is close, but it will require a significant amount of additional development to get right. The plan is to find a way to “chamber” the mist so that it can be blown (potentially from multiple sources) into an opening covered by burnt wood, giving the impression of a campfire. I’ll be adding cade and birch tar essential oils to the mixture to suggest a smoky, burning-wood smell, and I can build a stone circle to hide the functioning electronics and protect them from wayward visitors. Lastly, I’ll integrate one of the smart light solutions I’m working on for environmental lighting to give the campfire an otherworldly flavor with unnatural colors.

Upcoming

Next week, I’ll detail my process for iterating on the design following feedback from my peers, along with my writing process for the tales themselves. Stay tuned!
