Show You is an interactive installation in the form of a darkroom, where one gets to virtually “develop” one of their own Instagram photos in a developer tray.
Here is video documentation of Show You.

“Show You” – Darkroom Simulation with Instagram – Thesis Documentation from Zena Koo on Vimeo.

The amount the user moves the piece of paper in the clear-bottomed tray controls the rate at which the randomly chosen personal Instagram photo develops. I chose to select the photographs at random because I was hoping to mimic that magic moment in the real darkroom developing process when the photographer first sees their image start to appear on the print in the tray — that moment of delight and surprise was something I was very keen to evoke.

How? I am using the Instagram API with its Python wrapper to access the user’s images through the OAuth process. From there I source two randomly chosen images from their recent feed (the last [up to] 300 photos). Max/Jitter then loads the two photos I just sourced, while a webcam motion-tracks the photo in the water, watching its movement and luminosity variance. This controls the rate at which the image appears to develop (fade in, fade out).

The same image that has just developed appears in a hanging frame inside the booth for a brief moment at the peak of its development, which illustrates a few points. The image stays in the frame only briefly as a comment on what our mobile-phone photos are these days: photos in our phones that we take quickly and easily and often forget about. The frame references how rarely we print out the many images we take, and how seldom we frame them or put them into albums. We use our phones and Instagram’s filters to reference the past and our generational brand of nostalgia, but what will our future generations’ nostalgia look like when all of our photos live in our phones and storage devices? (Decidedly unromantic, IMO.) In making the whole immersive environment of a darkroom simulation, I am also referencing my own nostalgia for the darkroom development process, which has been largely abandoned for some time now.
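The sourcing step can be sketched in Python. This is a hedged sketch, not my actual script: the commented-out fetch follows the python-instagram wrapper’s documented `user_recent_media` interface (and assumes the OAuth handshake has already produced an access token), and `fetch_recent_urls` and `pick_two` are hypothetical helper names for illustration.

```python
import random

# Hypothetical sketch of fetching, assuming the python-instagram wrapper
# and a completed OAuth handshake. Treat as pseudocode for the flow:
#
# from instagram.client import InstagramAPI
# api = InstagramAPI(access_token=access_token, client_secret=client_secret)
#
# def fetch_recent_urls(api, limit=300):
#     """Page through the user's recent feed, collecting up to `limit` URLs."""
#     urls = []
#     media, next_page = api.user_recent_media(count=33)
#     while media and len(urls) < limit:
#         urls += [m.images["standard_resolution"].url for m in media]
#         if not next_page:
#             break
#         media, next_page = api.user_recent_media(with_next_url=next_page)
#     return urls[:limit]

def pick_two(urls, rng=random):
    """Choose two distinct photos at random -- the surprise the piece relies on."""
    if len(urls) < 2:
        raise ValueError("need at least two photos to develop")
    return rng.sample(urls, 2)
```

The random choice is what preserves that darkroom surprise: the visitor never knows which of their photos will surface in the tray.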

In short, I’m trying to slow down the process of our quick and dirty mobile-phone photography by having you experience a meditative darkroom process, and I wanted to create a personal immersive environment where that could happen. The piece got a great reception at the Spring Show 2013, though you either “got it” or you didn’t, which is all OK. It took a while to pare the concept down from a full-scale personalizable exhibition to what this iteration has become. If I were to install this in a more public space, I would make some technical tweaks along with aesthetic and user-experience changes, but I’m happy with the way it turned out and the experience it offers.

Here are some screenshots of my Max/Jitter patch and my Python script.

working with the Python wrapper of Instagram API

This patch loads the random images from the user's account and motion tracks the paper's movement. That movement is scaled to load a picture into one display according to how much the paper is being moved, and then into the next display (the frame) for a brief amount of time.
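The Max patch itself doesn’t reproduce well as text, but its core mapping (agitation in the tray drives the develop rate) can be sketched as frame differencing plus a clamped accumulator. This is a Python/NumPy analogue of the idea, not the actual Jitter patch; the `gain` and `decay` values are illustrative, not the patch’s real scaling.

```python
import numpy as np

def motion_amount(prev_frame, frame):
    """Mean absolute difference between two grayscale frames (0..255 arrays):
    a rough stand-in for the Jitter motion-tracking step."""
    return float(np.abs(frame.astype(float) - prev_frame.astype(float)).mean())

def update_opacity(opacity, motion, gain=0.002, decay=0.01):
    """Scale motion into a 'develop' rate: agitating the paper fades the
    image in, and stillness lets it fade back out, clamped to [0, 1]."""
    opacity += gain * motion - decay
    return min(1.0, max(0.0, opacity))
```

Run per webcam frame, this gives the fade-in/fade-out behavior described above: a still tray slowly undevelops the image, while vigorous movement pushes it toward full opacity.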


Here is some documentation of the process we went through throughout the semester.

Internal Emotional Investigation – Product Poetry – Fall 2012 – assignment 1

Product Expert Interview & User Observation_PP

Product Audit and Analysis_Sun Protection

Oct. 3 Presentation_PP

Oct. 18 presentation_Design Exploration 2

Nov. 21 Presentation_Getting Serious about Materials

Nov 28_Prototype Explanation

Dec. 5_Prototype Test

Dec12_Final Presentation_team sunshine

Tiffany Chou and I presented our project from the Fun Theory class at the ITP Winter Show this past Sunday (Tiffany) and Monday (me). We came up with the name “Soap & Mirrors” after wanting to change the name from “Mirror, Mirror” into something more appropriate for what this project evolved into.

Here are some pictures of the setup.

When visitors came up to the display, I walked them through a user scenario like this: say you come out of the bathroom stall and are about to walk past the sink without stopping. The floor sensors detect that you are leaving and trigger a voice to say something like, “Don’t be gross. Wash your hands with soap.” When you then go to wash your hands, pressing the soap button triggers a video projection of various content (in this case, cute cat and puppy clips) to appear on the mirror in front of you as you wash. The voice also encourages you to keep washing with humorous lines like, “Ooh, I like it when you scrub like that!” The video plays for the length of time it takes to properly wash one’s hands (~15 seconds, according to our research), entertaining and rewarding you all the while (not to mention letting you see your own reflection, playing on your vanity). At the end of the video, the voice congratulates you with “You’re a tidy person,” or something of the like.
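The interaction above boils down to a small state machine: the floor sensor fires a nag, the soap button starts a roughly 15-second video with encouragement, and the end of the timer triggers the congratulation. A minimal sketch, with hypothetical method names standing in for the actual floor sensor, button, and projector hardware:

```python
WASH_SECONDS = 15  # our researched proper hand-washing duration

class SoapAndMirrors:
    """Toy model of the Soap & Mirrors interaction flow (names are illustrative)."""

    def __init__(self):
        self.state = "idle"
        self.wash_timer = 0

    def on_floor_sensor_leaving(self):
        """Visitor heads for the exit without washing: nag them."""
        if self.state == "idle":
            return "Don't be gross. Wash your hands with soap."
        return None

    def on_soap_button(self):
        """A soap press starts the video projection and the wash timer."""
        self.state = "washing"
        self.wash_timer = WASH_SECONDS
        return "play_video"

    def tick(self, seconds=1):
        """Advance the timer once per second; congratulate when time is up."""
        if self.state != "washing":
            return None
        self.wash_timer -= seconds
        if self.wash_timer <= 0:
            self.state = "idle"
            return "You're a tidy person."
        return None
```

In the installation the equivalent logic lived in the physical-computing setup rather than a loop like this, but the event flow was the same.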

We wanted to promote better health and hygiene practices in a fun, rewarding way. We only lightly touched on the “shame factor” by including the word “gross” in that first comment. Overall, though, we stayed positive and took the advice of Katherine Dillon and our classmates not to venture down the guilt/shame lane, since that would be fun only for observers, not for the user. We wanted to incentivize behavior change in a fun way.

This concept could be adapted to different venues by changing the content of the videos, the output onto the mirrors (an opaque effect or a face-tracking effect), and the voice (a human voice instead of the automated one we used).

To further develop this concept and go deeper into the theory behind it, we could figure out a less blatant way of trying to change people’s sanitary behavior and take a more psychologically subtle approach. A visitor suggested this to me, and I really appreciated the feedback. I also got feedback that the voice should be a real person to give the experience a more human, immediate quality, and in retrospect, I agree.

Fun theory class 4 brief presentation.

Some very informal thoughts: should we change the idea to a soap-button-triggered pinball machine? It would have to be a ball-drop system that would only work if the person was vigorously washing their hands(?). Motion-tracking the hands would involve a camera in the bathroom, though. Too complicated in physical-computing terms, in this timeframe. Maybe it should be a Processing sketch of randomly set ball movements, and the mirror would not show until the sketch was over and the person was done vigorously washing their hands…(?) Motion-tracking would still have to be involved, though, to ensure that people were washing their hands for long enough and vigorously enough.