Physical Computing: Week 2 Observation Assignment

Here are my notes from my observation assignment from week 2 of Intro to Physical Computing at ITP.

Observation Assignment

The goal of this assignment is to develop the habit of detailed observation of what people physically do when they use the kinds of technologies we’re developing. In order to develop good physical interfaces, you need to know how people use existing ones. To learn this, it helps to observe carefully, and to limit your assumptions as to what the person’s intentions are while you’re observing.

Counting Daily Uses

To begin with, take a one-hour walk or ride around the city. Try to travel as far as you can from your start and get back within the hour; this will give you more variety. Take note of every time you see a person using a digital device. This could be anything from buying and using a Metrocard on the subway to playing video games in an arcade to making cell phone calls to using an ATM to swiping an ID at the gym. For each action, take note of:

* location and time of day
* apparent intent of the actor
* time taken for the action
* number of people involved
* motor skills needed (hands, legs, seeing, hearing, etc)

Collect your notes on your blog. Do this in pairs, with one person observing and the other keeping notes. Alternate roles as well.

The goal of this stage is to notice how many everyday technology interactions we experience that we’re largely unconscious of, and what it takes to do them. In many cases, the success of these transactions depends on how little attention we have to pay to complete them. The goal of most of these moments is not to use a technology, but to reach some other goal.

We chose to walk around Greenwich Village, in the immediate vicinity of NYU.  We noticed that the overwhelming majority of human-technology interactions we observed on the streets of NY involved mobile devices (cellphones, iPhones, Blackberries) and portable music players (iPods, CD players, etc).  We also observed a large degree of multitasking: people listening to music or talking/texting on their phones while walking/cycling down the street, a process that involves quite a bit of coordination between the senses and human motor functions.

For listening to music while walking the streets of NY, the intended purpose seems to be escape: a way of isolating oneself from the excess of sensory stimuli.  For people on their cellphones, however, the intended purpose seems to be connection with others, even when alone in the streets.  Technology thus allows us to be either isolated and alone (with headphones and an iPod) or hyperconnected to others we know (cellphones) when we are in public space.  Headphones and portable music players act as an invisible wall that shields us from others, while cellphones and their ilk act as extensions of our voices and ears that allow us to reach out to those not in our physical vicinity.

This projection of the senses is both freeing and constricting.  By being able to multitask, for example taking a walk while talking on your phone, one is freed from the physical constraint of having to be face to face to communicate with others.  But at the same time, the tether of social cohesion and its pressures is extended.  One is not really “away” if one is reachable by cellphone or Blackberry.  Herein lies the paradox of technology: we are simultaneously liberated and constrained by it.

Well, enough holding forth from me.  The raw data we collected follows below.


Response to Crawford’s The Art of Interactive Design (Chapters 1 & 2)

This is my response to chapters 1 & 2 of Chris Crawford’s The Art of Interactive Design, the reading assignment for week 1 of Physical Computing at ITP.

Interactivity is a fuzzy term that is hard to define.  Crawford says that “the term interactivity is overused and underunderstood.”  As a buzzword, it has been applied to things as absurd as a rug for children and even shampoo!  Many things/activities claiming to be interactive are really not.

While he does not claim to have the final definition of interactivity, Crawford does propose that interactivity be defined “in terms of a conversation: a cyclic process in which two actors alternately listen, think, and speak.”  If this is the case, then the ultimate interactive activity is direct human social interaction.  But other activities that humans engage in with objects can also be interactive, to the degree that the object listens, thinks, and responds in turn.

Participation is not interactivity.  Movies, plays, music, and dance are, for the most part, not interactive.

Also, Crawford says that interactivity is not a Boolean property, meaning it is not a binary either/or situation.  There are varying degrees of interactivity.

So maybe interactivity is like porn: controversial, misunderstood, difficult to define, but you know it when you see it.  Like porn, interactivity has varying degrees.  Softcore-Hardcore.  Low interactivity-High interactivity.

Perhaps the definition of interactivity itself should be interactive.  Ok, that’s pretentiously meta.  But if interactivity is such a virtue, then a conversation is essential to its definition and application.

So why bother with this interactivity stuff?

Crawford declares that “interactive communication is superior to conventional, one-way communication” and that “interactivity is the computer’s intrinsic competitive advantage.”

Interactivity is both old and new.  It is hardwired into mammalian behavior as well as being a trait of modern computers.  Interactivity is a conversational process that helps humans and other mammals learn through play.  It is therefore something that we (people/animals) have in common with computers.  So as computers get more interactive, does that mean they too are able to learn, and become more similar to humans/animals?

Crawford promotes interactivity as a new and exciting field for artists to explore.  He also quotes a Chinese proverb, “I hear and I forget; I see and I remember; I do and I understand.”  I have heard and seen the gospel of interactivity, now it is time for me to do before I truly understand.

Physical Computing: Week 1 Lab

This is my documentation of my first week of Physical Computing at ITP.  What is physical computing?  According to the syllabus:

Physical Computing is an approach to learning how humans communicate through computers that starts by considering how humans express themselves physically. In this course, we take the human body as a given, and attempt to design computing applications within the limits of its expression.

To realize this goal, you’ll learn how a computer converts the changes in energy given off by our bodies (in the form of sound, light, motion, and other forms) into changing electronic signals that it can read and interpret. You’ll learn about the sensors that do this, and about very simple computers called microcontrollers that read sensors and convert their output into data. Finally, you’ll learn how microcontrollers communicate with other computers.

Physical computing takes a hands-on approach, which means that you spend a lot of time building circuits, soldering, writing programs, building structures to hold sensors and controls, and figuring out how best to make all of these things relate to a person’s expression.

For me, physical computing means building things with wires, circuits, sensors, switches and microcontrollers.  We are using Arduino microcontroller boards in class to control our creations.  Our lab this week involved building a simple digital input/output system that makes two LEDs alternately flash when you press the switch.
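The logic of the lab is really just reading the switch and driving the two LEDs in opposite states. A minimal sketch of it might look like the following (the pin numbers are hypothetical, not from the lab instructions, and the stub functions at the top stand in for the Arduino core so the logic can be checked off the board; a real sketch would include Arduino.h and drop them):

```cpp
// --- Stand-ins for the Arduino core, so this can be checked off-hardware.
// --- On a real board these come from the Arduino environment itself.
const int HIGH = 1, LOW = 0, INPUT = 0, OUTPUT = 1;
int pinState[20];                                   // simulated pin values
void pinMode(int pin, int mode) { (void)pin; (void)mode; }
void digitalWrite(int pin, int value) { pinState[pin] = value; }
int digitalRead(int pin) { return pinState[pin]; }

// Hypothetical wiring: switch on pin 2, red LED on pin 3, yellow LED on pin 4.
const int switchPin = 2;
const int redPin = 3;
const int yellowPin = 4;

void setup() {
  pinMode(switchPin, INPUT);
  pinMode(redPin, OUTPUT);
  pinMode(yellowPin, OUTPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {   // switch pressed:
    digitalWrite(redPin, HIGH);           //   red on,
    digitalWrite(yellowPin, LOW);         //   yellow off
  } else {                                // switch released:
    digitalWrite(redPin, LOW);            //   red off,
    digitalWrite(yellowPin, HIGH);        //   yellow on
  }
}
```

There is no main() because an Arduino sketch doesn’t define one; the Arduino core calls setup() once and then loop() forever, which is why pressing and releasing the switch makes the LEDs alternate.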

Besides a photosensor theremin that I built in high school based on some plans I got off the internet, I have never really done any physical computing stuff before, hence my perplexed look in the photo above.  I basically just set about recreating the plans in the lab instructions while familiarizing myself with working with the materials.  The color-coded wires really help.  Red is for power, black is for ground, and the light blue is for input/output signals.

Soldering was a little bit scary for me.  We were supposed to solder the switch to two of the light blue wires.  First, I wrapped the wires onto the switch, and then put everything in the helping hands.  Then I melted some solder onto the iron to coat it, and then put the soldering iron onto the connection along with a bit of solder and melted it all together.  It only took a couple seconds.  My soldering job was ugly, but nothing exploded and the project worked, so I guess practice will make perfect.

Reading the resistors was a little bit tricky.  Resistors have these little colored stripe patterns on them that tell you how many ohms they are.  They all kind of looked the same to me.  I really had to squint to see the patterns.  And I just got my eyes checked, so I don’t need new glasses.  They are just really small!  Note to self:  I need to get a magnifying glass.
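Since the stripes were so hard to read, I wrote the standard color code out as a little C++ table to double-check myself. This is just an illustrative sketch of the usual 4-band code (two significant digits and a multiplier, ignoring the tolerance band); the function names are my own:

```cpp
#include <map>
#include <string>

// Value of one color band, per the standard resistor color code:
// black=0, brown=1, red=2, orange=3, yellow=4,
// green=5, blue=6, violet=7, grey=8, white=9.
int bandDigit(const std::string& color) {
    static const std::map<std::string, int> digits = {
        {"black", 0}, {"brown", 1}, {"red", 2},   {"orange", 3},
        {"yellow", 4}, {"green", 5}, {"blue", 6}, {"violet", 7},
        {"grey", 8},  {"white", 9}};
    return digits.at(color);
}

// Decode the first three bands: two digits, then a power-of-ten multiplier.
// For example, brown-black-red reads as 1, 0, x10^2 = 1000 ohms (1k).
long resistorOhms(const std::string& band1, const std::string& band2,
                  const std::string& multiplier) {
    long value = bandDigit(band1) * 10 + bandDigit(band2);
    long factor = 1;
    for (int i = 0; i < bandDigit(multiplier); ++i) factor *= 10;
    return value * factor;
}
```

So yellow-violet-red works out to 47 × 100 = 4700 ohms, which is a lot easier for me to verify in code than by squinting at the stripes.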

After I thought I had everything set up, the red light was flashing, but the yellow light didn’t do anything.  Careful inspection revealed that I had one end of the yellow LED plugged into the wrong hole.  Once again, a magnifying glass would have helped.  Anyway, finally it worked!

Here is my successful project: