ICM Final Project Proposal: Text2Drum

For my ICM final project, which I am calling Text2Drum, I seek to explore the rhythmic qualities of language. Text2Drum will read text from a file and convert each letter into the drum or percussion sound I assign to that letter of the alphabet. Spaces and punctuation marks will be interpreted as musical rests, that is, periods of silence. The program will also display the text on screen while playing back the drum samples. In using Text2Drum to generate percussion patterns, I aim to translate text into a new musical language and to reclaim the rhythmic nature of language found in oral communication that is lost in writing.

I also hope to make a second, interactive version of Text2Drum, with an interface that lets a user type in text for Text2Drum to convert into a rhythmic pattern.
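Here is a very rough Processing sketch of how the letter-to-drum mapping might work, using the Minim library; the sample file names, input file, and tempo are just placeholders at this point:

import ddf.minim.*;

Minim minim;
AudioSample[] drums;     // one percussion sample per letter of the alphabet
String inputText;        // the text to "perform"
int index = 0;           // current character position
int stepMillis = 250;    // placeholder tempo: one character every 250 ms
int lastStep = 0;

void setup() {
  size(600, 200);
  fill(255);
  minim = new Minim(this);
  drums = new AudioSample[26];
  for (int i = 0; i < 26; i++) {
    // drum00.wav ... drum25.wav are placeholder file names in the sketch's data folder
    drums[i] = minim.loadSample("drum" + nf(i, 2) + ".wav");
  }
  inputText = join(loadStrings("input.txt"), " ");  // placeholder input file
}

void draw() {
  background(0);
  // show the text so far while it is being "played"
  text(inputText.substring(0, index), 20, height/2);
  if (millis() - lastStep > stepMillis && index < inputText.length()) {
    char c = Character.toLowerCase(inputText.charAt(index));
    if (c >= 'a' && c <= 'z') {
      drums[c - 'a'].trigger();  // each letter triggers its assigned drum
    }
    // spaces and punctuation fall through silently, i.e. a rest
    index++;
    lastStep = millis();
  }
}

The interactive version would essentially swap the file reading for keyTyped(), triggering a drum each time the user types a letter.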

head(banger)phones

My final project proposal for Physical Computing at ITP:

head(banger)phones – a personal musical device

A set of headphones rigged with an accelerometer that detects the motion of the user’s head and converts that motion data into MIDI via an Arduino microcontroller. The MIDI then triggers percussion sounds in a software synth on the computer, which feeds the audio signal back to the user wearing the headphones.
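Roughly, the software side might look something like this Processing sketch, which uses Java’s built-in MIDI classes; the serial format, threshold values, and drum note are placeholders, and the Arduino/accelerometer code isn’t shown here:

import processing.serial.*;
import javax.sound.midi.*;

Serial myPort;
Receiver midiOut;        // default system MIDI output; could be routed into a soft synth
int threshold = 600;     // placeholder: raw accelerometer value that counts as a "headbang"
boolean armed = true;    // simple re-trigger guard

void setup() {
  try {
    midiOut = MidiSystem.getReceiver();  // default MIDI receiver
  } catch (MidiUnavailableException e) {
    e.printStackTrace();
  }
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
}

void draw() {
  // nothing to draw; all the action happens in serialEvent()
}

void serialEvent(Serial p) {
  // assume the Arduino prints one accelerometer reading per line
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int accel = int(trim(line));
  if (armed && accel > threshold) {
    sendDrumHit(36, 110);          // GM kick drum, hard velocity
    armed = false;
  } else if (accel < threshold - 100) {
    armed = true;                  // re-arm once the head comes back up
  }
}

void sendDrumHit(int note, int velocity) {
  try {
    ShortMessage on = new ShortMessage();
    on.setMessage(ShortMessage.NOTE_ON, 9, note, velocity);  // channel 10 is the GM drum channel
    midiOut.send(on, -1);
  } catch (InvalidMidiDataException e) {
    e.printStackTrace();
  }
}

A real version would probably look at all three accelerometer axes and map different gestures to different drums, but the basic flow is the same: motion data in over serial, MIDI note-ons out to the synth.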

Here is a video of my proposal to my class.


head(banger)phones from lee-sean on Vimeo.

Jump (The Bailout Bash)

[Photo: IMG_4349]

Elizabeth and I teamed up again to work on our audio pieces for Comm Lab this week. Although we did record audio from the streets of New York together for last week’s assignment, we decided not to use any of it and instead chose a pressing socio-political theme: the current economic “crisis.” I took photos and recorded audio at an anti-bailout protest on Wall Street last month. We were particularly drawn to some snippets from a speech given by the charismatic-looking gentleman pictured above. We also used samples of other protesters chanting slogans and put everything over a beat that I composed. By some amazing coincidence, almost all of our samples fit over the beat at 109 BPM. Only one sample of chanting protesters had to be slightly stretched in Audacity to fit the tempo.

Download the MP3 or the AIFF.

The danceable audio anger that resulted from our musical collaborations reminds me a little bit of the Muppets doing N.W.A. And for further musical explorations relating to pigs, I suggest NIN’s Piggy and March of the Pigs.

Elizabeth describes more of our process in her blog:

We used several applications to make the piece. We cut up the audio in Fission, a commercial software for simple cutting. It is really usable. The beats were composed in iDrum…. We used Audacity to change the length of some of the audio pieces so that they all had the same beat. Then we assembled the song in Garage Band.

[Photo: IMG_4338]

Audio Sketch 1: Subway, Streets & Stars

Today in Comm Lab, we covered how to edit audio using Audacity and GarageBand, both of which I have worked with before, but it was good to get a refresher. This week’s assignment is to create a two-minute audio piece using audio we recorded ourselves or found sounds.

Besides the soundtrack to Herbivores, I haven’t done much music lately, especially since the last time Hepnova made music together was almost a year and a half ago. Later this week I’m scheduled to work with Elizabeth, whom I worked with earlier on the Herbivores animation, but I just couldn’t wait to get back into making music, so I went ahead and composed my own piece today. I will still make another track with Elizabeth later this week.

This track is called Subway, Streets & Stars. It is composed of recordings I made in the subway and streets of New York City as well as an audio sample of stars (the ones in space, not the ones in Hollywood) from the BBC. I used Fission to chop up samples, iDrum to create rhythmic loops, and GarageBand for putting everything together.

Click here for the uncompressed AIFF version.

P-Comp: Sine Wave of Doom AKA the Poser Theremin

In this week’s lab, we learned about Serial Duplex and continued to learn more about how to get Arduino and Processing to talk to each other.

In my spin on the lab, I hooked up two potentiometers through the Arduino to control the frequency and panning of a sine wave in Processing, manifested both on screen and as audio. I don’t know why I chose red and blue for the sine waves. Maybe I’ve been watching too much TV coverage of the presidential campaign. In any case, the effect is that of a sine wave of doom, or a poser theremin.

To get Processing to play sound, I used the Minim library. I basically poached some example code from the Minim site and tweaked it so the sound responds to analog inputs from the Arduino instead of mouseX and mouseY as in the original sketch.

[Photo: IMG_4826]

I hope to use more exciting sensors to make it more expressive and musical when I get back from fall break next week, but it took me a while to tweak the software side of things, and the computer store was already closed by the time I got around to that. Here it is in action:

My digital camera didn’t pick up the audio very well, so here is an MP3 of some “music” I made.  And here is another attempt.

And finally, here is the Processing code:

import processing.serial.*;     // import the Processing serial library
import ddf.minim.*;
import ddf.minim.signals.*;

Serial myPort;                  // the serial port
AudioOutput out;
SineWave sine;
int[] sensors = new int[2];

int passX;                      // latest reading from the first potentiometer (panning)
int passY;                      // latest reading from the second potentiometer (frequency)

void setup() {
  size(512, 200);
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
  // always start Minim before you do anything with it
  Minim.start(this);
  // get a line out from Minim, default sample rate is 44100, bit depth is 16
  out = Minim.getLineOut(Minim.STEREO, 512);
  // create a sine wave oscillator at 440 Hz, 0.5 amplitude, 44100 Hz sample rate to match the line out
  sine = new SineWave(440, 0.5, 44100);
  // set the portamento speed on the oscillator to 200 milliseconds
  sine.portamento(200);
  // add the oscillator to the line out
  out.addSignal(sine);
  passX = 0;
  passY = 0;
}

void draw() {
  background(0, 50);
  stroke(255);
  // draw the waveforms
  for (int i = 0; i < out.left.size() - 1; i++) {
    stroke(#FF0000);  // left channel is red
    line(i, 50 + out.left.get(i) * 50, i + 1, 50 + out.left.get(i + 1) * 50);
    stroke(#0023FC);  // right channel is blue
    line(i, 150 + out.right.get(i) * 50, i + 1, 150 + out.right.get(i + 1) * 50);
  }
  // map the frequency pot (0-1023) to a range of 1500 Hz down to 60 Hz
  float freq = map(passY, 0, 1023, 1500, 60);
  sine.setFreq(freq);
  // pan always changes smoothly to avoid crackles getting into the signal
  // note that we could call setPan on out, instead of on sine;
  // this would sound the same, but the waveforms in out would not reflect the panning
  float pan = map(passX, 0, 1023, -1, 1);
  sine.setPan(pan);
}

void stop() {
  out.close();
  super.stop();
}

void serialEvent(Serial myPort) {
  // the Arduino sends the two analog readings as comma-separated values ending in a newline
  String myString = myPort.readStringUntil('\n');
  if (myString == null) return;
  myString = trim(myString);
  int[] sensors = int(split(myString, ','));
  if (sensors.length < 2) return;
  // for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
  //   print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
  // }
  passX = sensors[0];
  passY = sensors[1];
  // println();
  // print("PassX= " + passX + ", PassY= " + passY);
  // println();
}