I presented my final project for Intro to Computational Media today. Building on the work I did with the Musical Typewriter, I ended up making some last-minute tweaks to my “musical typeface,” which consists of audio samples corresponding to each letter of the alphabet. Originally I had used all single-hit percussion sounds, but I reworked the alphabet to include short musical phrases or gestures, and brought in wind and string instrument samples for greater musicality and richness. The final version of my Processing program reads a text file and “translates” the text into music by playing back the samples corresponding to the letters of each word as a musical phrase “cluster.”
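The translation step can be sketched roughly like this (plain Java standing in for my Processing code, and the sample file names are just placeholders): split the text into words, and turn each word into one cluster of samples to play back as a phrase.

```java
import java.util.ArrayList;
import java.util.List;

public class Text2Music {
    // Placeholder mapping: in the real sketch each letter points at a .wav sample.
    static String sampleFor(char letter) {
        return "sample_" + Character.toUpperCase(letter) + ".wav";
    }

    // Split the text into words; each word becomes one "cluster" of samples
    // that the sketch plays back together as a short musical phrase.
    static List<List<String>> toClusters(String text) {
        List<List<String>> clusters = new ArrayList<>();
        for (String word : text.split("\\s+")) {
            List<String> cluster = new ArrayList<>();
            for (char c : word.toCharArray()) {
                if (Character.isLetter(c)) {
                    cluster.add(sampleFor(c));
                }
            }
            if (!cluster.isEmpty()) clusters.add(cluster);
        }
        return clusters;
    }
}
```

In the actual sketch, each cluster is handed off to the audio playback loop while the corresponding word is drawn on screen.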
While playing back a text as a song, the Processing sketch also simultaneously displays the word corresponding to the sample cluster being played and visualizes the audio waveform of the music on screen. Refer to the screenshot above.
I got the audio visualization to work. The waves represent the audio waveforms of the percussion alphabet. The letters fade out gradually after you type them in order to help pace the user and to represent the “life cycle” of each note through time. Here is a screenshot:
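The fade-out itself is simple: each letter's opacity decays with the time since its key was pressed. A minimal sketch of the idea in plain Java (in the Processing sketch, this alpha value would feed into `fill()` before drawing the letter; the two-second lifetime is just an assumed value):

```java
public class LetterFade {
    static final float LIFETIME_MS = 2000; // assumed "life cycle" of a note, in milliseconds

    // Map elapsed time since the keypress to an alpha in [0, 255]:
    // fully opaque at 0 ms, fully transparent once LIFETIME_MS has passed.
    static int alphaAt(float elapsedMs) {
        float t = Math.max(0, Math.min(1, elapsedMs / LIFETIME_MS));
        return Math.round(255 * (1 - t));
    }
}
```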
After getting help from the ICM email list, I changed my AudioPlayer objects to AudioSamples, which only need to be triggered once, as opposed to AudioPlayer objects, which must be played and rewound for each playback. This tweak solved my latency and crackle issues.
Testing out the new version has reminded me of the aural pleasure of typing on analog typewriters. The rhythmic, percussive quality of clanking keys is now lost on contemporary computer keyboards that make more muted sounds when used.
I also thought it would be cool to visualize the waveform of the audio output and display that along with the letter on the screen. Well, back to work…
I have completed phase 1 of my ICM final project, Text2Drum, which involves creating a new “percussion alphabet,” which might also be described as a musical Morse code. I have assigned a unique percussion sample to each letter of the alphabet. ‘A’ through ‘G’ get pitched percussion hits that correspond to the white keys on a piano, but voiced at different octaves. All of the other letters are un-pitched percussion sounds. I have not assigned sounds to punctuation marks or numbers (yet). I’m not sure if I want to, or if this is necessary for my new language.
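To make the alphabet concrete, here is a rough sketch of how the mapping might be set up, in plain Java rather than my actual Processing code, and with hypothetical sample file names (the real sketch loads .wav files with Minim):

```java
import java.util.HashMap;
import java.util.Map;

public class PercussionAlphabet {
    // Hypothetical sample file names standing in for the real .wav files.
    static final Map<Character, String> SAMPLES = new HashMap<>();
    static {
        // 'A' through 'G': pitched hits on the piano's white keys,
        // voiced at different (arbitrary, illustrative) octaves.
        String[] pitched = {
            "pitched_A2.wav", "pitched_B3.wav", "pitched_C4.wav",
            "pitched_D3.wav", "pitched_E4.wav", "pitched_F2.wav", "pitched_G3.wav"
        };
        for (char c = 'A'; c <= 'G'; c++) {
            SAMPLES.put(c, pitched[c - 'A']);
        }
        // Remaining letters: un-pitched percussion sounds.
        for (char c = 'H'; c <= 'Z'; c++) {
            SAMPLES.put(c, "unpitched_" + c + ".wav");
        }
    }

    static String sampleFor(char letter) {
        return SAMPLES.get(Character.toUpperCase(letter));
    }
}
```

Punctuation and digits simply have no entry in the map, which matches the current state of the project.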
I have written a Processing sketch, with the help of the Minim library, that plays back the “percussion letters” when the user types on the keyboard. There is a bit of latency and audio “crackle” that still needs to be worked out, but for the most part, the musical typewriter works. The next step is to build a related program that can read a text file as a musical score, translate the letters into the percussion alphabet, play back the results, and save the audio playback as a file.
For my ICM final project, which I am calling Text2Drum, I seek to explore the rhythmic qualities of language. Text2Drum will read text from a file and convert each letter of the text into a drum or percussion sample that I will assign to each letter of the alphabet. Spaces and punctuation marks will be interpreted as musical rests, that is to say, periods of silence. The program will also display the text on screen while playing back the drum samples. In using Text2Drum to generate percussion patterns, I aim to translate text into a new musical language and to reclaim the rhythmic nature of language found in oral communication that is lost in written language.
I also hope to make a second, interactive version of Text2Drum, with an interface that lets a user type in text that Text2Drum will then convert into a rhythmic pattern.