ICM Final Project

Somos Semillas
We Are Seeds

Quisieron enterrarnos pero se les olvidó que somos semillas.
They tried to bury us. They forgot we were seeds.

Plant a seed. Sing to it. Talk to it. Yell at it. Whistle at it. Watch it grow. 

Clap at it. Beatbox at it. It blossoms.

Description

My final project is a browser-based sketch––accessible by anyone with the link––that translates sounds into a digital drawing. Namely, your voice draws a growing seedling.

It utilizes the p5.sound library––specifically p5.Amplitude––to measure the amplitude of the user’s microphone input each frame. That value is then mapped to the y-position of each seedling segment. The amplitude value is also mapped to the strokeWeight() of each seedling, although it is modified with a little arithmetic (provided by Mathura Govindarajan) to create a smoothing effect for each segment drawn.
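A stripped-down version of that loop might look like this (the canvas size and mapping ranges are placeholders, not the values from my sketch, and the smoothing arithmetic is covered below):

```javascript
let mic, amplitude;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
}

function draw() {
  // getLevel() returns the current volume as a value between 0 and 1
  let level = amplitude.getLevel();

  // Louder input pushes the next segment higher and thickens the stroke
  let y = map(level, 0, 1, height, 0);
  let weight = map(level, 0, 1, 1, 10);

  strokeWeight(weight);
  point(width / 2, y);
}
```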

Utilizing p5.FFT, peakDetect() watches the high frequencies and draws a blossom whenever their amplitude surpasses a threshold. In practice, “s” and “t” sounds reliably create the effect, as do claps.
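In code, the blossom trigger looks roughly like this (the frequency range and threshold below are guesses at sensible values, not the exact numbers from my sketch):

```javascript
let mic, fft, peakDetect;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
  // Watch a high band, roughly where "s" and "t" sounds and claps
  // carry their energy: 5kHz to 20kHz, with a 0.1 amplitude threshold
  peakDetect = new p5.PeakDetect(5000, 20000, 0.1);
}

function draw() {
  fft.analyze();          // peakDetect reads the most recent analysis
  peakDetect.update(fft);

  if (peakDetect.isDetected) {
    // a stand-in for the blossom: a circle at the center of the canvas
    ellipse(width / 2, height / 2, 20, 20);
  }
}
```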

Link to the source code in the p5.js online editor.

Link to the full screen version.

Special thanks to my ICMadness partners, Chian and Alan, from whom the seed of this idea sprouted.

Please see my proposal for a more detailed history of the quote above and my inspiration for this project.

Methodology

Growing Lines

Over the course of the past month, I experimented with many different ways of drawing the seedlings. I started with this basic concept of a line that grows in response to mic input using the AudioIn() constructor in the p5.sound library. Based on the example of iteration in the p5.js documentation, I used a for() loop to redraw the line every frame. For each new frame, the volume measured by the mic input was mapped to the y-position of the line using map().

Keep in mind that getLevel() only ever returns a value between 0 and 1 when measuring the volume of the user’s mic input. To make the line grow visibly, I experimented with how much to multiply the volume by, aiming for a rate of growth that intuitively matched the volume of the user’s voice.
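Reconstructed from memory, that first experiment looked something like this (the mapping range is the kind of value I was tuning by ear):

```javascript
let mic;
let segments = [];

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(255);

  // getLevel() is between 0 and 1, so scale it up for visible growth;
  // the upper bound of the mapping was found by trial and error
  let vol = mic.getLevel();
  let growth = map(vol, 0, 1, 0, 20);
  segments.push(growth);

  // Redraw the whole line every frame, stacking segments up from the base
  let y = height;
  stroke(0);
  for (let i = 0; i < segments.length; i++) {
    line(width / 2, y, width / 2, y - segments[i]);
    y -= segments[i];
  }
}
```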

Mimi Yin advised me to play with noise() as a way to mimic the organic form and movement of a plant:

Set a variable for the rate at which the point oscillates (t). Then set the x-position to itself plus noise(t), plus the amount by which you want it to tend left or right (any amount between 0 and 1).
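Read literally, that translates to something like the sketch below. Since noise(t) only returns values between 0 and 1, I’m treating the left/right amount as a value to subtract so the point can drift both ways; that reading is mine:

```javascript
let x, y;
let t = 0;

function setup() {
  createCanvas(400, 400);
  background(255);
  x = width / 2;
  y = height;
}

function draw() {
  t += 0.02;              // the rate at which the point oscillates

  // noise(t) is between 0 and 1; subtracting 0.5 centers the drift,
  // and nudging that bias up or down makes the stem tend left or right
  x = x + noise(t) - 0.5;
  y -= 1;                 // steady upward growth

  stroke(0);
  point(x, y);
}
```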

In my early experiments, line() led to compelling yet ultimately useless sketches. I then focused on using vertex() with beginShape() and endShape() to get the effect I had envisioned. Again, it resulted in oddly satisfying visuals, but nothing close to the vision (not to mention the functionality) I had in mind.

Dan Shiffman kindly pointed me in the direction of p5.Vector as a way to define the coordinates of my seedling and connect each previous coordinate to the current coordinate. He also––through a gentle dig at the roundabout nature of my code––reminded me about the importance of object-oriented programming. I was able to reorganize my entire thinking: first into a seedling() function and then into a Seedling class.
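The reorganized version ended up shaped roughly like this (a sketch of the class, not the full thing; the real Seedling tracks more state than two points):

```javascript
class Seedling {
  constructor(x, y) {
    // p5.Vector keeps each coordinate pair together
    this.prev = createVector(x, y); // last frame's point
    this.curr = createVector(x, y); // this frame's point
  }

  grow(vol) {
    this.prev.set(this.curr);
    // wander sideways with noise, climb by an amount driven by volume
    this.curr.x += noise(frameCount * 0.02) - 0.5;
    this.curr.y -= map(vol, 0, 1, 0, 10);
  }

  show() {
    // connect the previous coordinate to the current coordinate
    line(this.prev.x, this.prev.y, this.curr.x, this.curr.y);
  }
}
```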

The final refinement of the seedling’s growth came from Aarón Montoya’s suggestion that I map the volume to the width of the line as well as to its y-position. He also helped me figure out how to use line() instead of vertex() to achieve the effect I was going for. Mathura further helped me smooth out the strokeWeight() by giving me this math:

It’s like a nerdy autograph.
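I won’t pretend to reproduce the handwriting, but the effect is a weighted average: each new strokeWeight() is pulled only partway toward the measured value, so the line thickens and thins gradually instead of jittering. Something in this spirit (the 0.9/0.1 split is illustrative, not necessarily the exact ratio she wrote down):

```javascript
let smoothedWeight = 1;

function smoothWeight(vol) {
  // the raw weight jumps around with every frame's amplitude reading
  let target = map(vol, 0, 1, 1, 10);
  // blend mostly-old with a little new to smooth out the jumps
  smoothedWeight = smoothedWeight * 0.9 + target * 0.1;
  return smoothedWeight;
}
```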

A Journey Through FFT Analysis

While Mimi was helping me get my seedlings to grow, she also suggested some ways of making them grow in different directions based on frequency, using p5.FFT. This was part of my original concept, and I tried so hard, so very hard, to make it happen.

Here are some of the tactics I tried that yielded little to no results:

  1. Using analyze() to print arrays of frequency amplitudes and manually going through them to try to identify which human sounds corresponded to which bins in the array
  2. Using getCentroid() to try and find a threshold
  3. Comparing the frequency presets in getEnergy() using boolean expressions (if treble > bass, etc.); see the sketch after this list
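For the record, tactic 3 looked something like this (the steering logic is a placeholder; none of my variations on it behaved any better):

```javascript
let mic, fft;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}

function draw() {
  fft.analyze(); // getEnergy() needs a fresh analysis each frame

  // getEnergy() accepts preset band names and returns 0 to 255
  let bass = fft.getEnergy("bass");
  let treble = fft.getEnergy("treble");

  if (treble > bass) {
    // in theory: a hissy sound, so bend the seedling one way...
  } else {
    // ...otherwise bend it the other way
  }
}
```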

In the end, I could see no discernible difference between any of these methods and just using pure noise().

The upside is that I learned A LOT about frequency analysis, Fast Fourier Transform, and sound visualization. This is definitely a path I would like to continue down.