Future Goals for E.C.H.O.

Ideally, E.C.H.O. would be displayed publicly in an art space as an interactive soundscape. Physically, we would like it to be contained in a box rather than a jumble of loose pieces. It also relies heavily on being connected to a computer; we would like to remove this dependency by using a Raspberry Pi or something equally small, inexpensive, and customizable.

E.C.H.O. all connected – it’s a mess.

The physical pieces pose the largest challenge. We still need to rewire the system from DC to AC in order for it to work properly. It would also be nice to add a visual component, whether a projected Twitter stream or lights that correspond to each hashtag trigger.

Interactive Soundscape

As we’ve noticed throughout our experiment and in class, the user interface of a system can strongly affect engagement. Using E.C.H.O. as an interactive soundscape could let us explore many questions about audience interaction:

  • Would audiences try to be the “loudest”?
  • What would inspire people to work together to create something more musical, rather than something entirely cacophonous?
  • How would this physical display differ from the Twittersphere itself? Would the translation of data into sound affect user interaction, or have we been conditioned to use social media in a certain way? Would audiences have trouble breaking this mold?
  • Would a “gamification” occur? Is the ability to control a publicly shared experience enough to incentivize interaction?

We utilized polarized hashtags as an attempt to comment on discussion quality on Twitter, but different hashtags and sounds could be chosen to probe different assumptions. Presentation is vital to the interpretation of this piece because it can be so abstract. The time and space in which it is presented could heavily influence the choice of hashtags, sounds, and the incorporation of audience participation.


Tempo of Social Media: Twitter as a Case Study

One of our original project goals was to “vocalize an ephemeral concept and thus shed light on the suspected shallowness in our social culture”. We hoped to show this through the comparison of two sets of hashtags: one associated with pop culture and light-hearted expression (#yolo, #firstworldproblems), and one associated with heavier, more political topics (#syria, #worldhunger). At a base level, we assumed that if we vocalized each hashtag with a recognizable tone, the abundance of the “silly” hashtags would become obvious. We thought the sheer volume of conversation around these hashtags could demonstrate an inherent shallowness within the Twitter medium.

This ultimately didn’t allow for any solid conclusions, for several reasons: #yolo and #firstworldproblems have solidified themselves as common themes on Twitter and are used far more frequently as hashtags than #syria and #worldhunger. Ideally, we would have collected a much larger data set, allowing us to account for these inconsistencies among topics.

We also suspected that any tweet referencing something like #firstworldproblems is likely bordering on narcissistic and devoid of any real content. But expression is unpredictable, and #yolo or #firstworldproblems could be used to highlight a cultural irony or dissonance in unexpected ways. So even if these seemingly irrelevant hashtags are used more often, proving a shallowness in culture based on tempo alone is problematic.

[Screenshot: a shallow tweet using #worldhunger]

Even tweets that utilize seemingly serious hashtags can stray into the shallow end of the spectrum of expression.

[Screenshot: a politically charged tweet using #yolo]

The same phenomenon can easily be reversed: using #yolo in a politically charged tweet.

While the content aspect of our experiment proved weak, the issue of tempo rose to the forefront of discussion. Twitter’s 140-character limit and stream-style display both encourage a higher frequency of sharing. To stay salient to followers, users need to tweet consistently to remain near the top of the stream. Every tweet gets the same base impressions, no matter the content: a tweet about breakfast goes out to the same followers as a tweet sharing meatier content. On many other sharing websites, however, impressions are directly tied to engagement; better content gets engaged with more, and audience impressions increase. Salience relies on relevant, quality content (in theory, of course).

The magnitude of this increased tempo is difficult to see on Twitter’s automatically updating stream. The page never ends, and design differences make it difficult to compare with other, slower-moving platforms. Our sound experiment, however, transfers the data into noise, a medium in which tempo is very easy to distinguish.

The structure and user interface of a platform set the foundation for user interaction. Twitter’s case is made slightly more complicated by its open API. With user-generated content abstracting Twitter’s basic functions, new forms of expression are emerging within the Twitter community: hashtags and @replies are both results of user demands. While more complexity is allowed, the goal of the interface is still simplicity, most noticeable in its simple timeline stream.

As Stephenson points out in his analysis of GUI systems vs. command-line systems, the display of information is vital to user interaction.

“The desire to have one’s interactions with complex technologies simplified through the interface, and to surround yourself with virtual tchotchkes and lawn ornaments, is natural and pervasive–presumably a reaction against the complexity and formidable abstraction of the computer world.” – Stephenson

At a graphical level, Twitter’s interface doesn’t seem to vary too heavily from that of Tumblr, or even Facebook. But when the data is translated from graphical to aural, a new idea of tempo emerges. Twitter’s use of hashtags allows us to track “group think” and popular themes among its users. The tempo of interaction around different topics is difficult to distinguish graphically: loading times are abstracted and grouped together, making it hard to feel tempo in real time. When we translate this data into sound, however, the tempo becomes strikingly clear. Different expressions of the same data can allow for entirely different interpretations. While you lose any sense of the content of each tweet, simply being aware of the tempo of discussion can be telling about the topic at hand.
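As a rough illustration of what the aural translation makes easy to perceive, here is a small sketch that computes a per-hashtag “tempo” (tweets per minute) from a list of timestamps. The function name and sample data are our own invention for illustration, not part of the actual E.C.H.O. code.

```python
from datetime import datetime, timedelta

def tweets_per_minute(timestamps):
    """Tempo of a hashtag: tweet count divided by the span of the
    observation window, in minutes. Expects datetime objects."""
    if len(timestamps) < 2:
        return 0.0
    span = (max(timestamps) - min(timestamps)).total_seconds() / 60.0
    return len(timestamps) / span

# Hypothetical observation: ten #yolo tweets, one every 30 seconds
yolo = [datetime(2012, 12, 10, 13, 0) + timedelta(seconds=30 * i)
        for i in range(10)]
print(tweets_per_minute(yolo))
```

A faster-moving hashtag simply yields a higher number, which is exactly the difference the sound version makes audible as tempo.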

Making Noise, Lots and Lots of Noise

Since a major goal of this project is to highlight social dissonance through a sound medium, cacophony plays a huge role in our execution. We needed to create four audio sources: two droning, repetitive noises and two more jarring, recognizable tones. Our original intention was to create these noises using only tape players and our own field recordings. However, after Robert broke a few tape players, he decided to put equipment he already had to good use. As hacker culture suggests, there can be a lot of value in reusing items, especially for uses not inherent in the object’s original purpose.


For our two droning noises, we used a Korg Kaossilator and a Nintendo DS with the KORG DS-10 game. Both of these are very “toy-like” and produce unexpected, random noises very easily.




Nintendo DS with KORG DS-10 Game


For the more jarring sounds, we wanted something that had words and was potentially recognizable. Robert used a BOSS Dr. Sample SP-202 to make a loop of a clip from Martin Luther King Jr.’s “I Have a Dream” speech. The other is a loop from John Lennon’s “Imagine”.

Both of these clips hold a lot of cultural value on their own. Altering them creates an eerie perversion of these cultural references. What were once sounds of American freedom have become heavily used, commercialized, and referenced throughout general culture. These loops distort that reference further and exploit the long-running perversion of these important symbols.

Things are working!

Wiring Is Hard: Piecing Together E.C.H.O.

Even in a group of six tech-savvy college students, only one of us had any prior experience with physical computing. Equipped with his toolbox of secrets and wires, Will set out to piece together one of the most vital components of this project: translating the instructions we create on the computer into signals for our sound devices.

Originally we intended to have six audio inputs using three tape players, each with a left/right channel. To accomplish this, Will needed to wire each audio source to the Arduino so that it could trigger the audio on or off (depending on whether there was a tweet or not).

Early Problems:

  • Will had originally envisioned the project using a Bipolar Junction Transistor (BJT) rather than a Field Effect Transistor (FET). Essentially, a BJT is always “off” until you turn it “on”, while a FET is always “on” until you turn it “off”. While this was a relatively quick fix in the Arduino code, it took some tinkering to figure out what was wrong.
  • Wiring is hard. The breadboard holes are very small, so, as Will discovered, it is extremely difficult to work stranded-core wire into them. Because we are using mostly audio equipment, most of our wiring is stranded core, since it is less prone to breaking. While that may be good for the wires themselves, Will had a really hard time making it all fit together in such a tight space.

Once the actual breadboard was constructed and connected to the Arduino, we met to try to piece it all together. The final breadboard consists of four NTE 2343 transistors, four resistors, and a ground.

Arduino for the E.C.H.O. Project

Emerging Difficulties:

  • When we met to combine everything, all the pieces worked separately but not together. We are still working on making sure the sound plays only when triggered. It looks like some of our audio inputs are stronger than we expected and either need to be grounded separately, or we need to add more resistors.


Finding Early Solutions and A Lot More Problems

We finalized our plan and finished our proposal document. We originally started with VH Mutant Subtitles as our name, but after a brainstorming session we changed it to something more reminiscent of our project’s purpose: E.C.H.O. (Emerging Chatter Heuristic Operator). “Emerging” refers to the immediacy of the Twitter posts; “Chatter” refers to the unending stream of droning information that spews from social media outlets; “Heuristic” describes the project’s goal, translating the unnoticed droning of the internet into audible sound; and “Operator” refers to the Arduino itself, the instigator of our little social experiment.

Will and Robert talked shop about the conceptual side of the production and are looking for transistors. The sound system will have six channels built from three cassette players with left/right outputs. Aside from the wires, we tried to figure out what the thing will look like on the outside, but that’s still pretty vague.

Alex started writing a formal justification for our project by connecting our intentions with past readings and documenting our progress in a more comprehensive manner.

Belinda worked on the code and tried to get the script to capture hashtags instead of the home Twitter timeline we used in our original project. As always, the internet is a wonderful place, and we found an existing library to help us with this side of the project. It’s called Tweepy, and it’s a Twitter API library that may be invaluable to us in the coming weeks, at least that’s what we hope.

You can find it here: https://github.com/tweepy/tweepy
It supports Python 2.5 and up but not Python 3.

from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
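To sketch how a stream of tweets could drive our sound channels, here is a minimal, Tweepy-independent version of the trigger logic we have in mind. The hashtag-to-channel mapping and function names are hypothetical placeholders; the real version would run inside a Tweepy StreamListener’s handler once authentication is set up with OAuthHandler.

```python
# Hypothetical mapping from tracked hashtags to sound channels.
CHANNELS = {
    "#yolo": 0,
    "#firstworldproblems": 1,
    "#syria": 2,
    "#worldhunger": 3,
}

def channels_for(tweet_text):
    """Return the sound channels a tweet should trigger, based on
    which tracked hashtags appear in its text."""
    words = tweet_text.lower().split()
    return sorted({CHANNELS[w] for w in words if w in CHANNELS})

print(channels_for("Skipped breakfast again #firstworldproblems #yolo"))
```

In the full system, each returned channel number would be forwarded to the Arduino so it can switch the matching audio source on.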

Future Goals

We need to know how much current has to flow through the transistors to power our speakers. This might be one of our earliest snags in the design. A lot of sites detailed HOW to use transistors this way, but not really WHERE to get them or which specific models. After a bit more searching, we learned we need to use a FET (field-effect transistor) and change our code to accommodate it. The way we did it before (with a BJT), the channel closes when you add a current, but a FET closes when you take current away. We need to switch it so the script turns the channel ON instead of OFF.
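The BJT-to-FET change amounts to inverting the logic level the script requests. A toy sketch of that inversion, following the behavior described above (this is our own illustration, not the actual Arduino or serial code):

```python
def gate_level(sound_on, transistor="fet"):
    """Logic level the script should request for a channel.

    With the BJT wiring we planned first, adding current (level 1)
    closes the channel and the sound plays; with the FET we are
    switching to, the channel closes when current is taken away,
    so the levels are inverted.
    """
    if transistor == "bjt":
        return 1 if sound_on else 0
    if transistor == "fet":
        return 0 if sound_on else 1
    raise ValueError("unknown transistor type: " + transistor)
```

The fix in the script is then a one-line change: request the opposite level for the same “sound on” intent.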

In our last project, the code was one of the hardest parts, but this project seems to have a much heavier hands-on component that may prove challenging.