Big Screams is an audio-visual piece proposed for the Big Screens setup on the IAC video wall. It is an installation / user-generated performance in which the audience participates by calling a phone number and leaving a message. The message is repeated back in real time by a bunch of cartoony critters on screen, over the speaker system, so that everyone in the room can hear it. Many messages can be left (and therefore repeated) at the same time, creating a rich yet quite odd soundscape with only bits and pieces of recognizable words. If one person calls, all the critters repeat the message in unison; if two people call, half the critters repeat one message and the other half repeat the other; and so on.
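The even split described above can be sketched as a tiny assignment routine. This is a minimal illustration, not code from the project; the class and method names are assumptions:

```java
// Sketch: divide a fixed population of critters evenly among however
// many live messages are currently active.
public class CritterAssignment {
    // Returns, for each of critterCount critters, the index of the
    // message it should repeat. One caller -> everyone gets message 0;
    // two callers -> half get 0 and half get 1; and so on.
    static int[] assign(int critterCount, int messageCount) {
        int[] groups = new int[critterCount];
        for (int i = 0; i < critterCount; i++) {
            // Contiguous, near-even groups.
            groups[i] = (i * messageCount) / critterCount;
        }
        return groups;
    }

    public static void main(String[] args) {
        // 8 critters, 2 callers -> [0, 0, 0, 0, 1, 1, 1, 1]
        System.out.println(java.util.Arrays.toString(assign(8, 2)));
    }
}
```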
The display will consist of hundreds of these critters, all stacked one on top of another. They are blob-like heads with no bodies, constantly in motion, rolling off each other and the landscape. The critters will all start out the same colour, but their appearance will change on the fly to group them and identify which critters are repeating the same message. When a critter speaks, its mouth will move, lip-synching what it is saying.
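One simple way to make the groups visually distinct (an assumption, not something the proposal specifies) is to space the groups' hues evenly around the colour wheel:

```java
// Hypothetical sketch: give each message group a distinct, evenly
// spaced hue so viewers can see which critters share a message.
// Pure hue math; in Processing this would feed colorMode(HSB)/color().
public class GroupColours {
    // Evenly spaced hues around the colour wheel, in degrees [0, 360).
    static float hueForGroup(int group, int totalGroups) {
        return (360.0f * group) / totalGroups;
    }

    public static void main(String[] args) {
        for (int g = 0; g < 3; g++) {
            System.out.println("group " + g + " -> hue " + hueForGroup(g, 3));
        }
    }
}
```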
The bottom of the display will be made up of a curvy landscape visualizing the audio being spoken. If many people are calling at the same time, multiple layers of land will appear stacked on top of one another. The critters will rest on this landscape, obeying the laws of gravity, so they will rearrange themselves as the terrain shifts in response to the incoming audio.
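A plausible mapping from audio to terrain (a sketch under assumed names, not the project's actual implementation) is to split each frame of samples into horizontal segments and take a per-segment RMS amplitude as the land height:

```java
// Sketch: turn a frame of audio samples into a row of terrain heights
// by computing the RMS amplitude of each horizontal segment.
public class AudioLandscape {
    // samples in [-1, 1]; returns `columns` heights in [0, maxHeight].
    static float[] terrainHeights(float[] samples, int columns, float maxHeight) {
        float[] heights = new float[columns];
        int chunk = Math.max(1, samples.length / columns);
        for (int c = 0; c < columns; c++) {
            int start = c * chunk;
            int end = Math.min(samples.length, start + chunk);
            double sum = 0;
            for (int i = start; i < end; i++) {
                sum += samples[i] * samples[i];
            }
            int n = Math.max(1, end - start);
            // RMS of this chunk, scaled to the display height.
            heights[c] = (float) (Math.sqrt(sum / n) * maxHeight);
        }
        return heights;
    }
}
```

With several simultaneous calls, each call's audio would produce its own row of heights, giving the stacked layers of land described above.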
The visuals will be generated programmatically using Processing/Java or openFrameworks/C++. The advantage of Java is better-looking rendering and a more mature Most Pixels Ever library; the advantage of C++ is significantly better performance. The physics-based motion will be calculated using the Box2D library.
The calls will be handled and dispatched by an Asterisk server. A system to limit the number of live calls will be put in place in case it proves necessary. Audio from the calls that get through will be fed into a separate machine running Pd for analysis and playback. The analysis data will then be transmitted to the visuals machines to drive the lip-synching.
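On the visuals side, the per-call amplitude data coming from the Pd analysis could drive mouths with a simple attack/release envelope follower. This is one possible approach, sketched with assumed names, not a description of the actual pipeline:

```java
// Hedged sketch: smooth a stream of amplitude values (0..1) into a
// "mouth openness" value (0..1) suitable for lip-synching a critter.
public class MouthEnvelope {
    float level = 0f;
    final float attack, release; // smoothing factors in (0, 1]

    MouthEnvelope(float attack, float release) {
        this.attack = attack;
        this.release = release;
    }

    // Feed one amplitude value per frame; returns smoothed openness.
    float step(float amplitude) {
        // Open the mouth quickly (attack), close it gradually (release).
        float coeff = amplitude > level ? attack : release;
        level += coeff * (amplitude - level);
        return level;
    }
}
```

A fast attack with a slower release keeps the mouths snappy on syllable onsets without fluttering shut between every sample.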