I'm taking a break from the free improvisation series for the time being. There are about 5 parts to go, so I'll be back to it eventually. Here's a slight change of pace.
Liberal improvisers embrace John Cage's maxim that all sounds are potentially musical. By extension, they accept that all objects may also be viewed as “instruments”, potentially usable to create musical sounds. In the late 1960s, groups like Musica Elettronica Viva, AMM and Kluster experimented with contact microphones, primitive (sometimes homemade) synthesizers, and extended instrumental techniques to generate new sounds. These are challenging to use, but because the sounds are (to many) new and unfamiliar, they can excite musicians and enliven an improvisation. But Cage's maxim applies to the new and groundbreaking as well as to the painfully familiar. This is best illustrated by the use of radios and other sound playback devices in improvisation. Setting aside for now the insipid but unfortunately relevant questions about copyright law, and whether this kind of use of recording constitutes “fair use” or “public broadcast”, I'll address a few practical concerns about the use of pre-recorded material in a musical improvisation.
Many radio stations have contests which reward callers who are able to correctly identify a 1-second song clip. The human brain has a tremendous capacity for recognizing tiny fragments of larger pieces of information. Improvisations are by their nature formless and unpredictable; tuning a radio to a popular song or a piece of classical music may be viewed as introducing a highly suggestive (and undesirable) element which could influence the improvisation in a harmonic or rhythmic way. But this is only one interpretation of these sounds. For a different interpretation, we may turn to the theologian Alan Watts, who observed, quite simply, that when we listen to a woman speak on the radio, we are not hearing her voice. We are hearing the vibrations in the air caused by the loudspeaker's diaphragm. This is driven by voltage sent from a radio receiver, which decodes the signals sent along the electromagnetic frequency to which it is tuned. These radio frequencies are encoded by the radio transmitter, according to voltage sent to it from the broadcast studio, where the sound of the woman's voice was actually present.
So when, in an improvisation, we suddenly hear sounds that we identify as Boston's “More Than A Feeling”, how can we accept it as a valid sonic contribution without letting it disproportionately influence our musical decisions? (i.e. without feeling that we must either start playing along with it, or deliberately play “against” it, etc.)
As with anything else, the issue is not the nature of the sound, it is how we choose to perceive it. This perception informs, but does not necessarily determine, our choice of response. Let's begin with the simple illustration of call and response, which is a common starting point in free improvisation:
Stimulus(A) → Response(B)
wherein A and B are different individuals. Stimulus(A) elicits its response(B) only given the assumption that player B is aware of player A's initial statement; in other words, given that player B is listening to player A.
Listening is a very complex phenomenon. Neuroscientists are just beginning to understand the way in which the brain parses aural stimuli, especially how it is able to pay attention to a single conversation in a loud room, rather than just perceiving a “blooming, buzzing confusion”. What is obvious is that we are only conscious of a fraction of the stimuli our brain receives every moment, and that we have more or less control over what this fraction is composed of. Of course, we cannot shut our ears off, so it is likely that even if we are not consciously listening to Stimulus(A), our brain is still affected by it, albeit on a less conscious level. If this is the case, then it would follow that we could achieve a deeper relationship with the stimuli of other musicians by not listening directly to them. This obviously flies in the face of conventional wisdom, which emphasizes listening above almost all other musical values. Even the Nihilist Spasm Band wasn't willing to jettison the musical element of listening from their arsenal of musical ideals.
If music is most fundamentally a human activity, rather than the sounds that result from this activity, then listening is the glue which binds the individuals together, so that they are not just atomized particles which happen to be acting in the same space and time. Listening enables musicians to be aware of each other, and to act in concert with each other: mutual rather than independent action. But there are many different things to listen for in music.
Suppose musician A and musician B are both soprano saxophonists, and that stimulus(A) is a flurry of discrete pitches with an up-down shape. If musician B's ears are developed enough in 12-tone equal temperament (12-TET), she may perceive the flurry in terms of an implied harmonic relationship. In other words, she may listen at the level of 12-TET.
Now suppose that musician A plays a series of different crumpled up materials, each of which is mounted to a wooden board and amplified with a contact microphone. The sound spectrum may include sounds that have pitch value, but it is highly unlikely that these sounds could be meaningfully related to 12-TET. Stimulus(A) instead is a gentle agitation of each material between the index finger and the thumb. There is no pitch value to listen to. Musician B may hear some sort of rhythmic pattern emerge from the agitations, thus she listens at the level of rhythm. She may also listen to the rustling quality of the materials, thus she listens at the level of timbre.
Every instrument (an object used for musical purpose) has certain techniques which will result in sound. Technique is the bridge between the human body and the musical sound, whatever the particular bridge happens to be, whether it is that of Vladimir Horowitz or Jandek. Regardless of how you may feel about Jandek's technique, there is no denying that he uses one. Techniques can be varied, but the possibilities are not infinite. If you set a guitar on your bed and dance on the sidewalk in front of your house, you may be making a valid artistic statement in relation to the guitar, but you are not playing the guitar. You have to actually aggravate the guitar in some way (however indirectly) to make it make sound, and thus to actually make music. Technique on an instrument is limited by the different ways in which the object can be used to create sound.
Finally, let's suppose that musician A plays a boombox radio. A radio is a unique instrument: it has an extremely limited spectrum of technique, but the scope of sounds it produces is conceivably infinite in variety. The radio player has control over 1) whether the radio is on or off, 2) the volume at which the radio plays, 3) the band (AM or FM) to which the radio is set, and 4) the frequency to which it is tuned. Some radios also include 5) a tone knob which colors the timbre with high or low frequencies. Suppose stimulus(A) is a minute-long gesture, beginning with turning on the radio (the moment of articulation) at 88.3 FM, and gradually sweeping the tuning dial upward to 104.9 FM. Rather than listening to the specific sounds which come from it, listen on the level of the technique. Whether the radio produces static or an NPR station or Boston's “More Than A Feeling”, what you will really hear (and thus respond to) can be written simply as this:
This illustration requires hacking Western notation so that pitch is read as tuning frequency; but both are frequencies, so it's not much of a conceptual leap. Hopefully a radio player will be more creative with their technique than this example illustrates; whatever technique is used, any sound that happens to occur during this period is of little consequence. In this way, by listening on the level of technique, we can treat the radio with its myriad timbres as an instrument of equal value and influence in the improvisation ensemble. A similar principle applies to the use of CDs, tapes, vinyl, and other pre-recorded media, except that the range of timbres on sound recordings is at least static; it's potentially infinite, but it doesn't change over time the way radio does.
Now that we have analyzed listening to a radio player in terms of various levels, let's apply this idea to other instruments. It is fairly easy to listen on the technique-level to instruments with simple playing mechanisms like a radio or a woodblock. But some instruments, like the saxophone, have very complex techniques, and require that the listener be a specialist herself. Return to our original example, in which both musicians A and B play the soprano saxophone, and stimulus(A) is the flurry of notes. Listening on the technique-level, musician B would perceive a particular combination of lip, tongue, lung, arm and finger actions which happen to produce the sound that is heard.
The point being, there are many levels of listening. The jazz saxophonist may find interest in Charlie Parker's solo, and listen to little else; the classical music fanatic might listen for the differences between Leonard Bernstein and Leopold Stokowski conducting the New York Philharmonic; 1960s audio engineers would listen to the slapback on the early Johnny Cash Sun recordings with great interest; a record executive would listen for noise, clicks and pops in the background of the sound to evaluate the quality of his company's product; an aspiring songwriter may prefer to listen to the lyrics. You can listen for Frank Sinatra's influence in Scott Walker's voice, for Keith Jarrett's influence on Craig Taborn, for Elvis' influence on the Residents. You hear the influence of African music on Western classical music whenever you hear the xylophone; and if you listen closely to any sound recording, you can hear Edison, Batchelor and Kruesi busily tinkering in their workshop, fixing tinfoil to a hand-cranked machine.
All facetiousness aside, this model of listening helps explain the wide variety of thoughts and associations experienced by people who hear the same sounds, or by one person listening to the same recording more than once. There is a physical reality to the sound, but there is also a world of implications which lead to the physical reality that we hear. We can ignore these implications and cling to our first or second impressions, or we can use these implications to finely tune our ears to pick out the actual music.