Non-linear sound in video games
The week before last, I wrote about Annabel Cohen's paper on music in video games, and mentioned Karen Collins of Gamesound.com. Collins has written a great deal on games and sound. Her 2007 paper, An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio, from the book Essays on Sound and Vision, seemed a good place to start.
Collins begins by suggesting the subtle differences between the terms "interactive," "adaptive" and "dynamic". In her useful set of distinctions, "interactive" sounds or music are those that respond to a particular action from the player, and each time the player repeats the action the sound is exactly the same. Citing Whitmore (2003), she argues that "adaptive" sounds and music are those that respond, not to the actions of the player, but rather to changes occurring in the game (or the game's world) itself. So "an example is Super Mario Brothers, where the music plays at a steady tempo until the time begins to run out, at which point the tempo doubles." She goes on to describe "dynamic" audio as being interactive and/or adaptive.
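Her three terms can be sketched in code. This is purely my own illustration, not anything from the paper, and all the class and method names are invented:

```python
# Illustrative sketch of Collins's distinctions; hypothetical names throughout.
class GameAudio:
    def __init__(self):
        self.tempo = 1.0  # normal playback speed

    def on_player_jump(self):
        # "Interactive": a direct response to a player action,
        # identical every time the action is repeated.
        return "play jump.wav"

    def on_game_tick(self, time_remaining):
        # "Adaptive": responds to game state rather than player input,
        # like Super Mario Brothers doubling the tempo when time runs low.
        if time_remaining < 100:
            self.tempo = 2.0
        return self.tempo
```

"Dynamic" audio, in her usage, would simply be a system doing either or both of these things.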
She also explores the various uses for sound and music in games. She has read Cohen, obviously, and so her list is very similar. She quotes Cohen in relation to masking real-world environmental distractions, and on the distinction between the mood-inducing and communicative uses of music. She points out, though, that the non-linear nature of game sound means that it's more difficult to predict the emotional effects of music (and other sounds). In film, she states, it's possible for sounds to have unintended emotional consequences: a director wanting to inform the audience that there is a dog nearby will tell the sound designer to include a dog barking out of shot, but the audience will bring their own additional meaning to that sound, based on their previous experiences (which she calls supplementary connotation). But in games, she argues, where sounds are triggered and combined in relatively unpredictable sequences by player actions, even more additional meanings are possible.
She also discusses how music can be used to direct the player's attention, or to help the player "to identify their whereabouts, in a narrative and in the game." She points out how "a crucial semiotic role of sound in games is the preparatory functions that it serves, for instance to alert the player to an upcoming event."
This is something that was made very clear while I played both Red Dead Redemption and Skyrim. Red Dead Redemption would often alert me to an upcoming threat by weaving a more urgent, oppressive tune into the background music. Skyrim took a different approach: its music doesn't work as hard, but while my cat-creature was sneaking around underground tunnel systems, I was often alerted to potential threats by my enemies muttering to themselves as I approached blind corners. Collins points out that these sorts of cues have occasioned a changing listening style, from passive to active listening, among gamers.
Sometimes though, as Collins points out, games are created that put musical choice directly into the players' hands. The Grand Theft Auto series gives the player a choice of in-car radio stations to listen to, so that their particular tastes are better catered for. Though they weren't around at the time of Collins's writing, many iOS and other mobile games have a feature by which the player can turn off game music, and even other game sound effects if they so choose, to listen to their own library of music, stored on the device. She even cites the game Vib Ribbon, for the Sony PlayStation, which allows the player to load their own music from CDs; the gameplay then changes according to the structure of the music the player has loaded.
Collins also discusses the challenges that composers face when writing for games. For a start, Collins points out that "in many games it is unlikely that the player will hear the entire song but instead may hear the first opening segment repeatedly, particularly as they try to learn a new level." (Though she also points out that many games designers are learning to include what one composer calls a "bored now switch." After a number of repeats of the same loop of music, the sound fades to silence, which both informs the player that they should have completed this section by now, and stops them getting annoyed and frustrated by the repetition.)
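The "bored now switch" is easy to picture as a little logic in a loop player. A minimal sketch, with names and a repeat threshold I've made up:

```python
# Hypothetical "bored now switch": after N repeats of the same music
# loop, fade to silence instead of looping again.
class LoopPlayer:
    def __init__(self, max_repeats=4):
        self.max_repeats = max_repeats
        self.repeats = 0

    def on_loop_end(self):
        # Called each time the current music loop finishes playing.
        self.repeats += 1
        if self.repeats >= self.max_repeats:
            return "fade_to_silence"  # player has lingered too long
        return "repeat_loop"
```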
The other main problem is that of transition between different loops (or cues, as she calls them). "Early games tended towards direct splicing and abrupt cutting between cues, though this can feel very jarring on the player." Even cross-fading two tracks can feel abrupt if it has to be done quickly enough to keep up with game play. So composers have started to write "hundreds of cue fragments for a game, to reduce transition time and to enhance flexibility in music." This is the approach taken in Red Dead Redemption, where, as I move my character around the landscape, individual loops fade in and out according to where I am and what is happening, but layered together they feel (most of the time) like one cohesive bit of music.
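That fragment-layering approach can be sketched as a per-frame mixer that eases each layer's volume toward a target set by the game state. Again, this is my own illustration under assumed names, not Collins's description or Rockstar's actual implementation:

```python
# Sketch of layered cue fragments: each fragment has a target volume
# for the current game state, and volumes are nudged toward the target
# a little each frame, giving smooth cross-fades between fragments.
def mix_step(volumes, targets, fade_speed=0.1):
    """Move each fragment's volume one step toward its target level."""
    out = {}
    for name, vol in volumes.items():
        target = targets.get(name, 0.0)
        delta = max(-fade_speed, min(fade_speed, target - vol))
        out[name] = round(vol + delta, 3)
    return out

# e.g. entering combat: the 'danger' layer fades in, 'ambient' fades down
volumes = {"ambient": 1.0, "danger": 0.0}
targets = {"ambient": 0.2, "danger": 1.0}
volumes = mix_step(volumes, targets)
```

Run every frame, this converges on the target mix without any abrupt splice between cues.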
Multiplayer games present another problem. “If a game is designed to change cues when a player’s health score reaches a certain critical level, what happens when there are two players, and one has full health and the other is critical?” she asks.
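One possible answer (my own assumption, not something the paper proposes) is to drive the shared music from the worst-off player, so that the critical cue always wins:

```python
# Hypothetical resolution to Collins's multiplayer question: pick the
# music cue from the most critical player's health, not any one player's.
def choose_cue(player_healths, critical_threshold=25):
    """Return a cue name based on the worst-off player's health."""
    if min(player_healths) <= critical_threshold:
        return "critical_theme"
    return "normal_theme"
```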
There are rewards too: get the music right, and games publishers can find an additional source of income. She quotes a survey which discovered that "40% of hard-core gamers bought the CD after hearing a song they liked in a video game." (Ahem, guilty as charged m'lud, even though I'm not a "hard-core gamer.")
Just before she completes the paper, she has some thoughts on the perception of time too. I've noticed a sort of "movie-time" effect in Skyrim, which presents a challenge for my real-world cultural spaces. So I think I might need to look at that in more detail.