Richard Ashworth

Mar 31, 2016

Steve Reich’s Clapping Music with Akka

Using functions to express musical ideas is nothing new: Harmony, time signatures, the relations between notes in a scale and musical form all have their roots in mathematics, and composers have used mathematical abstractions for millennia (see Pythagorean Tuning for a 2500-year-old example).

These abstractions, however, are not always obvious from the way that music is written down, leading to a potential disconnect between composer and performer. In this post, we will use Steve Reich’s 1972 piece Clapping Music to show how functional programming can capture an underlying musical idea.

Clapping Music is an example of phasing, a technique in which two performers play an identical musical phrase and gradually shift out of unison. A visual description of this is given in the video below:

We can begin to capture this musical idea in code, starting with the fundamental element of the piece: a beat. Every beat in Clapping Music is either a clap or a rest, and we can model this with the following algebraic data type:

sealed abstract class Beat
case object Rest extends Beat
case object Clap extends Beat

The principal phrase to which the phase effect is applied can then be described as a sequence of Beat objects. We add an auxiliary constructor to the Phrase class so that simple graphical scores (passed in as Strings) can be used to build these patterns of beats and rests:

case class Phrase(beats: Seq[Beat]) {
  def this(notation: String) = {
    this(notation.map(_.toUpper match {
      case 'X' => Clap
      case _   => Rest
    }))
  }

  def length = beats.length

  override def toString: String = {
    beats.map {
      case Clap => "X"
      case Rest => "_"
    } mkString " "
  }
}
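As a quick check of this behaviour, the snippet below constructs a Phrase from a notation string and prints it back (the Beat and Phrase definitions are repeated so the sketch runs on its own):

```scala
// Standalone copy of the definitions above, for experimentation
sealed abstract class Beat
case object Rest extends Beat
case object Clap extends Beat

case class Phrase(beats: Seq[Beat]) {
  def this(notation: String) =
    this(notation.map(_.toUpper match {
      case 'X' => Clap
      case _   => Rest
    }))

  def length = beats.length

  override def toString: String =
    beats.map { case Clap => "X"; case Rest => "_" } mkString " "
}

println(new Phrase("xX_x"))  // X X _ X
```

Note that any character other than 'X' (case-insensitive) is read as a rest.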

Now that we have a simple representation of musical patterns, we can define a function that computes the nth phase for a particular pattern. We outline the expected behaviour of this function in a ScalaTest specification:

import org.scalatest.FlatSpec

class ComposerSpec extends FlatSpec {

  "The Composer object" should "correctly shift a pattern by one beat" in {
    val originalPattern = new Phrase("X_")
    val phaseOne = Composer.getPhase(originalPattern, 1)
    assert(phaseOne === new Phrase("_X"))
  }

  it should "preserve a pattern shifted by zero beats" in {
    val originalPattern = new Phrase("X_")
    val phaseZero = Composer.getPhase(originalPattern, 0)
    assert(phaseZero === originalPattern)
  }

  it should "correctly shift a six-beat pattern by four beats" in {
    val originalPattern = new Phrase("X_XX__")
    val phaseFour = Composer.getPhase(originalPattern, 4)
    assert(phaseFour === new Phrase("__X_XX"))
  }

  it should "preserve a pattern shifted by the pattern's length" in {
    val originalPattern = new Phrase("XXX_XX_X_XX_")
    val fullCircle = Composer.getPhase(originalPattern, originalPattern.length)
    assert(fullCircle === originalPattern)
  }
}

Using the drop and take functions on the sequence of beats within a Phrase object, we arrive at the following implementation of the getPhase function. In generating the nth phase, this function removes the first n beats from the phrase, and adds them to the end.

def getPhase(original: Phrase, degrees: Int = 1): Phrase = {
  new Phrase(original.beats.drop(degrees) ++ original.beats.take(degrees))
}
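The drop/take combination is just a left rotation, so phases can be checked by hand directly on the notation strings. A tiny standalone sketch of the same idea (the rotate helper here is illustrative, not part of the Composer):

```scala
// Left rotation on a notation string: the same trick getPhase applies to beats
def rotate(notation: String, degrees: Int): String =
  notation.drop(degrees) + notation.take(degrees)

println(rotate("X_XX__", 4))  // __X_XX, matching the six-beat case in the spec
```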

To handle longer pieces of music (which can be modelled as sequences of phrases), we will use Scala’s Stream. Streams support lazy evaluation, which eliminates the need to construct an entire piece on instantiation, and provide an elegant means of generating repetitions of a particular phrase via the continually function. Since Clapping Music requires two performers, we will use a separate Stream for each part. The first part simply repeats the original pattern; the number of repeats is the length of the pattern plus one, since part two will cycle through each distinct phase of the pattern. We define the two parts as follows:

val partOne = Stream.continually(original).take(1 + original.length)
val partTwo = Stream.from(0).map(getPhase(original, _)).take(1 + original.length)

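To see the shape of part two in isolation, here is a standalone sketch that uses a string rotation as a stand-in for getPhase (the rotate helper is illustrative only). The stream visits every distinct phase lazily and arrives back at the original after length + 1 steps:

```scala
// Stand-in for getPhase: left rotation on the notation string
def rotate(notation: String, degrees: Int): String =
  notation.drop(degrees) + notation.take(degrees)

val original = "X_XX__"
// Lazily generate successive phases; elements are only computed on demand
val partTwo  = Stream.from(0).map(rotate(original, _)).take(1 + original.length)

println(partTwo.toList)
// List(X_XX__, _XX__X, XX__X_, X__X_X, __X_XX, _X_XX_, X_XX__)
```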
To capture both parts as a single expression, we use zip to create pairs of patterns corresponding to the two parts. We wrap this in the composeTwoPartPhaseMusic function, which completes our definition of the Composer object:

object Composer {
  def getPhase(original: Phrase, degrees: Int = 1): Phrase = {
    new Phrase(original.beats.drop(degrees) ++ original.beats.take(degrees))
  }

  def composeTwoPartPhaseMusic(original: Phrase): Stream[(Phrase, Phrase)] = {
    val partOne = Stream.continually(original).take(1 + original.length)
    val partTwo = Stream.from(0).map(getPhase(original, _)).take(1 + original.length)
    partOne.zip(partTwo)
  }
}

We can try out what we have written so far on the REPL. The initial pattern for Clapping Music can be represented graphically as “XXX_XX_X_XX_”, so we will use this to check that the correct variations are generated for each phase:

Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_25).
Type in expressions for evaluation. Or try :help.

scala> import com.richashworth.clappingmusic._

import com.richashworth.clappingmusic._

scala> new Phrase("XXX_XX_X_XX_")
res0: com.richashworth.clappingmusic.Phrase = X X X _ X X _ X _ X X _

scala> Composer.composeTwoPartPhaseMusic(res0)
res1: Stream[(com.richashworth.clappingmusic.Phrase, com.richashworth.clappingmusic.Phrase)] = Stream((X X X _ X X _ X _ X X _,X X X _ X X _ X _ X X _), ?)

scala> res1.foreach { i => println(s"Part A: ${i._1}  |  Part B: ${i._2}") }
Part A: X X X _ X X _ X _ X X _  |  Part B: X X X _ X X _ X _ X X _
Part A: X X X _ X X _ X _ X X _  |  Part B: X X _ X X _ X _ X X _ X
Part A: X X X _ X X _ X _ X X _  |  Part B: X _ X X _ X _ X X _ X X
Part A: X X X _ X X _ X _ X X _  |  Part B: _ X X _ X _ X X _ X X X
Part A: X X X _ X X _ X _ X X _  |  Part B: X X _ X _ X X _ X X X _
Part A: X X X _ X X _ X _ X X _  |  Part B: X _ X _ X X _ X X X _ X
Part A: X X X _ X X _ X _ X X _  |  Part B: _ X _ X X _ X X X _ X X
Part A: X X X _ X X _ X _ X X _  |  Part B: X _ X X _ X X X _ X X _
Part A: X X X _ X X _ X _ X X _  |  Part B: _ X X _ X X X _ X X _ X
Part A: X X X _ X X _ X _ X X _  |  Part B: X X _ X X X _ X X _ X _
Part A: X X X _ X X _ X _ X X _  |  Part B: X _ X X X _ X X _ X _ X
Part A: X X X _ X X _ X _ X X _  |  Part B: _ X X X _ X X _ X _ X X
Part A: X X X _ X X _ X _ X X _  |  Part B: X X X _ X X _ X _ X X _

Now that we have a program that can be used to help compose phase music, we can use Scala to play it, with the aid of a MIDI synthesizer. Concurrency is a central aspect of music, and is well supported by functional programming languages like Scala. So that two parts can be played in unison, we will use Akka actors to represent the different musicians in our program. In the implementation below, each Musician receives a series of messages corresponding to the Beat objects in the piece, which it then uses to create MIDI events:

import javax.sound.midi.MidiChannel
import akka.actor.Actor

class Musician(channel: MidiChannel, pitch: Int) extends Actor {

  private val beatLength = 175 // duration of one beat, in milliseconds

  def receive = {
    case Ping => sender ! Ping
    case Rest => Thread.sleep(beatLength)
    case Clap =>
      channel.noteOn(pitch, 100)
      Thread.sleep(beatLength)
      channel.noteOff(pitch)
  }
}

case object Ping

Ping messages give clients a way to know when an actor has processed a series of messages: because an actor handles its mailbox in order, a Ping sent after the last Beat message is answered only once every beat has been played.
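The ordering guarantee this relies on can be modelled without Akka at all: an actor's mailbox is processed first-in, first-out, so a Ping enqueued after the final beat is necessarily handled last. A toy sketch of that guarantee, using a plain queue in place of a mailbox:

```scala
import scala.collection.mutable

// Toy mailbox: FIFO processing means the "Ping" placed after the last beat
// is only dequeued once every beat message has been handled
val mailbox   = mutable.Queue("Clap", "Rest", "Clap", "Ping")
val processed = mutable.ListBuffer.empty[String]
while (mailbox.nonEmpty) processed += mailbox.dequeue()

println(processed.last)  // Ping
```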

We can finally bring everything together and write an App object that sets up the MIDI channel, generates the streams of beats in our piece, and sends the messages in order to the two Musician actors. Future.sequence and Await are used to wait until both actors have processed all of their messages before the actor system is terminated.

import javax.sound.midi.MidiSystem

import akka.actor.{ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object Main extends App {
  val synthesizer = MidiSystem.getSynthesizer()
  synthesizer.open()

  val channel = synthesizer.getChannels()(9) // General MIDI reserves channel 9 for percussion

  val system       = ActorSystem("MusicianSystem")
  val midiActorOne = system.actorOf(Props(new Musician(channel, 60)), name = "A")
  val midiActorTwo = system.actorOf(Props(new Musician(channel, 67)), name = "B")

  val maxPlayingTime   = 1.hour
  implicit val timeout = Timeout(maxPlayingTime)

  val clappingMusic    = Composer.composeTwoPartPhaseMusic(new Phrase("XXX_XX_X_XX_"))
  val phaseRepetitions = 8

  clappingMusic.foreach { duet =>
    (1 until phaseRepetitions).foreach { _ =>
      (0 until duet._1.length).foreach { i =>
        midiActorOne ! duet._1.beats(i)
        midiActorTwo ! duet._2.beats(i)
      }
    }
  }

  val futures = Future.sequence(Seq(midiActorOne ? Ping, midiActorTwo ? Ping))
  Await.ready(futures, maxPlayingTime)

  system.terminate()
}
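The waiting step above hinges on Future.sequence, which turns a Seq[Future[_]] into a single Future that completes only when every element has. A minimal, Akka-free illustration:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Two independent "performers" finishing at different times
val replies = Seq(Future { "A done" }, Future { "B done" })

// A single future that completes only when both replies have arrived
val all = Future.sequence(replies)

println(Await.result(all, 5.seconds))  // List(A done, B done)
```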

You can listen to how this sounds here:

All the code for this example is available on GitHub, and there are a number of ways in which the ideas outlined here could be developed further. For example, we could extend the Beat type to support musical notes (comprising distinct pitches and durations), and further variations could be generated by reversing the Phrase objects. For the time being though, this has been a useful exercise in learning more about Scala and Akka. Any feedback appreciated in the comments!
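As a taste of the reversal idea, the retrograde of the main pattern falls straight out of the string representation (a one-line sketch; a fuller version would reverse the Seq[Beat] inside Phrase):

```scala
// Retrograde of the Clapping Music pattern: reverse the notation string
val original = "XXX_XX_X_XX_"
println(original.reverse)  // _XX_X_XX_XXX
```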
