The history of deep control
MIDI’s expressive potential has always been there, right from its inception. Yet it’s mainly thanks to those few who realised how it could be deployed and imagined new ways of performing that we have such a huge range of options today
As computer musicians, we can’t deny that we’ve occasionally glanced across at our real-world instrument-toting colleagues with more than a whiff of envy. Sure, with our hefty plugin arsenals, sample libraries and pristine-sounding virtual instruments we can assemble astonishingly dense tracks and lush aural worlds. But the immediacy of, say, an accomplished acoustic guitarist picking up their instrument and instantly navigating every possible nuance of a note, zoning in and becoming one with their instrument through natural per-note vibrato, bending and feel-based expressiveness, makes us more than a little dissatisfied with the basic up/down of our MIDI controller’s mod wheels.
To bridge this all-important gap between performer and music, greater emphasis on per-note articulation is required. Previously, if you’d wanted to apply some expressive pitch-bending to just the root note of a chord, you would
need to laboriously record the chord’s notes individually and apply the bend to just the one that needed it. Not only is this time-consuming, it’s also a very detached way of emulating free-form expression. With the MIDI Polyphonic Expression (MPE) standard, each note is assigned its own MIDI channel, giving it distinct parameter control even when played simultaneously with other notes. While this doesn’t sound all that revolutionary, the end result is the ability to get extremely versatile, feel-based reactions from your instrument, as it responds far more in sympathy with how your hands are interacting with the keys.
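If you're curious what that channel-per-note arrangement looks like under the hood, here's a minimal sketch using Python and the mido library (the port name and channel assignments are assumptions for the example, not a fixed MPE requirement): three notes of a chord are sent on separate member channels, so a pitch bend addressed to one channel moves only that note.

```python
import mido

# Rough illustration of the MPE idea. In an MPE "lower zone", channel 1 acts as
# the master channel and channels 2-16 are member channels; each sounding note
# is placed on its own member channel, so per-note messages such as pitch bend
# affect that note alone. mido numbers channels 0-15, so member channels 2-4
# appear below as 1-3.

out = mido.open_output('MPE Synth')     # hypothetical port name

chord = [60, 64, 67]                     # C major triad (MIDI note numbers)
member_channels = [1, 2, 3]              # one member channel per note

# Start all three notes, each on its own channel
for note, ch in zip(chord, member_channels):
    out.send(mido.Message('note_on', channel=ch, note=note, velocity=100))

# Bend only the root note: the pitchwheel message goes to its channel alone,
# so the other two notes of the chord are left untouched
out.send(mido.Message('pitchwheel', channel=member_channels[0], pitch=4096))

# Release the chord
for note, ch in zip(chord, member_channels):
    out.send(mido.Message('note_off', channel=ch, note=note, velocity=0))
```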