
What You Need to Know About MIDI 2.0

New Frontiers in Expression and Interoperability: What MIDI 2.0 Means for Musicians

Brett Porter of Art+Logic

MIDI has finally gotten an update, and for those of us who worked on this long-awaited version, we strove to fix some of the less perfect aspects of the protocol, add cool new things without breaking anything, and make it even more MIDI.

The new additions to MIDI 2.0 are some of the most exciting aspects of this latest version, and they break down into two categories:

Things that make MIDI 2.0 easier to use: Devices will be able to automatically configure themselves to adapt to whatever they’re connected to on the other end.

Things that make MIDI 2.0 better to use: New capabilities will make it do more for you on stage and in the studio once everything’s connected.

MIDI-CI: Devices now tell each other what they can do

Most of the new features in MIDI 2.0 rely on a set of operations that are collectively called “Capability Inquiry,” and this is where the need for bidirectional communication becomes obvious. When MIDI 2.0 equipment is powered up, it enters a process called “Discovery”; it might also do this in response to a new device being connected.

After establishing that both ends of a MIDI connection terminate in a device that supports MIDI-CI, the devices exchange messages to determine what the other is capable of doing, and may proceed into deeper negotiations. For example, using MIDI-CI and profiles, a controller and synth can query each other, learn that both ends support the drawbar organ profile, and from then on send and respond to the same controller numbers in the correct way.
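
If you like to think in code, here’s a minimal sketch of that negotiation flow. The types and function names here are invented for illustration; the real MIDI-CI exchange is defined down at the byte level in the spec.

```cpp
// A hypothetical sketch of the MIDI-CI flow described above: discover
// whether both ends speak MIDI-CI, then enable a profile only if both
// ends support it. Not the actual MIDI-CI message format.
#include <iostream>
#include <set>
#include <string>

struct Device {
    std::string name;
    bool supportsCI;                 // does this end speak MIDI-CI at all?
    std::set<std::string> profiles;  // profiles it can turn on
};

// Discovery: both ends must support MIDI-CI before any negotiation happens.
bool discover(const Device& a, const Device& b) {
    return a.supportsCI && b.supportsCI;
}

// Profile negotiation: a profile is usable only if both ends support it.
bool enableProfile(const Device& a, const Device& b, const std::string& profile) {
    return a.profiles.count(profile) > 0 && b.profiles.count(profile) > 0;
}

int main() {
    Device controller {"controller", true, {"drawbar-organ"}};
    Device synth      {"synth",      true, {"drawbar-organ", "rotary-speaker"}};

    if (discover(controller, synth) && enableProfile(controller, synth, "drawbar-organ"))
        std::cout << "Both ends now send and respond to the same controller numbers.\n";
    else
        std::cout << "Fall back to plain MIDI 1.0 behavior.\n";
}
```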

Or think about sample libraries that are used widely in film scoring -- there’s a convention in that world where the notes in the bottom octave or so of a keyboard aren’t used as “notes”; they’re used as “keyswitches” that control things like articulations -- if you press C#, your string samples are pizzicato, D natural is normal bowing, Eb is col legno, and so on. An “orchestral articulation” profile could be developed both to standardize how articulation data is sent and to define attributes instead of requiring this keyswitch hack.
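
In code, that keyswitch hack boils down to a lookup table like the one below. The specific note numbers are hypothetical; every library picks its own.

```cpp
// A sketch of the keyswitch convention described above: notes in the
// bottom octave select articulations instead of sounding.
#include <iostream>
#include <map>
#include <string>

int main() {
    // Hypothetical mapping, following the example in the text:
    // C#1 = 25, D1 = 26, Eb1 = 27 (in the "note 60 = C4" convention).
    const std::map<int, std::string> keyswitches {
        {25, "pizzicato"},      // C#1
        {26, "normal bowing"},  // D1
        {27, "col legno"},      // Eb1
    };

    int incomingNote = 25; // pretend the player pressed C#1
    if (auto it = keyswitches.find(incomingNote); it != keyswitches.end())
        std::cout << "Switch string samples to: " << it->second << '\n';
    else
        std::cout << "Note " << incomingNote << " sounds normally.\n";
}
```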

JR clock and timestamps: MIDI gets its timing act together

A recurring complaint about MIDI 1 (probably starting a few weeks after gear started shipping) is that timing can be sloppy. This shouldn’t be a surprise -- if you play a chord with all 10 fingers and hit all the keys simultaneously, the keyboard is going to send each note out one at a time in sequence -- by the time those messages hit the synth, it’s not unusual to have them sound as an arpeggio.

MIDI 2.0 provides two solutions to this problem. JR (Jitter Reduction) clock gives a sender a way to periodically broadcast a message that tells the receiver what its current time is. The receiver can use that information to adapt to timing differences between the two devices and render events more accurately.

JR timestamps let you use that clock information to precisely timestamp events with a resolution of 32 microseconds. When two MIDI 2.0 devices agree to use JR timestamps, the sender attaches a timestamp message to every MIDI message it sends out, and your DAW uses that data to record each event based on the time it was played, not the time it was received. Events that happen inside the same 32-microsecond window share a timestamp, and your DAW will treat them as exactly simultaneous. This is, to put it mildly, a huge upgrade.
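
Here’s a rough sketch of that timestamp arithmetic, assuming the 32-microsecond tick described above. The 16-bit field width matches how the JR timestamp is usually described, but treat the details as illustrative rather than spec-exact.

```cpp
// Convert sender-side event times into 32-microsecond timestamp ticks.
// Events landing in the same tick share a timestamp and are treated as
// exactly simultaneous by the receiver.
#include <cstdint>
#include <iostream>

constexpr int64_t kTickMicroseconds = 32; // one JR timestamp tick

// Map an event time (microseconds since the shared clock's epoch) to a
// 16-bit tick value, wrapping around as the field overflows.
uint16_t toJRTicks(int64_t eventTimeUs) {
    return static_cast<uint16_t>((eventTimeUs / kTickMicroseconds) & 0xFFFF);
}

int main() {
    int64_t noteA = 1'000'000;   // 1 second after the clock epoch
    int64_t noteB = noteA + 10;  // 10 us later: same 32 us window as noteA
    int64_t noteC = noteA + 40;  // 40 us later: falls into the next window

    std::cout << toJRTicks(noteA) << ' '
              << toJRTicks(noteB) << ' '   // same tick as noteA
              << toJRTicks(noteC) << '\n'; // one tick later
}
```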

Per-Note Control and Higher Resolution: “Analog-ish” nuance

In the past decade or so, several manufacturers have started making new kinds of more expressive MIDI controllers that use an extension to the MIDI 1.0 standard called MPE, for “MIDI Polyphonic Expression.” These controllers take a bunch of different forms: the ROLI Seaboard and McMillen K-Board look mostly like regular keyboards, the Roger Linn LinnStrument looks and plays like a fretboard grid, and the Eigenharp Alpha looks like a bassoon from outer space. The new Artiphon Orba looks like a hockey puck.

The one thing they all share is an increased number of high-resolution sensors that send data polyphonically. Most of them track not just velocity and pressure, but also the x/y position of each finger separately, and potentially other things about the playing technique. These controllers let you do things that don’t make any sense at all in plain MIDI 1. MPE does this with a clever bit of hackery: both the sending and receiving ends agree that each note being played will have its own MIDI channel, so anything you can do to notes played on a MIDI channel will apply to only that one note, completely independently of any other notes. The working groups tasked with creating MIDI 2.0 wanted to take MPE’s idea of per-note control and extend it to new, higher-resolution messages -- and thus provide even more expressive possibilities.
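
A toy version of that channel-per-note bookkeeping might look like the sketch below. Real MPE also reserves a master channel (channel 1 in a lower zone) and handles running out of channels more gracefully.

```cpp
// A simplified sketch of the MPE "channel per note" hack: each new note
// grabs its own channel, so per-channel messages like pitch bend or
// pressure affect only that one note.
#include <iostream>
#include <map>

class MpeChannelAssigner {
public:
    // Assign the next free member channel (2..16 here) to a note.
    int noteOn(int note) {
        for (int ch = 2; ch <= 16; ++ch) {
            if (inUse.find(ch) == inUse.end()) {
                inUse[ch] = note;
                return ch;
            }
        }
        return -1; // out of channels: a real implementation would steal one
    }

    // Release whichever channel this note was holding.
    void noteOff(int note) {
        for (auto it = inUse.begin(); it != inUse.end(); ++it) {
            if (it->second == note) { inUse.erase(it); return; }
        }
    }

private:
    std::map<int, int> inUse; // channel -> note currently sounding on it
};

int main() {
    MpeChannelAssigner assigner;
    // Three held notes each get their own channel, so bending one note's
    // channel leaves the other two untouched.
    for (int note : {60, 64, 67})
        std::cout << "note " << note << " -> channel "
                  << assigner.noteOn(note) << '\n';
}
```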

When we think about resolution, it’s important not to get too hung up on numbers. Think about one end being all the way off, the other end being all the way on, and the resolution is the number of places between those two extremes that you can stop at. If you’re a bass player, think about this as the difference between fretted and fretless bass.
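
To put rough numbers on it anyway: MIDI 1.0 controllers are 7-bit values, and MIDI 2.0 allows 32-bit controller values. Here’s a quick sketch of the difference, including one simple (not spec-mandated) way to upscale an old 7-bit value so that off stays off and full-on stays full-on.

```cpp
// Compare the "number of places you can stop at" between a 7-bit MIDI 1.0
// controller and a 32-bit MIDI 2.0 controller value.
#include <cstdint>
#include <iostream>

int main() {
    const uint32_t steps7  = 1u << 7;    // 128 distinct values, off to full on
    const uint64_t steps32 = 1ull << 32; // ~4.3 billion distinct values

    std::cout << "MIDI 1.0 step size: 1/" << (steps7 - 1) << " of full range\n";
    std::cout << "MIDI 2.0 step size: 1/" << (steps32 - 1) << " of full range\n";

    // One simple upscaling approach: replicate the 7 source bits across all
    // 32 target bits, so 0 maps to 0 and 127 maps to 0xFFFFFFFF.
    uint32_t cc = 100; // a 7-bit controller value
    uint32_t scaled = (cc << 25) | (cc << 18) | (cc << 11) | (cc << 4) | (cc >> 3);
    std::cout << "7-bit " << cc << " upscales to " << scaled << '\n';
}
```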

These higher-resolution controls feel “analog-ish,” as bizarre as that may seem. No matter how high-res you make a digital system, a piece of lab equipment can always measure a step between two adjacent values -- a step that a piece of analog gear could still fit an infinite number of intermediate values inside.

But human perception isn’t like lab equipment -- cognitive scientists talk a lot about a thing they call a “JND,” a Just Noticeable Difference between two stimuli. If two stimuli differ by less than that JND, there’s no way for a human to tell them apart. That wasn’t the case with MIDI 1, however: it was easy to hear what people call “zipper noise” -- tweak a controller around and you can hear the steps as things change. The extended range of MIDI 2.0 control data lets us build equipment that gets way inside of that JND zone, to make systems that are as perceptually smooth as analog systems.

There will certainly be instruments that don’t actually use all of the newly available resolution, and that’s okay. The bottom line will be greater nuance and more possibilities for using current and yet-to-be-invented controllers more expressively.

By fine-tuning timing, offering higher-res per-note control, and allowing devices to talk to one another, MIDI 2.0 builds on the tried-and-true protocol in exciting new ways, ways that promise to inspire musicians, producers, and hardware engineers for years to come.
