Announcing Elementary v4.0

Today I’m happy to announce the latest stable version of Elementary Audio, v4.0. The newest JavaScript packages are already available on npm, and if you’re building a custom native integration you can update your native dependency by pointing at the latest commit on main in the elementary repository.

Elementary set out with the goal of providing a simplified approach to writing audio software: one that leans heavily on pure function composition and declarative programming so that you can spend more of your time on “what” you want your audio to sound like, and less on the complexity wrapped up within the “how” of getting that done. The utility of this approach, and ultimately the success of this project, depend critically on the vocabulary that these pure functions define: both the set of audio graphs that the vocabulary enables you to describe, and the ease with which you can describe them. This has been a big point of focus in the development of v4.0, as the introduction of native multi-channel support and the updated frontend API below will show.
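
Roughly, that style looks like the following minimal sketch using the web renderer (in a browser you’ll also need to resume the AudioContext from a user gesture):

```js
import {el} from '@elemaudio/core';
import WebRenderer from '@elemaudio/web-renderer';

const ctx = new AudioContext();
const core = new WebRenderer();

(async function main() {
  // The renderer runs inside an AudioWorkletNode; wire it to the device output.
  const node = await core.initialize(ctx, {
    numberOfInputs: 0,
    numberOfOutputs: 1,
    outputChannelCount: [2],
  });

  node.connect(ctx.destination);

  // Describe "what" we want to hear: a slightly detuned pair of sine tones,
  // one per output channel. The "how" is Elementary's job.
  core.render(el.cycle(440), el.cycle(441));
})();
```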

The full v4.0 changelog is available on GitHub, but the highlights are as follows:

  • Support for multi-channel graph nodes
    • With new mc node types
      • el.mc.sampleseq
      • el.mc.sampleseq2
      • el.mc.table
      • el.mc.capture
      • el.mc.sample
  • Rewrite frontend library with explicit props args
  • Rewrite garbage collection implementation
  • Rewrite the SharedResourceMap native implementation
  • Expose root node fade times for custom Renderer implementations
💡 With this post I am also archiving the Elementary Audio Buttondown Newsletter and will be sharing ongoing development updates here on my personal blog. I’ve syndicated a select few posts here for continuity already, and I will leave the Buttondown archive up for a while. If you want to receive notifications for future updates, make sure you subscribe here.

Spotlight: Splice x Studio One

Before we get into the headline changes, I’m really excited to share the release of the Splice x Studio One integration: a partnership that brings Splice directly into the DAW. It’s the newest addition to the “Built with Elementary” showcase, and a big notch in the belt for the Elementary Audio project.

I’ve been with Splice now for just over a year, and for much of that time we’ve been working on this integration. The project architecture is not far from that of a traditional audio plugin, which meant that Elementary’s custom native integration was a perfect fit. We’re primarily using the sample playback utilities (el.sample, el.sampleseq2, etc) under the hood with some custom behavior injected through the Custom Graph Node hook.

If you’re a Studio One user, make sure you check this out! And if you’re not, maybe it’s time to give it a try. We’ve got a lot of exciting work coming down the pipe, and I hope to be able to share new ways that we’re leveraging Elementary to deliver value at Splice.

Multi-Channel Graph Nodes

Elementary has always been able to accept multi-channel inputs and produce multi-channel outputs. In fact, one of the earliest projects I took on was building spatial audio effects in higher order ambisonics, complete with encoder and decoder. Until v4.0, though, that was all accomplished by composing over individual native graph nodes that each produce only a single-channel output. That approach worked for a long time, even for multi-channel sample playback, by using a separate lightweight el.sample instance for each channel of the sample file. But this internal limitation was inefficient, and it left the vocabulary of primitive functions incomplete.

In an earlier version of Elementary, I introduced an SVF and a multi-mode 1-pole filter, both of which produce multiple outputs via the various taps in the internal filter structure. Blending those outputs is exactly how you might make a morphing filter like the one in Ableton Live’s Auto Filter. But because an individual native graph node in Elementary could only output a single channel, doing exactly that was effectively impossible: the API required that each node propagate just one of its taps.
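
For context, the closest you could get in that world was to run two separate el.svf instances in different modes and crossfade between them at the frontend level, paying for two full filters rather than tapping one structure. A rough sketch (the 'highpass' mode string is an assumption on my part; check the el.svf reference for the exact set):

```js
// Morphing filter built from two independent SVF instances, blended by
// `mix` in [0, 1]. Each el.svf call is its own filter, so the state is
// duplicated; it isn't literally two taps of one filter structure.
function morphFilter(mix, fc, q, input) {
  const lp = el.svf({mode: 'lowpass'}, fc, q, input);
  const hp = el.svf({mode: 'highpass'}, fc, q, input); // mode string assumed
  return el.add(el.mul(el.sub(1, mix), lp), el.mul(mix, hp));
}
```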

Another limitation revealed itself when I introduced the sampleseq node with time stretching and pitch shifting support. The existing approach of using multiple node instances to play different channels of the same audio file breaks down once you apply time stretching or pitch shifting, because it introduces phase incoherence between the channels of the file.

Elementary v4.0 fixes all of that, with almost no change to the frontend API. Individual native graph nodes can now output an arbitrary number of channels, and the frontend API can compose over those channels jointly or independently as needed. This means efficient multi-channel sample playback, perfect phase locking on multi-channel time stretching/pitch shifting, an open door for multi-mode filter outputs, and even some future-facing opportunities such as hosting audio plugins (VST3/AU/CLAP/etc) inside an Elementary Audio graph. I hope you’ll agree this is a big step forward for the completeness of the Elementary vocabulary.
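
To give a feel for how that reads at the frontend, here’s a sketch of stereo sample playback with the new el.mc.sample node. The props and return shape here are assumptions on my part (I’m guessing at a channels prop and at getting one frontend node back per channel); consult the v4.0 reference for the exact signature:

```js
// Sketch only: the props and return shape of el.mc.sample are assumed here.
// The point is that a single native node plays every channel of the file,
// and the frontend gets one node per channel to compose over.
// (Assumes `core` is an initialized renderer, as in the earlier sketch.)
const [left, right] = el.mc.sample(
  {path: '/samples/loop.wav', channels: 2}, // assumed props
  el.train(1),                              // retrigger once per second
  1.0,                                      // playback rate
);

core.render(left, right);
```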

Explicit Props Arguments

Turning from completeness to ease of use, the next headline change is a rewrite of the frontend library API (all of those el.* functions). Internally, every native graph node supports the idea of taking properties (props) from the frontend graph description. One of my original design decisions was to reflect that support in a flexible frontend API such that every el.* function could accept props as an optional first argument. That meant you could call el.svf({mode: 'lowpass'}, 800, 1, input), or instead just write el.svf(800, 1, input) and accept the default properties.

Arguably, it took me too long to realize that the API consistency and flexibility I was aiming for produced more confusion than value. It meant TypeScript type definitions so verbose and unhelpful that it was easy to write the wrong thing, and it made it easy to pass props to graph nodes that ultimately ignore them, leading to a confusing developer experience.

With v4.0 I’ve addressed this by rewriting the API of each of these functions to be perfectly explicit about its requirements: nodes which expect or require props have functions that demand those props as the first argument, and nodes which take no props have functions that expect you not to pass any. That means you should see a TypeScript error if you try to write el.add({key: 'sum'}, 1, 2, 3) or el.svf(800, 1, input). Even for nodes that have meaningful defaults for all of their properties (such as this SVF node), you must be explicit about accepting those defaults: el.svf({}, 800, 1, input).
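
Concretely, migrating existing code looks something like this small sketch built around the same examples:

```js
import {el} from '@elemaudio/core';

const input = el.in({channel: 0});

// Pre-v4.0, props were an optional first argument, so both of these passed:
//   el.svf(800, 1, input)
//   el.svf({mode: 'lowpass'}, 800, 1, input)

// With v4.0, a node that accepts props demands them, even if only to say
// "give me the defaults":
const lp = el.svf({}, 800, 1, input);
const lp2 = el.svf({mode: 'lowpass'}, 800, 1, input);

// ...and a node that takes no props rejects them at the type level:
//   el.add({key: 'sum'}, 1, 2, 3)   // TypeScript error in v4.0
const sum = el.add(1, 2, 3);
```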

While this is clearly a breaking change, my hope is that it’s easy enough to recover from and ultimately brings much more clarity and ease of use to this vocabulary that lies at the heart of Elementary.

What’s Next

This is a big step for Elementary Audio, and yet, even though I’m more than three years in, it feels like we’re just getting started. So let’s wrap up this update with a brief overview of what’s next on the Elementary roadmap. In no particular order,

  • High level library utilities
    • This is probably the most common point of feedback I’ve gotten about Elementary, and by now it’s obvious that the time has come. I’ll be looking into adding a suite of audio effects, including a flanger, chorus, phaser, reverb, filter delay, and others, as well as a set of utilities for writing samplers and synthesizers.
  • Improving Refs
    • Refs solve a critical problem in using Elementary: often we know precisely which values to change in an audio graph in response to some input event, so why build and diff an entire new graph description just to change those values? Unfortunately, the current implementation of Refs compromises one of Elementary’s design goals and ultimately makes for overly complicated use. Moving forward, I’ll be looking at potentially deprecating Refs in favor of Signals (roughly in the spirit of the TC39 Signals proposal).
  • Dev Experience
    • Right now, it feels too hard to get started with Elementary, even for me when I just want to scratch out a new idea or explore something. I often reach for the Elementary playground, but it feels woefully incomplete. Sometimes I then reach for the web-renderer, which in my opinion has the best “getting started” UX of the available options, but even that carries setup overhead. And plenty of users want to use Elementary in a plugin running in their DAW, which right now typically means a big lift compiling the SRVB example project. This is exactly one of the areas I’ll be focusing on in the coming months, aiming to introduce tools that get you exploring sound with Elementary as quickly and with as little friction as possible.
  • Dev Log
    • Finally, and though this may be an ambitious personal goal, I am aiming to start writing short updates on Elementary development as I work on them. I think this will be a good opportunity to elaborate on the thinking and the work that goes into the details like those I’ve described above, and also a good opportunity to communicate the project’s progress in between these larger announcements. Keeping up a proper dev log will require a fair bit of diligence, and I’ll do my best, but you can help keep me honest by joining the Elementary Audio Discord and chatting about the project!