Hiatus Part II

I need to stop running my mouth so damn much about what I’m doing. I think most people working in experimental electronica love having these conversations, myself included, but what I intend as just sharing information about what I’ve been up to often comes off as seeking a solution to a problem.

I had this conversation the other night about issues with DPC latency I’ve been experiencing with my machine recently. For those who don’t know, DPC latency is the enemy of music production; in layman’s terms, it means processes on your machine can block tasks related to the processing of real-time audio, leading to dropouts, crackling and all kinds of performance issues with your DAW. This is pretty much a PC-specific issue because of driver diversity (it usually surfaces as a result of poorly written/unoptimized drivers) - oftentimes it’s drivers that have nothing to do with audio (ahem, NVIDIA) causing the issue. The music production crowd for PC is a niche market, so DPC latency isn’t really prioritized in most off-the-shelf builds. Music production is really a Mac thing.

I was just talking about this in terms of what I’ve been working on and why I haven’t been playing for a few weeks, but in pretty short order the conversation turned to me dumping my codebase in favor of an entirely new solution. I actually find that this sort of response is common, which is unfortunate because I’m never looking for another solution.

I like figuring things out myself; it’s just my personality. I have a codebase that does exactly what I want it to do and can be easily modified/extended. I can do anything in MIDI that I can think of from code. However, my solution is built on top of VST .NET, which makes it PC-specific. I would certainly consider a version of my plugin that would work on a Mac (e.g. a port to C++ or another framework like NPlug), but there is no scenario where I have a totally different solution that doesn’t involve the Aleator at all. It’s crazy how often people suggest that to me. Even a rewrite would mean a LOT of time spent developing when getting better at making music is my immediate goal. A lot of people in the experimental space are hyper-focused on new tech or gear - that has never really been my thing. If you’re a really good guitarist you’re not always chasing new guitars. I’m hyper-focused on generatively rendering music that sounds cool. All of the new gear, tech and processes in the world are worthless if they don’t make your music sound better or really expand your palette.

I love the VST/DAW workflow and it separates me from a lot of other people in this space. If I wanted to live code, I’d be doing that. If I wanted to be 100% modular, I’d be doing that. What I want to do is generatively flesh out loosely defined harmonic structures from code with Eurorack as an extension of whatever I have loaded locally in the DAW. If I ever perfect that process, maybe I’ll pivot at that point. It seems unlikely that will ever happen though. There is always some nuance I’m looking to add via code. The next time I’m in a dev cycle I’ll be looking to add some kind of snare scattering functionality. I could probably get a very similar effect by using a delay, but I want to do it programmatically and be able to trigger it in the Aleator UI. One thing is for sure - my code is the art, just as the music is. The idea of passing the generative component of what I do onto AI or some other application/component is antithetical to SERIES. If a suggestion involves prompts I’m immediately hitting snooze; I do the work and I do it manually. I would sit all this shit down and go back to playing guitar on the couch before I went in that direction.

Back to the DPC latency. After a recent Windows update, I noticed crazy DPC latency to the point that my DAW seemed to be useless for a few days. I was suddenly getting crackles and dropouts that made using Reaper unbearable. At some point the most egregious behavior subsided a bit, but I can still hear intermittent crackling and LatencyMon doesn’t look great:

Boooo.

This Alienware model is six years old and was never a fan favorite. I generally can’t run with a buffer smaller than 512 or else I risk underruns. So I broke down and bought a Rok Box from PC Audio Labs. These are dedicated DAW machines, specced for audio and tested for DPC latency. I’m confident that my sets will be a lot better going forward and my environment will be much more stable.

The machine gets here tomorrow; it will take a bit of time to set up. Hopefully I will be in shape to test things out at ElectroSonicWorkshop on August 27th.

Snares.

So yeah, all that shit I said in the previous post? Forget it. It worked really well, don’t get me wrong; I just really hate the idea of adding more equipment to my solution. While Reaper doesn’t allow me to send MIDI clock messages from the VST, the ultimate destination for them is Mutant Brain (Eurorack). They don’t need to be actual MIDI clock messages by the time they reach my synth; I just need some trigger firing at 24 PPQN. So instead of setting up a totally separate MIDI device and merging the MIDI streams before Mutant Brain receives any data, I can just send pulses as notes from the DAW and configure Mutant Brain to fire triggers based on those. Another problem solved!
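A rough sketch of the idea - everything here is a hypothetical stand-in for the real plugin internals (EmitNote in particular is just shorthand for however the Aleator hands notes to the host):

    // Instead of real-time clock messages, fire a short note as the
    // 24 PPQN pulse so it rides through Reaper with everything else.
    private const int PulseNote = 127;    // note Mutant Brain maps to a trigger (illustrative)
    private const int PulseChannel = 16;  // channel dedicated to pulses (illustrative)
    private int m_tickCount;

    private void OnClockTick() // called on every internal 96 PPQN tick
    {
        if (m_tickCount % 4 == 0) // 96 / 24 = 4
        {
            EmitNote(PulseChannel, PulseNote, velocity: 127, durationTicks: 1);
        }
        m_tickCount++;
    }

Mutant Brain just sees a note arriving 24 times per quarter note and fires a trigger for each one.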

I’m enjoying my little hiatus - experiencing some minor FOMO, but there’s way too much change for me to consider playing sets right now. The synth is still a work in progress; I’ve recently added Clouds and a looper (I can take the Boss RC-5 out of the signal chain!) and that will be it for a while, but it will take a couple more weeks for me to get familiar with everything. Here’s what I’m working with in the rack at the moment:

image from modulargrid.net

The 1u row gives me some room to move utilities around and expand a bit, but 104hp/4u is about as far as I want to take Eurorack. Eurorack is a tool I’m using; generative MIDI from code will always be my focus.

Speaking of which…anyone who’s seen me perform has probably seen or heard me get trapped in loops with few or no snares. That might work for specific scenarios when I’m recording, but it tends to suck the air out of the room in a live (non-ambient) setting. I think a look at how I’m approaching a fix for this will reveal a lot about my process, so let’s talk about it.

Like a lot of data generated by the Aleator, snare totals are calculated using probability theory. Specifically, I use two distributions (probability density functions, not cumulative) - continuous uniform and normal:

Continuous uniform distribution

Normal distribution

For this example we’ll focus on my use of the continuous uniform distribution (first diagram) when calculating snare totals for one drop, steppers and rockers variations. The idea with this function is that an arbitrary value between the bounds will be returned. In my code, the parameters are set as follows:

    m_continuous.Alpha = 0.0;
    m_continuous.Beta = (double)dp.Progression.TotalMeasures * 3;

This means that in a musical phrase with four bars, there will be a minimum of 0 snares and a maximum of 12. For these kinds of rhythms I distribute snares evenly across the measures, but even still…any total less than 4 will leave at least one measure with no snares, and on a 0 - 12 uniform range there’s a ~33% chance of that happening.

This code is pretty old, and I’m hard-pressed to think of an instance where I’d be playing a rhythm like this and would want several measures without snares. My fix was to leverage the “Flood” functionality I introduced in the last dev cycle to force more snares into the phrase upon regeneration if the Flood checkbox is checked in the Aleator UI. Right now that basically amounts to increasing alpha from 0 to 3 and multiplying the total number of measures by 4 instead of 3 to get beta. In this scenario that gives us a range of 3 - 16; much better odds that the majority of measures will have at least one snare than with 0 - 12.
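Here’s a minimal, self-contained sketch of that adjustment. I’m using System.Random in place of the real distribution object from the snippet above, and the class/method names are illustrative:

    using System;

    static class SnareTotals
    {
        static readonly Random s_rng = new Random();

        // Draw a snare total from a continuous uniform distribution.
        // Flood shifts the range from [0, measures * 3] to [3, measures * 4].
        public static int Draw(int totalMeasures, bool flood)
        {
            double alpha = flood ? 3.0 : 0.0;
            double beta = totalMeasures * (flood ? 4.0 : 3.0);
            return (int)Math.Round(alpha + s_rng.NextDouble() * (beta - alpha));
        }
    }

For a four-bar phrase, that’s 0 - 12 without Flood and 3 - 16 with it - exactly the shift described above.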

This is a really simple and straightforward example of how probability is used in my plugin; there are far more complex applications, especially when it comes to managing pitch and locating notes on the staff. But this is a devlog, not a textbook, so there you have it.

I had to defer on snare scattering; maybe that will be Snares II. I need to let all of my current changes marinate for a bit and let any bugs shake out, so it will probably be another week or so before I can report on that. ‘Til then…

Clocks.

So yeah, since I started playing out earlier this year and rocking with the Miami/Ft. Lauderdale locals, I’ve been talking process a lot and thought it might be cool to share some highlights from my own. This first installment is about the damn MIDI clock. If you haven’t had to deal with managing it manually, good for you.

One thing about a hybrid solution to anything, you’re always in for some bullshit. If you’re reading this you probably already know that my software is built to send MIDI note and CC messages to the DAW (in my case Reaper), where they can be routed however the user sees fit. That’s all great…as long as you don’t actually have to sync anything. If you just have a bunch of instruments waiting to receive MIDI and play notes, you don’t need to worry about a clock. Locally, CC messages ensure that the DAW and Aleator tempos always match. That means that any time based effects, LFOs, arpeggiators…all of that will be matched up. But what if you want to send MIDI data to an external source that needs to know BPM (e.g. Eurorack)?

The two primary methods for managing the MIDI clock are MTC and MIDI beat clock. MTC is meant more for managing timecode for film - keeping track of frames/seconds/minutes/hours within media. MIDI beat clock is focused more on sequencing and MIDI device synchronization, so that’s the weapon of choice here. The MIDI beat clock spec defines real-time messages for clock/tick, start, continue and stop. I already use Leslie Sanford’s C# MIDI Toolkit to manage MIDI functions within the Aleator (very old, but hey - it still works), and all of these messages are already available in that library here. Nice - another developer did the hard part like a decade ago and I just profit, right?

Nope. I don’t know about other DAWs, but Reaper doesn’t play nice with this use case. If I attempt to send real-time messages to the host (DAW) from the Aleator along with the other data, I never see them; they just go into the ether. Clearly this specific scenario is an edge case - there is a lot of bitching about clock/syncing difficulty within Reaper, but not so much about controlling the clock from a VST plugin…which I suppose makes sense. What an idiotic thing to attempt. You can slave Reaper to an external clock though - so after I observed real-time events disappearing on the way up to the host, my next idea was to use a virtual MIDI cable and attempt to fool the DAW into thinking the clock info coming from the Aleator was coming from an external MIDI device. Great idea!

Except for one thing. It seems the virtual cable can only be opened by one application at a time, and Reaper claims it. If LoopBe is enabled within Reaper, it will be inaccessible to the Aleator when attempting to send events (you’ll get a “device already in use” error at runtime). This was happening even though the device was only registered for output in the Aleator and input in Reaper. Not to mention that even if it worked, we’d now have Reaper synced to the Aleator/LoopBe clock; I’m pretty sure it can’t be master and slave at the same time, so more than likely there’d be no way to get the clock messages to the interface and we wouldn’t have solved anything. This would also lead to all kinds of shenanigans programmatically managing the transport in the DAW…I just wasn’t prepared for all that.

So we can’t send real-time events to Reaper, and there’s no easy workaround locally. I gotta jump through some hoops for these loops. Behold my configuration heading into late summer/early fall 2025:

The preferred path is annotated in red, the alternative I got working is in green. What a pain in the ass this was. Instead of just sending real-time messages to the DAW and letting them flow to/through the interface with everything else, I need a separate MIDI device to receive them and merge them with the notes and CC data downstream.

Good - more equipment, more cables…longer setup time. Just what I was looking for. The shit’s gonna hit hard though, hopefully it’s worth it. This was mostly a solution/architecture problem - from a pure development standpoint, there really wasn’t a lot going on, just the addition of a few lines of code in my Session and MidiProcessor classes and the creation/assignment of the output device itself.

This is supposed to be a dev log though, so I’ll touch on the few additions there were. Setting up the MIDI output device works something like this (a sketch assuming Sanford.Multimedia.Midi; the device-name check is illustrative, not the literal code):
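    using Sanford.Multimedia.Midi;

    private OutputDevice m_clockOut;

    // Enumerate the system's MIDI outputs and grab the one headed for
    // the merge box/interface, matching on whatever the device is called.
    private void InitClockDevice()
    {
        for (int i = 0; i < OutputDevice.DeviceCount; i++)
        {
            MidiOutCaps caps = OutputDevice.GetDeviceCapabilities(i);
            if (caps.name.Contains("UM-ONE")) // hypothetical device name
            {
                m_clockOut = new OutputDevice(i);
                break;
            }
        }
    }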

Sending messages is so simple it’s barely worth mentioning. I will talk specifically about clock ticks though, as there is some nuance there. We discuss clock resolution in terms of PPQN (pulses per quarter note). My internal clock has a resolution of 96 PPQN, but it’s standard to send clock messages at 24. That means that for every 4 of my internal ticks, I’ll send one externally…
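In sketch form (again assuming Sanford.Multimedia.Midi; the counter and handler names are illustrative):

    private int m_internalTicks;

    // Called on every internal 96 PPQN tick; forward every 4th one
    // (96 / 24 = 4) as a MIDI beat clock message. Start, stop and
    // continue go out the same way via SysRealtimeMessage.StartMessage,
    // SysRealtimeMessage.StopMessage and SysRealtimeMessage.ContinueMessage.
    private void OnInternalTick()
    {
        if (m_internalTicks % 4 == 0 && m_clockOut != null)
        {
            m_clockOut.Send(SysRealtimeMessage.ClockMessage);
        }
        m_internalTicks++;
    }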

And there we have it, clock ticks flowing to Eurorack directly from C# while Reaper is running (I’ve configured Mutant Brain to send clock ticks out on gate 12):

Huh, I thought this was gonna be a short one - sorry. Next I’m gonna figure out how to apply scatter to my snares from code so I can sound like Aphex Twin. Assuming I can get it to work, I’ll do a deep dive here. See you out there.

Don't Say I Never Did Anything For You

Hello, yes… I’m back from the dead. Greetings from sunny Delray Beach. Just a quick check-in to let you guys know that I’ve been hard at work on my album (tentatively titled DSINDAFY, as referenced in the subject of this post). I’m in the process of figuring out who will mix it - nothing is finalized yet, but I believe we’re all in for a treat.

The process on this record was to use the Aleator as a source of ideas and inspiration and then layer/build compositions in the piano roll. So, the backdrop was generated algorithmically but then I went through and carefully crafted the compositions on top of that. Hopefully you enjoy the results.

Feel free to listen to the demos here: DSINDAFY

t00dles -k