I started reading the Audulus 3 Module Library Docs. The signal types section is incredibly useful for understanding the fundamentals of the environment… and I have some questions.
What is the audio rate (at which signals are processed)? Is this a setting or a constant that’s hardcoded into the software?
Is Audulus event-driven, or is everything re-processed constantly (at audio rate)? My question is: if a value is 0 and it changes to 1, is it processed because an event listener registers the change, OR is the entire signal path re-processed constantly at the audio rate mentioned above?
It sounds like the 4 expected signal types are:
on / off: ex. 0, 1…
0 to 1: ex. 0.5, 1…
?: ex. -1/12
-1 to 1: ex. -1, 0.5…
Is the above correct?
For the 1/octave
What is the range of acceptable values? The human ear has a range (which varies a bit from person to person), but is there an expected range for this type of signal in Audulus? In other words, could I go to -7 or 10 (even though it would not produce a sound one could hear)?
If I have a modulation based on an expression/vector (say it’s a sine) would the modulation be processed at the audio rate?
On iOS, it’s 44.1k. On a computer, you can run it at higher sample rates, but there are some issues. One of them, I think, is that the VCO’s tuning is tied to the sample rate, so a higher sample rate increases pitch. This will be fixed at some point.
Everything in Audulus is processed at audio rate. Feedback is processed in frames unless you use a z-1 Unit Delay node, in which case it is processed every sample. You can also use the z-1 Unit Delay for feed forward applications, and they stack so you can create z-2, z-6, etc.
Yes. There’s a small exception for things like the Delay Sync module, which outputs a time in seconds, but that’s about it off the top of my head.
Also, gates can have height 0 to 1 when going into an envelope, which allows for dynamics. But otherwise they’re just 0 to 1.
The Octave range isn’t restricted on the high end except for the sample rate limit, and on the low end, you can go low and slow so any VCO can be an LFO if you want it to be (and then translate it into the 0 to 1 range). -5 to 5 is 10 octaves, but -4 to 4 is the most musical range.
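If it helps to put numbers on that range, here is an illustrative Python sketch. It assumes the common Audulus convention that an octave signal of 0 corresponds to A440 (pitch doubles per +1, so a 1/12 step is a semitone); treat the anchor frequency as an assumption, not gospel:

```python
A4 = 440.0  # assumption: an octave signal of 0 maps to A440

def octave_to_hz(o):
    """Convert a 1/octave signal to frequency: +1 doubles the pitch."""
    return A4 * 2.0 ** o

# -5 to 5 is 10 octaves:
print(octave_to_hz(-5))  # 13.75 Hz - sub-audible, LFO territory
print(octave_to_hz(0))   # 440.0 Hz
print(octave_to_hz(4))   # 7040.0 Hz
```

So going below roughly -4 puts you under the audible range, which is exactly why a VCO driven there works as an LFO.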
You mean if you have a sine wave LFO modulating something like filter cutoff, is that processed at audio rate? Yes. Everything is always processed at audio rate, but unless you explicitly tell Audulus otherwise with a z-1 node, everything is processed in chunks of about 300 samples called frames. You usually only need z-1 in feedback configurations when audio is involved, but it’s helpful in other circumstances too, like the High Low detector or Change detector, which compare one sample to the next and update if some condition is met.
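To make the frames-vs-single-sample idea concrete, here is a rough Python sketch. The frame size, the `0.5` feedback coefficient, and the function names are all illustrative assumptions, not Audulus internals:

```python
FRAME_SIZE = 300  # Audulus frames are roughly this size

def process_in_frames(samples, fn):
    """Default mode: grab a frame, process every sample in it,
    output the whole frame, then move on to the next frame."""
    out = []
    for start in range(0, len(samples), FRAME_SIZE):
        frame = samples[start:start + FRAME_SIZE]
        out.extend(fn(s) for s in frame)  # every sample is still processed
    return out

def process_with_unit_delay_feedback(samples, fn):
    """What a z-1 Unit Delay forces: each output depends on the previous
    output sample, so samples must be computed one at a time."""
    out, prev = [], 0.0
    for s in samples:
        y = fn(s) + 0.5 * prev  # feedback reads the one-sample-delayed output
        out.append(y)
        prev = y
    return out
```

Note that both functions touch every sample; the difference is only whether an output sample can depend on the immediately preceding output, which frame-at-a-time processing can’t provide.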
Thank you! Almost everything is clear now. Follow up questions:
Is the default on a computer 44.1k as well? You mentioned it can be run higher… Does that mean I can set the rate somewhere?
You lost me here… What is a z-1 Unit Delay node?
When I run into an exception will it be obvious (ex. I open the module and there is a note next to the output)? If it’s not obvious, do I plug the signal into a meter to figure out what values it outputs?
This is confusing… You are saying they are 0 to 1 in both cases…
You mean that unless there’s a z-1 node, Audulus takes a frame of ~300 samples, processes them, and then takes the next 300 and repeats?
Depends on your computer and its interface. The plugin will also run at your host’s sample rate, so you can set it in Ableton Live, for example.
A unit delay forces Audulus to process things one sample at a time instead of in batches of ~300 samples. If you just have audio moving from one effect to another, processing in batches is fine. But there are some cases when you want Audulus to run in single-sample processing.
You might be getting confused about batches vs. single-sample processing: every sample is still processed when working in batches. It’s not that only every 300th sample is processed; the output just isn’t produced one sample at a time unless you explicitly ask for that with a Unit Delay node.
z = the variable for a sample
-1 = delay by one sample
The only exception I can think of is this one, and it’s documented in the manual.
No - for example, an envelope won’t retrigger unless the gate resets to 0; it just rises to the height of the gate. If you feed a sine wave into an envelope’s gate input, it won’t trigger, because the infinitesimally small moment in time where the sine wave touches 0 is unreadable at a 44.1k sample rate.
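A quick way to see why the sine’s zero is “unreadable”: count how many samples of a 1 Hz LFO actually equal 0 over one second at 44.1k. This is a standalone Python illustration, not Audulus code:

```python
import math

SR = 44100  # samples per second

def sine_lfo(n):
    """1 Hz sine wave, sampled."""
    return math.sin(2 * math.pi * n / SR)

def square_lfo(n):
    """1 Hz square gate: 1 for the first half of the cycle, 0 for the rest."""
    return 1.0 if (n % SR) < SR // 2 else 0.0

# Over one second, the sampled sine equals exactly 0 only at n = 0
# (floating-point values near the zero crossing are tiny but nonzero),
# while the square sits at 0 for half the cycle.
sine_zeros = sum(1 for n in range(SR) if sine_lfo(n) == 0.0)
square_zeros = sum(1 for n in range(SR) if square_lfo(n) == 0.0)
```

So an envelope waiting for the gate to return to 0 gets 22,050 samples of “off” per second from the square, and effectively none from the sine.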
So for all other purposes except when entering an envelope, a gate is always 0 or 1.
There are ~300 samples to a frame, and Audulus processes them all in one go, outputs them, then takes the next 300 samples and repeats. If there is no advantage to running in single-sample mode, it’s pointless to waste the extra CPU cycles on it. But you have the option to do that when it’s necessary, and Audulus will algorithmically figure out which parts of your patch need to run single-sample and which can run in frames.
@biminiroad: on iOS, is the audio rate 44.1k even if your device is using a different sample rate? Some iOS devices now use 48k as their native sample rate (which can’t be changed unless you use an audio interface), not to mention that some people run their audio interfaces at sample rates other than 44.1k.
Audulus uses the sample rate set by the host system. Typically this is dictated by the ADC and DAC of the current audio device. For example, an iPhone 8 is locked at 48 kHz when using the internal speaker or wired headphones, but is typically at 44.1 kHz when using Bluetooth. On iOS there isn’t any built-in way to manually set the sample rate. On macOS you have a bit more control; my Focusrite Scarlett will run at 44.1, 48, 88.2, and 96 kHz. I did some testing this morning and I’m happy to report that @Taylor has already fixed the sample rate issue with the oscillator node. When I first started with Audulus, increasing the sample rate would change the pitch of the oscillator. I created a patch with a sample rate node and an oscillator node and confirmed that the oscillator pitch is unchanged whether the iPhone is at 48 or 44.1. I then tested it on my Mac and got the same results: the oscillator pitch was stable at 44.1, 48, 88.2, and 96 kHz. Higher sample rates should reduce the occurrence of aliasing but will also use more CPU. There has been some discussion about providing a mechanism for selective oversampling in the parts of a patch where it would make the most difference, and we may see this introduced in a future version of Audulus.
Hate to squash some dreams here but I think we nixed that - you’ll be limited to global sample rate. There are some things like new nodes that will work more efficiently (like a larger mux/demux that doesn’t need expression nodes to switch) that will shave off some CPU around the edges, but that’s it.
Got it. I’m familiar with the Audio MIDI Setup in Mac OS. You can see what I get in mine below (without plugging in an external interface).
So I could run Audulus at 96 kHz (on my laptop) if I’m okay with the CPU trade-off?
I understood what you meant about the batches. I also glanced at the Unit Delay Examples (thanks for providing them!). I will have to return to the z-1 Unit Delay node when I am slightly deeper in the Audulus game.
In summary: a modulation signal coming from sine, triangle, or saw waves hits the 0 point too briefly to trigger the reset? Does this mean I should only send a square wave output into an envelope’s gate input?
Btw, I understand what you meant about the gate height [less than 1 results in smaller amplitude] from your example. Thanks for showing it to me!
You mean that I trigger single-sample processing by using a z-1 node? Then Audulus determines which part will be processed one sample at a time and does the rest in frames? I understand this in theory, but I’ll have to revisit the z-1 node examples patch again when I know more of Audulus to understand it in practice.
Btw, I think that an educational onboarding patch which explains + shows examples of the 4 different [expected] signal types in use would be incredibly beneficial to first-time users. It would be the first thing I would like to see.
I might make it myself as my contribution to the onboarding part of the project :), but I’m also a bit slow rn… someone else could probably make it x10 faster than me… In any case if anyone volunteers to do this, I will work with them to make sure it has all the detail that would be helpful to a first time user wondering about these things.
P.S. A large chunk of my current “day job” consists of improving the usability of applications.
FWIW, if one doesn’t exist already, a ‘tutorial’ patch that summarizes (with simple examples) best practices for rounding and re-scaling (such as re-scaling and stepping knobs) would probably be useful to a lot of people.
You should use clock signals or other gate signals for envelope gate inputs. To modulate the parameters of the envelope, you can use whatever modulation signal you want.
Triangle, saw, and sine waves only hit 0 for an infinitesimally small moment within each cycle. A square wave sits at 0 for longer, and that example was just an illustration of it. In general it makes more sense to use a clock (which is basically a square LFO) to trigger envelopes.
Yes, but it’s only necessary when you need single-sample processing to get the result you want. If something doesn’t work the way you expect it to, then z-1 might help, but if you don’t know what you’re doing, we can always help here on the forum. In general, when working at the level of modules, you won’t need it.
The expression I use when I want a range of integer values from a knob is floor(k*maximum + 0.5), where maximum is the highest integer value I want. The 0.5 offsets the knob so that the maximum occurs before the knob reaches the very end. So floor(k*8+0.5) will give you a range of 0 - 8. If you need a range starting from a different value, just add the offset at the end: floor(k*7+0.5)+1 gives you a 1 - 8 range.
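As a sanity check of that expression, here is a direct Python translation (the `stepped_knob` helper name is mine; `k` is the 0-1 knob value):

```python
import math

def stepped_knob(k, maximum, offset=0):
    """floor(k*maximum + 0.5) + offset: quantize a 0-1 knob to integers.
    The +0.5 centers the steps so the top value lands before the knob's end."""
    return math.floor(k * maximum + 0.5) + offset

print(stepped_knob(0.0, 8))     # 0
print(stepped_knob(1.0, 8))     # 8
print(stepped_knob(0.95, 8))    # 8 - maximum reached before the very end
print(stepped_knob(0.0, 7, 1))  # 1 - floor(k*7+0.5)+1 gives a 1-8 range
print(stepped_knob(1.0, 7, 1))  # 8
```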