This looks promising. The oscillator mixer made by @robertsyrett is great for when you want to mix up to 6 oscillators together and keep them all at the same -1 to 1 output. This might also work for n oscillators that are all summed before this circuit.
You might also be able to use this as a compressor?
What I got from the paper is that they are proposing ways of partitioning the detection duties in a device, in order to conserve computational resources by leaving the ‘colouring circuit’ idle. Then, by utilizing various crosstalk techniques, the idea is to awaken the engine when the night watchman detects a suitable situation. This approach would free up time to devote to other tasks. Is this correct?
If I interpret this correctly, this approach involves iterating over the samples multiple times using a for loop: for n = 1:num_iter, where num_iter is the length of x[ ], the array of input samples. This will certainly work, but it can only be done after the fact. In our case, we are attempting to adjust the gain in real time rather than in post-processing. Because in general there is no way to know what will happen to the signal in the future, we have to use a reactive approach, which takes us back to a more traditional compressor design.

For the special case of summing oscillators while limiting the maximum output to -1 to 1 and maintaining their relative volumes, the simplest approach is to add the outputs and divide by the number of oscillators. This disregards the fact that phase cancellation might occur, but it ensures that the total stays below the clipping value. A more sophisticated approach to AGC for a static signal would be to determine the peak signal level and adjust the gain accordingly. Assuming the signal is unchanging, this is not too difficult. It becomes more challenging when the signal level might vary, which is when a more traditional compressor would be used.
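To make the two approaches concrete, here's a minimal offline sketch in Python with NumPy (the function names and test signals are my own, purely illustrative):

```python
import numpy as np

def mix_oscillators(oscs):
    """Sum the oscillator signals and divide by their count.
    As long as each input stays within [-1, 1], so does the mix;
    phase cancellation just means the result is often quieter
    than it strictly needs to be."""
    oscs = np.asarray(oscs, dtype=float)
    return oscs.sum(axis=0) / len(oscs)

def static_gain(x):
    """Peak-based AGC for an unchanging signal: scale so the
    largest absolute sample just reaches 1.0."""
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

# Three unit-amplitude sine "oscillators" at different frequencies.
t = np.linspace(0, 1, 1000, endpoint=False)
oscs = [np.sin(2 * np.pi * f * t) for f in (2.0, 3.0, 5.0)]

mix = mix_oscillators(oscs)          # guaranteed within [-1, 1]
normalized = static_gain(sum(oscs))  # peaks at exactly 1.0
```

The divide-by-count version is safe but conservative; the peak-based version uses the full range but only works because we can see the whole (static) signal up front.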
Someone wrote “You should look into the behavior of the adaptation signal. I think some guys call it the steering vector. Basically how does the a-priori parts of RLS behave and what can be known before hand with respect to adaptation versus known signals.”
Isn’t the point of the paper to perform a division of labour on the algorithm, so that you make a prediction based on reasonable expectations? Much like how we drive at speed on roads we have never been on, trusting that no one has put a switchback ahead without a sign, whereas on unfamiliar backroads we use far more of our attentive energy to focus on curve changes.
In this scenario there would be a detection of a road grade change that would alert us to attend to unmarked changes when we encountered graded, and then ungraded, terrain.
So the idea here is not to achieve fine precision, but to optimize the appropriate level of calculation, elegantly, by detecting when and when not to apply fine-grained monitoring.
If this claim were true, wouldn’t it falsify the paper?
Self-driving car algorithms are a lot more advanced than what you can do in Audulus. Predictive algorithms are possible, just not easy. In audio, we usually use lookahead: delaying the signal by a little to give time to analyze it and adjust.
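As a rough illustration of lookahead (nothing from the paper or from an actual Audulus patch; the names and window size are made up), here's an offline Python sketch. In real time you would delay the output by the lookahead length; offline we can simply index ahead into the already-buffered "future":

```python
import numpy as np

def lookahead_limit(x, lookahead=64, threshold=1.0):
    """Toy lookahead limiter: the gain applied to each sample is
    computed from the peak of that sample plus the next
    `lookahead` samples, which a real-time version would have
    buffered in its delay line."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        # Peak over the current sample plus the buffered future.
        peak = np.max(np.abs(x[i:i + lookahead + 1]))
        gain = threshold / peak if peak > threshold else 1.0
        out[i] = x[i] * gain
    return out
```

Because the gain comes from samples that haven't been emitted yet, the level starts coming down before a loud transient arrives, rather than after it, which is exactly what a purely reactive design cannot do.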
I suppose that in some sense it’s generally possible to make predictions about the future state of a variable, and the same is probably true for music. For example, if you have analyzed the fundamental frequency of a note, it would not be unreasonable to predict that the same frequency will be present in the next few samples, or that a kick drum will continue to sound at essentially the same beat. We all have expectations about what the next note in a melody is likely to be, given what has gone before. In fact, adjusting the attack and release times and compression factor for a compressor is generally based on expectations about what kind of transients are likely to occur.

It would certainly be possible to devise a unit that analyzed the input stream and adjusted itself in anticipation of what is likely to come next; however, as @biminiroad has pointed out, in this case the additional computational complexity may not be justified by the result. A major limitation of the current version of Audulus is the difficulty of storing past state. The S&H node is the only writable storage we have available, so retaining information about what has previously occurred in order to make a reasonable prediction of what may come next would be cumbersome at best.

In some sense, the existing compressor modules in Audulus already make a short-term prediction: when an over-threshold signal is received, they assume that the following signal levels will remain above threshold, at least for a time, and then probably decrease.
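For contrast, the reactive core that traditional compressors build on is just a level tracker with separate attack and release smoothing. A minimal Python sketch (the coefficients are illustrative values I chose, not taken from any Audulus module):

```python
import numpy as np

def envelope_follower(x, attack=0.9, release=0.999):
    """One-pole envelope follower: respond quickly (attack) when
    the level rises above the envelope, decay slowly (release)
    when it falls. A compressor derives its gain reduction from
    an estimate like this."""
    env = 0.0
    out = np.empty(len(x))
    for i, s in enumerate(x):
        level = abs(s)
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        out[i] = env
    return out
```

The asymmetric coefficients encode exactly the "expectation" described above: once the signal goes over threshold, the slow release assumes it will stay loud for a while and then taper off.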
I thought you were making an analogy to predictive algorithms with self-driving cars (or just driving a car yourself). There’s a lot of predictive “code” written into us as car drivers that isn’t simply a reaction to changes in the environment. What this appears to do is monitor the output and automatically adjust it after 1 sample. You can see in the graph that the amplitude actually increases in a tiny burst before settling down. This shows that it’s not predictive, but reactive. It would be more akin to suddenly steering around a deer that jumped out into the road than to the heightened general awareness and thought process you have on unfamiliar roads.
If I’m reading this correctly, it appears to be a buffer (also referred to as a signal follower). If it worked the way I think we’re all hoping the AGC circuit above would work, then when you stopped playing, it would amplify the noise floor up to the set level and you’d just hear static.
Since Audulus doesn’t have a noise floor, it’s possible to conceive of something that would analyze the average amplitude of an incoming signal and divide it by a factor that would get it to settle between -1 and 1. The way it’s done currently works nicely but has to have a fixed number of inputs (the VCO Mixer 3x1 and 6x1 modules in the released library).
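A sketch of that idea in Python (the target level and forgetting factor are made-up values, and this is a running-peak estimate rather than a true average, which is simpler to state):

```python
import numpy as np

def auto_gain(x, target=0.9, forget=0.9995, eps=1e-9):
    """Divide the mix by a slowly-forgetting peak estimate so it
    settles within [-1, 1] without knowing how many inputs were
    summed. Silence divides to zero rather than being boosted,
    which only avoids hiss because Audulus has no noise floor."""
    peak = eps
    out = np.empty(len(x))
    for i, s in enumerate(x):
        peak = max(abs(s), forget * peak)  # track and slowly forget peaks
        out[i] = target * s / peak
    return out

# A mix of three unit sines, which can momentarily exceed 1.0.
t = np.linspace(0, 1, 4410, endpoint=False)
mix = sum(np.sin(2 * np.pi * f * t) for f in (220.0, 330.0, 440.0))
leveled = auto_gain(mix)
```

Since the estimate is always at least as large as the current sample, the output can never exceed the target; the trade-off is that after a loud peak, everything stays attenuated until the estimate decays.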