I've been taking a class on synth patching that covers all the controls in real depth, and I recently learned about doubling and transposing oscillators to make a sound more interesting. That's what I attempted here: a dual square wave patch with one oscillator set up normally and the other transposed (or so it should be) exactly 7 semitones higher.
It works on all keys in legato mode on the MIDI controller node, but it doesn't sound right; the math seems inexact, and the tones being emitted are too dissonant, at least to my ear. So I switched it to duophonic mode, and that's where things got really not awesome. I can't figure out what's wrong, but every alternating key now emits no sound at all. One key will be fine, and the very next one (a semitone up) doesn't work.
I cannot for the life of me understand what is going wrong here, unless maybe it's a bug? I also tried adding nodes to convert the math from frequency to octave and back again, but it behaves exactly the same: legato doesn't sound right, and duophonic is silent on alternating keys. Why would every other key behave this way? Does anyone have any idea? I've attached both patches below.
You're trying to put poly(2) oscillator outputs into the mono inputs of the MonoToStereo node. You only need to add them together and put a PolyToMono node at the end. The maths is better in your second patch for getting a musical interval, although only one expression is really needed to raise hz by a perfect fifth (7 semitones): 2^(7/12)*hz.
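For reference, equal-tempered transposition by n semitones multiplies the frequency by 2^(n/12), so a fifth up is a ratio of about 1.498, not an additive offset. A quick Python check of the arithmetic (440 Hz is just an example input, not something from the patch):

```python
# Equal-tempered transposition: n semitones up multiplies frequency by 2^(n/12).
def transpose(hz: float, semitones: float) -> float:
    return hz * 2 ** (semitones / 12)

a4 = 440.0
fifth_up = transpose(a4, 7)          # perfect fifth = 7 semitones
print(round(fifth_up, 2))            # ~659.26 Hz (E5)
print(round(fifth_up / a4, 4))       # ratio ~1.4983, close to the just 3/2
print(transpose(a4, 12))             # 880.0, a full octave doubles the frequency
```

The same formula is what the single expression node computes per note, with the MIDI-derived hz signal in place of the constant.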
You were also using the gate for the oscillators' amp while the same gate was connected to an envelope changing the level. You could just put the envelope into the osc amp inputs instead; in the version below, I left it attached to the level and turned the oscillators on permanently.
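A tiny sketch of why driving the amp with the raw gate while an envelope also shapes the level is a problem (hypothetical values, and Audulus evaluates this per sample; the function name is mine, not a node):

```python
# Hypothetical sketch: if the raw gate drives the oscillator's amp input AND
# the gate-triggered envelope is applied again at the level stage, the two
# attenuations multiply.

def output_level(gate: float, env: float, osc_signal: float) -> float:
    amp = gate * osc_signal      # gate wired straight into the osc amp input
    return env * amp             # envelope applied a second time on the level

# During the envelope's release, the gate is already 0 but env may still be 0.6.
# The oscillator is muted by the gate, so the release tail is cut off:
print(output_level(gate=0.0, env=0.6, osc_signal=1.0))  # 0.0

# With only the envelope controlling amplitude, the tail survives:
print(0.6 * 1.0)  # 0.6
```

That's why either moving the envelope to the amp input, or leaving the oscillators on and letting the envelope alone control the level, behaves better.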
Respect and gratitude for the quick reply; however, I think your assertions are mistaken, even though your changes may have resolved the issue.
If I understand correctly, poly(2) means two notes can sound simultaneously, which would mean that, in theory, I should have 4 voices total when you combine the two dual-voiced oscillator nodes. That's 4 voices, not channels, so it shouldn't be an issue that 2 voices go into channel 0 and 2 into channel 1.
If you were correct, there would be no way for us to push 8 voices, or 16 or more from a single oscillator, on the simple dual-channel setup that most of us have. Unless I'm misunderstanding something about the fundamental concepts above, or about the nodes I used (which is entirely possible; I'll never claim to be the smartest person anywhere under any circumstances), something else is going on in my patches that isn't explained by what you mentioned.
Thanks again for the reply, and if you think of anything else, let me know, because I'm keen to get this figured out. @robertsyrett @stschoen, what are your thoughts?
The problem in your patch has to do with how multichanneling works in Audulus 3.
I'm not that good at explaining this, so I'll just recommend you take a look at the PolyToMono Node Intro.audulus (101.7 KB) (if you haven't already) to understand exactly how polyphony and multichanneling work in Audulus 3.
It's basically the option to route multiple signals through one cable.
The cause of the problem in your patch is that (as @AccidentalCircuits said) you are using a MonoToStereo node to combine the signals from two oscillators that each carry two channels, because you are using 2-voice polyphony.
This is a problem because the MonoToStereo node doesn't combine two signals. It takes two mono signals and puts them out on one multichannel cable.
If the input to a MonoToStereo node already has multiple channels, the node only takes the first channel.
So you can't hear every second note you play: it is routed through the second channel, and everything that isn't in the first channel is lost at the MonoToStereo node. multichannel example.audulus (5.4 KB)
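A toy Python model of the behavior described above (the function names and list representation are my paraphrase of the explanation, not Audulus's actual implementation): a polyphonic cable is a list of per-voice channels, MonoToStereo keeps only the first channel of each input, and PolyToMono sums every channel down to one.

```python
# Toy model of multichannel cables: a "cable" is a list of per-channel values.

def mono_to_stereo(left, right):
    # Keeps only the FIRST channel of each input; extra channels are dropped.
    return [left[0], right[0]]

def poly_to_mono(cable):
    # Sums every channel down to a single mono signal.
    return [sum(cable)]

osc1 = [0.5, 0.25]      # 2-voice poly: voice A on channel 0, voice B on channel 1
osc2 = [0.125, 0.0625]

# Routing poly cables straight into MonoToStereo silently discards channel 1,
# which is why every second note (assigned to channel 1) was inaudible:
print(mono_to_stereo(osc1, osc2))   # [0.5, 0.125]; voices 0.25 and 0.0625 are lost

# Adding the oscillators first, then collapsing with PolyToMono, keeps all voices:
mixed = [a + b for a, b in zip(osc1, osc2)]
print(poly_to_mono(mixed))          # [0.9375]
```

This matches the fix suggested earlier in the thread: sum the oscillators, then PolyToMono at the end.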
Don't sell yourself short; that makes perfect sense, and you did an excellent job of explaining why. Thanks for taking the time to explain in detail and cite examples, even though that's not how I expected things to work. In my head I had a mental picture of the mono to stereo as two small pipes feeding into a divided larger pipe, with the rules of the larger pipe applying to the smaller pipes, just without multiple channels, if that makes sense? So I expected the same capability: multi-voiced audio in a mono-channel tube being funneled into a dual-channel tube, combining the voices from both sides, and then separating back out when it got divided at the stereo speaker node. Like I said, perfect explanation, and thanks for taking the time to give me a deep dive.
Also, credit where it's due, @AccidentalCircuits: you were absolutely not mistaken. Still, it's nice to know the reasons for the unexpected behavior, rather than just being told "this theory of relativity is wrong and I fixed it for you" without an explanation of the rules of special relativity, lol. It's not on you that I'm dense, though. Thanks again for the reply.
Just curious, is there a reason you're doing this with poly rather than routing one MIDI output to two oscillators and hard-panning the oscillators? From your description of what you're exploring, that seems like the way to go.
I agree with what you said, though I didn't know what you were trying to do. I assumed you were making a mistake with the mono to stereo, but it turns out you were going for some interesting routing and needed an explanation in that direction. I'm curious what you're trying to do now.
Hey @espiegel123 and @AccidentalCircuits! Thanks for the replies. The reason for the weird routing is that I was hoping to route the in-tune oscillator through the right channel and the oscillator detuned up a fifth through the left, so I could get a better idea of what I was working with. I also thought it might sound kind of cool to have different but still harmonious frequencies coming through each channel and let the listener's brain do the blending. But it seems the only way to accomplish that is with two of everything from the oscillator nodes onward to the end, as you'll see in the attached patch.
It doesn't sound cool and different enough that I'm willing to double up everything for a mildly interesting concept (I figured it would sound a lot cooler with some slight modifications and effects applied), so this is probably as far as the idea goes. If either of you has suggestions for a workaround I might be missing, I'd be happy to hear them, but from where I'm sitting it just looks too tedious and time-consuming. Maybe A4 will have a feature that makes this possible, but it certainly won't be a deal breaker if it doesn't.
Anyway, like I said, thanks for the replies, and sorry for not making my intent clearer in the first post; I was trying to keep it short so I wouldn't write an entire article's worth of text for a simple (probably dumb) question, lol. Have a great evening!