I have been noticing some people are struggling with GAS. I was watching a demo of the Model D. Personally, getting vintage synths for a good price is not my interest, but I can understand the appeal. I think with Behringer the selling point is that you get X number of oscillators, filters, etc. It’s got MIDI, USB MIDI, etc., etc… and then it costs how much?
Yeah, well, great. But hold on just a minute. We really need to give @biminiroad some credit for his library reface. It has taken me six months just to crack some of the amazing tools. I slapped together a subtractive synth that I think competes with the Model D, partly because the reface library modules are really well put together. Love the filters. Nothing special here, just the basic stuff you see on these synth reissues. So the point is: watch the video and hear the fatness. Then open the patch and tell me it doesn’t reduce your GAS!
Of course, if you want it fatter you can just add another oscillator and filter, like in the Model D demo. I recommend fiddling with the detuning, drive, cutoff, modulation, etc. This is obviously nothing new to seasoned members, but I think it is important to keep returning to these basic benchmarks as the software progresses and manufacturers tempt us! Fatter Subtraction.audulus (403.7 KB)
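For anyone curious what the fatness recipe looks like stripped to its bones, here is a toy sketch in Python. The numbers and names are all mine, not anything from the patch: two naive sawtooth oscillators detuned by a few cents, summed, and run through a one-pole lowpass.

```python
import math

SR = 44100  # sample rate, Hz

def saw(freq, n, sr=SR):
    """Naive sawtooth in [-1, 1): fine for illustration, aliased in practice."""
    return [2.0 * ((i * freq / sr) % 1.0) - 1.0 for i in range(n)]

def one_pole_lp(signal, cutoff, sr=SR):
    """The most basic 'subtractive' filter: a one-pole lowpass."""
    a = math.exp(-2.0 * math.pi * cutoff / sr)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

# Two oscillators, detuned by 7 cents, summed and filtered:
n = SR // 10                    # 100 ms of audio
detune = 2 ** (7 / 1200)        # detuning expressed as a pitch ratio
mix = [0.5 * (a + b) for a, b in zip(saw(110.0, n), saw(110.0 * detune, n))]
voice = one_pole_lp(mix, cutoff=800.0)
```

The detuned pair beats slowly against itself, which is most of what people mean by "fat"; the drive and modulation in the actual patch are doing the rest.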
While I would certainly agree that @biminiroad’s library reface is an outstanding collection of modules and, as you know, I am a big proponent of digital, both in software and hardware, I have to admit there is a certain appeal in the hands-on approach implicit in an analog synth. Even if we were able to exactly model the original MiniMoog, I’m sure there would still be a market for a hardware version. As much as I enjoy using the soft instruments available in Audulus and other software, I still get a kick out of turning the knobs on a “real” instrument. I would agree that you can make almost any sound with a good laptop, some software and maybe a controller or two. In some ways, Eurorack vendors and the rest of the hardware synth market are dealing in obsolete equipment. As general purpose computers get faster and more capable and the software to run them gets more sophisticated, there is less and less need for special purpose audio chips and analog circuitry, but I suspect that many people will still want to buy something tangible rather than another piece of software.
These days almost all recorded sound is reduced to a stream of bits. In theory it should be possible to exactly replicate any sound that can be digitally recorded, but doing so in practice isn’t so simple. While I think it’s possible to closely emulate most of the components in an analog design, the non-linearities in analog circuits in general, and filters in particular, are challenging to match. I don’t believe that this is impossible by any means and modern software comes very close, but even if we manage to make the differences vanishingly small, people will still swear that the original hardware sounds better. I find it interesting that vinyl records seem to be making a comeback, even though CDs have some significant advantages in sound quality. You can argue whether there is any audible difference between the two formats and whether a higher sample rate and bit depth really make a difference in practice, but I doubt you can convince LP enthusiasts that vinyl records should go the way of Edison wax cylinders. (I have a pretty big record collection BTW and still have a turntable to play them on, so I guess I’m guilty as well.)
Personally I’m glad that Behringer has chosen to build some low cost analog semi-modular units. They have allowed me to get some experience with hardware modular synthesis while staying within my limited budget. They’re certainly not perfect, but they’re excellent value for the money, particularly compared to purchasing Eurorack modules with similar capabilities. In combination with Audulus and an ES-8 they have expanded my musical horizons.
I think that I see a major shift in the reface modules here. Part of talking about the immediacy of hardware is also just good front-end design with wide sweet spots. So, I have a Dinky’s Taiko 12-bit digital drum voice Eurorack module. Why bother? Because I wanted to be able to have a complete portable Eurorack rig that doesn’t require an iPad to run, but benefits greatly from one.
I also have the Analog Four, so I understand the balance between digital and analog. But I think that with the reface, especially these filters, what we get is the key features brought into a face that offers just the right controls. I think one can watch these videos and get inspired by the hardware layouts and then see what can be done in Audulus. It seems to me that there is often a sense of progress that is user-end, and that may not get as much attention from feature-chasing programmers. There is nothing new in most of these synths coming out. They just nailed layouts, scopes, capacitive keyboards, sequencers, modulation ins and outs. But we have all this with the ES-8/9.
It’s true that much of the appeal of hardware is the ease with which various aspects of the sound can be controlled. While MIDI controllers offer much of the same basic functionality, the current crop leave much to be desired in terms of transparency. Hopefully the capabilities offered by the new MIDI standard will encourage manufacturers to release new controllers that feature automatic labeling for the controls as well as automatic mapping. Eventually I think we will see controllers that adapt themselves to the software instrument to the point that there will be little difference between a hardware digital synth and a software one and analog emulations will be so close to the hardware that it will make little difference which you choose, at least from the perspective of sound quality.
I want to say that Audulus is making progress in digital synthesis. Not only is this theoretically possible; there is a certain limit to what any medium can do for you, and after that it is a matter of how far you take it. I don’t think anyone can know the limitations of digital into the future. But it is looking like there is something going on here that even VCV Rack can’t approach. Audulus keeps bringing me closer to the phenomena.
I think that emulation is a limiting concept. I have tried to make this point elsewhere on this forum in terms of DeLanda’s book Intensive Science and Virtual Philosophy. Simulations are not emulations. They are closed systems that can have representative relations, but not relations of similitude. Unfortunately you hold the view that philosophy is somewhat impractical, so I imagine this is a hard sell. This is what led me to John Conway’s work on cellular automata though.
For me, part of the justification is that these hardware synths are designed to be around for decades and to be serviced. That paradigm is more appealing to pour hours of my life into than what so often happens with software: your favorite app is no longer supported by a new OS, or an update comes along that changes what you liked. There is of course a consumer treadmill for synthesizers as well, and devices that blur the line between digital device and musical instrument, but generally, using a Minimoog in 1979 is the same as using one in 2019.
Yeah, no arguing that. That’s totally a people thing, and it’s not going to change any time soon. Honestly, that’s the most compelling argument to just make something new and let things sound like what they are. Non-linear filters that model analog circuits can sound really great! And I think Audulus has some good ones, and I have no reservations about mixing and matching.
I know what you mean. Currently part of the software that supports both my UltraNova and my Focusrite is 32-bit and won’t run on the next release of macOS. The app that configures my Korg NanoPad doesn’t work properly on Mojave. Stand-alone hardware doesn’t have to depend on an external OS to function. It does seem that fewer and fewer devices are truly independent. Even the “analog” Model D and Neutron have apps. For better or worse, we’re in a connected world.
Regarding the comparison between emulation and simulation. The thing about philosophy is that it often disagrees with received concepts. So, consulting an encyclopedia for clarification can be at cross purposes with the project of making the present better than the past, if you will. In the case of DeLanda’s work, he is disagreeing with classification itself. Sorting things into “kinds” can lead to problematic infinite regressions in foundational descriptions. It is a problem with set theory, isn’t it?
I drew a distinction between “resemblance” and “representation” above for important reasons. In order to represent a concept, language users need to perform certain logical meta-cognitive tasks beyond those of comparison.
The thing is, you took me to task for questioning the idea that art is subjective in the other discussion we had. This implication of a subject/object metaphysics puts you in the camp of Descartes. But there is a very complicated and rewarding story to be told that leads to Hegel. However, in order not to contradict ourselves we have to drop the old metaphysics; or, as Hegel would put it, we could see Descartes as a moment within the larger movement of consciousness becoming self-aware through time. Reason developed. And as we come to understand how, it continues to develop. It is not a large step from simulation to artificial intelligence.
Need to pause here. Notice in the article that the word “emulation” is used by some to signal mimicking. But simulation seems to have a special sense. Simulations seem to generate intel.
Article: “Purists continue to insist on this distinction, but currently the term “emulation” often means the complete imitation of a machine executing binary code while “simulation” often refers to computer simulation, where a computer program is used to simulate an abstract model.”
The distinction between emulation and simulation in the linked article is different from the distinction DeLanda makes that I was referring to. Both distinctions are very detailed, so this isn’t the place to unpack them. I don’t want to invest myself here and then just offend you. Here’s a good tiny description of what DeLanda was onto about 10 years ago, which had been and is still being developed:
“Simulations have become as important as mathematical models in theoretical science. As computer power and memory have become cheaper they have migrated to the desktop where they now take the place of small-scale experiments. A philosophical examination of the epistemology of simulations is needed to understand their new role, underlining the consequences that simulations may have for materialist philosophy itself.” (a blurb from an old lecture poster)
If his work concerns epistemology, then he is somewhat in disagreement with conventions and, usually, their foundations.
So I am on board with the Wikipedia story and do not see an issue, other than the idea that – we can all well imagine – DeLanda has been delving very deep into the idea of simulation. There is a large territory out there.
Is that enough?
I would say that the more Audulus sets itself apart, the more it is acting as a simulator. But not because it is trying to recreate a specific event/experience/object. Digital synthesis is so infinite that it had to harden certain algorithms in order to undergird the field in which the experiments we perform can take place. We do not directly model the analog circuitry. We make parts that behave together like the parts of analog synthesizers behave together, or not like analog synthesizers at all. We might repurpose a few parts or alter those parts. The analog experiments inspire digital experiments. I think what analog provides are closed systems with unique threshold characteristics which, when driven to the right point, call attention to the nature of electrons and how they behave together, how they make choices just at the brink. But the programming there is minimal. You just need to set the conditions and run the synth and it will surprise you. With digital you don’t run up against the chemical makeup of a vactrol. So there is this play going on. It’s all familiar to all of us. But there is something subtle and worth taking care of that has to do with ‘nature’. The analog synth guides us. And, IMO, it sounds amazing because there is no intermediary between the actual infinite complexity of the world and isolated logical systems connected to the world through binary hardware.
The point of the patch above is to say okay, wow. Audulus has nailed this without a team, per se. It has taken a while to get to this point. Imagine the patches we would build if we had a modern supercomputer to do it on. In order to build something comparable to a 70’s synth, we still have so much to learn. But once we can build something like that, our capacity to construct (out of logic) components that have never existed before seems likely. By “we” I mean humanity, since I think it will take a century. The computer is a universe constructor for responsible gods.
I think you have hit upon a fundamental distinction between digital and analog systems. The world is non-deterministic. At its most fundamental level, uncertainty rules. Quantum mechanics informs us that beyond a certain point the universe is unknowable and we can only deal in probabilities. Analog systems exhibit these fundamental uncertainties at a level accessible to us. Digital systems are designed to remove as much uncertainty as possible; ideally they behave in a completely predictable manner. Any attempt to simulate the behavior of an analog system with a digital one has to take this into account by intentionally introducing an element of randomness, which is not a trivial task in systems designed specifically to remove randomness. A digital emulation of a resonant filter is an example that comes to mind. As the Q of an analog filter is increased, the filter becomes increasingly unstable until it begins to self-oscillate. This occurs gradually. Initially the oscillations tend to die away without further excitation, until finally the noise inherent in the circuit is sufficient to induce the oscillations and the filter oscillates continuously. Because the circuit is non-linear, as the intensity of the oscillations increases it eventually reaches equilibrium as the circuit saturates. Modeling this behavior digitally is difficult. It’s not too hard to simulate the filter’s behavior at lower Qs, but getting a similar response as the system becomes unstable requires a much more sophisticated model and is still the subject of much academic work.
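To make the saturation point concrete, here is a toy sketch of my own (not anything from Audulus or a real model): a digital state-variable filter with a tanh soft-clip in the feedback path. The tanh is a very crude stand-in for analog circuit saturation, but it illustrates the shape of the problem: it keeps a high-Q ring bounded instead of letting it blow up, which a purely linear version can’t do.

```python
import math

def svf(x, f=0.2, q_inv=0.02, saturate=True):
    """Chamberlin-style state-variable filter with an optional tanh
    soft-clip on the bandpass state, a crude stand-in for analog
    non-linearity. f sets the tuning, q_inv the damping (smaller = higher Q)."""
    lp = bp = 0.0
    out = []
    for sample in x:
        hp = sample - lp - q_inv * bp
        bp += f * hp
        if saturate:
            bp = math.tanh(bp)   # soft-clip keeps the high-Q ring bounded
        lp += f * bp
        out.append(bp)
    return out

# Excite the filter with a single impulse and let it ring:
impulse = [1.0] + [0.0] * 2047
ringing = svf(impulse, q_inv=0.01)   # very high Q: long, bounded ring
damped  = svf(impulse, q_inv=1.0)    # low Q: dies away almost immediately
```

At high Q the ring decays very slowly and the tanh guarantees it can never exceed full scale; the genuinely hard part, which this sketch dodges entirely, is making the transition into noise-induced continuous self-oscillation behave like the analog circuit does.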
An exact simulation is not an achievable goal given that we cannot predict with certainty how the analog system will behave. However, given sufficient computer performance, we can get arbitrarily close. At some point the differences will be small enough that in practice they won’t matter.
Of course to a certain extent the difference between emulation and simulation is a matter of detail. In our case, nodes are the fundamental building blocks with which we create our designs. They are opaque to us, although we know that there is underlying structure that isn’t accessible. So from our point of view they would be considered simulations. I hesitate to refer to them as closed systems since they have inputs and outputs and so are not isolated from their environment. If we use these nodes to construct a model of a particular synthesizer at the level of functional units (VCAs, VCFs, etc.), we are constructing an emulation, since we are attempting to copy the structure of the synth rather than simply attempting to reproduce its output. Even in the case where you are attempting to emulate the structure of a physical circuit by duplicating its components, there is a level of abstraction involved, since at least in most cases you are not attempting to model the behavior of individual electrons, but are simulating their behavior in bulk.
Without the benefit of familiarity with DeLanda’s work, I see the distinction between emulation and simulation as a blurry one at best. If a simulation is a “black box” system which produces the same output for the same input and an emulation is a system which also produces the same output for the same input but additionally attempts to model the same internal processes, then emulations can be regarded as a subset of simulations.
If one follows the concept of emulation to its conclusion, an ideal emulation would be indistinguishable from the entity it emulates. At this point it would be fair to ask which is the emulation?
Indistinguishability of output is important, but not sufficient to account for comparison in the robust ‘representative’ sense I was on about above. In order for one thing to represent another thing (thing here is an ostensive, not a material distinction), the represented has to be treated a certain way (has to be taken ‘as’ something) by the representer. So there is a very important logical step there. Historically there was a point at which we got away from the account of judgement that made use of ‘likeness’, and replaced it with a story about judgement that is actually modeled on ethics, believe it or not. How can that be?
So, when I look at something it turns out that I am actually making a judgement. I cannot see anything as meaningful (therefore, describable) unless I establish some relationship of intent. While objects do not have intentions in the human sense, we ascribe intentions to things, just as we ascribe intentions to other people. We cannot read minds, but we do. In order to do this – in order to perceive – one has to become, in a sense, responsible for these hunches. This is the ethical turn. Or, rather, this is when Hegel showed that Kant needs to ground statuses in a theory of reciprocal recognition, the most infamous being the master/slave dichotomy.
The master is only a master if the slave recognizes her as the master. So the master is a slave to the slave, in terms of identity. When we apply this to perception, to judgement, the claims we make about the world enter into relations with other claims. You can construct all of the connectives in elementary formal logic out of a single connective like NAND, the negation of a conjunction. So, by putting claims in contrast with one another, we properly define them.
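As an aside, the build-everything-from-one-primitive trick works when the primitive is NAND (“not both”), i.e. negation applied to conjunction; bare negation alone never relates two claims to each other. A quick check in Python (the helper names are mine):

```python
# Every truth-functional connective can be built from NAND alone:

def nand(p, q):
    return not (p and q)

def neg(p):        return nand(p, p)                  # not p
def conj(p, q):    return nand(nand(p, q), nand(p, q))  # p and q
def disj(p, q):    return nand(nand(p, p), nand(q, q))  # p or q
def implies(p, q): return nand(p, nand(q, q))           # if p then q

# Verify the constructions against the usual truth tables:
for p in (True, False):
    for q in (True, False):
        assert neg(p) == (not p)
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
```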
Suppose you have a discussion with someone about politics. Suppose that they take on a view that seems wrongheaded. One approach is to try to think of examples where if one holds the view in question, it leads to outcomes that are clearly undesirable by the person that holds that view. It is now the responsibility of that person to try to resolve the conflict by, say, dropping their belief. They might also question the validity of the comparison, etc.
In the case of perception the same process is at work. The reason we know this is we need to be able to account for errors in judgement. We don’t need to have perfect judgement, so to speak, as Descartes was worried about. All we need is to be able to tell a story about refraction, for example, to explain why it appeared that my body was wavy in the mirror but my body is not actually wavy.
This is a 400 year project which, I argue, is presently coming to fruition. Elementary formal logic provides a space or location to ‘compute.’ Sellars referred to this as the logical space of reasons I believe. Keep in mind, this is not DeLanda’s area. This is just some backwork for making the case that perception is something we can only be responsible for, in the sense that judgements are claims we make that have to get along with other claims we make. When two perceptions are in conflict we can then reassess things. Mechanics do this all day long.
DeLanda’s Assemblage Theory addresses some of this. Open and closed is too binary. So DeLanda talks about territories and borders. When nations are at war borders are tightened and cultural differences within the tribe are less tolerated.
He uses a few of these dynamic examples to illustrate systems. It goes back to Leibniz, this whole/part problem. It is the set theory problem.
But this all totally applies to threshold systems like analog synthesizers. Why? Because when systems reach thresholds they alter their behavior to accommodate change. In rivers water rolls and eddies, in pots it boils. Steam is water’s way of resolving a conflict with heat, just as a snowflake – crystallization – is itself a solution to a conflict. This leads to more detail about phase transitions and a theory of decision. I will spare you.
Just moving back to the GAS subject. Really enjoying this Moog video. What’s up with these oscillators?
One of the things about the reface I like so much is the packaging of certain features. I know we have a drift. But I wonder if an actual 901 oscillator module would be cool, like the other reface oscillators?
I have a 901 recreation for Eurorack and I gotta say the pitch instability was the largest quantifiable difference between it and my other analog oscillators. Also the pulse waves have a very slight slope, but honestly you couldn’t hear that difference so much as barely see it on an oscilloscope, or notice it when another oscillator wouldn’t sync to it.
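That kind of pitch instability is easy to caricature digitally. Here is a toy sketch (the parameters are my guesses, not measurements of a 901): a sine oscillator whose pitch wanders in a leaky random walk measured in cents, so it drifts but keeps getting pulled back toward the nominal pitch.

```python
import math
import random

SR = 44100  # sample rate, Hz

def drifting_sine(freq, n, cents=5.0, rate=0.999, sr=SR, seed=1):
    """Sine oscillator with a leaky random walk on pitch.
    `cents` scales the wander; `rate` < 1 pulls the drift back to zero.
    All numbers are illustrative, not modeled on real hardware."""
    rng = random.Random(seed)
    drift, phase = 0.0, 0.0
    samples, freqs = [], []
    for _ in range(n):
        drift = rate * drift + rng.gauss(0.0, cents / 50.0)
        f = freq * 2 ** (drift / 1200.0)   # drift expressed in cents
        freqs.append(f)
        samples.append(math.sin(2 * math.pi * phase))
        phase = (phase + f / sr) % 1.0
    return samples, freqs

samples, freqs = drifting_sine(440.0, SR // 2)
```

With these settings the pitch typically wanders within a few cents of nominal, which is roughly the scale where an oscillator sounds "alive" rather than out of tune; cranking `cents` up turns it into seasickness.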
In case you want to go further down this road, here are some nice oscilloscope captures of the 901 waveforms at various frequencies:
This uniqueness discussion reminds me of the filter quality of the Microvolt. I don’t have the resolution to display it on an o-scope myself, so I don’t know what it looks like, but I know that the filter in that circuit is by far the most interesting thing I have worked with in the year since I started playing music again and diving into modular. Then I saw a mention in the Sound on Sound article that actually describes scientifically what I am hearing, like what @robertsyrett and @futureaztec were discussing about the oscillator drift/instability in the Moog.
It’s not like it can’t be re-created; however, in some cases you really need to exert a lot of effort to replicate the warmth and standout sound of certain pieces of analog hardware. I feel like we were discussing this in another thread recently as well. Great thread!