Video Synthesis

Just want to chime in here @slupchips. Nice to see a new avatar on the forum. I think there is a lot of possibility here, and it’s about digging into it, sharing your thoughts, and evolving through limitations and challenges transparently. Why? Because Audulus 4 is being coded in C right now, we have DSP heads working in the background developing tools, and there is some movement toward standardizing the library to make syncing and modulation easier to understand and interface with.

For example, there are tools to grab the signals from stringed instruments, which can then be manipulated in Audulus but also brought out to analog CV with a DC-coupled interface.

Like, I am thinking about a projector so that, say, if I were to get into layered looping, I could affect a video signal by playing the instrument, while also capturing and evolving the loops. Sometimes you hit walls with ideas, but it is possible to then limit the scope and still advance the self-learning and community understanding by sharing your process. I would like to see more of this in whatever humble way it comes about. For me, it leans more toward “look what we are making” than it does toward “look what I made.”

Moving further, I think that instead of just getting shapes to morph with filter sweeps, there is room for the possibility that the visuals will actually bring the mind closer to the sound of synthesis by cuing our attention to the subtle detail present. In this paradigm, there is more onus on the listener to bend to the work of appreciation — but the bending becomes alluring because the logical relationships between perceptive modalities will require fewer missing premises. Sorry if that sounds heady — sometimes a lack of intelligence comes through in the humility of being unable to express it in plain language.

1 Like

Does anyone have anything to say about the feasibility of integrating a short throw projector into a performance?

I’m actually looking at $100 refurbished units on eBay.

Sorry I took so long to respond to your question.

K Machine can be a good tool for creating video loops for use in TouchViZ. Knowing the bpm of your video loops allows you, to a certain degree, to adjust the playback speed of the clips to match the tempo of your audio, although clips created as some multiple of the audio’s tempo are easier to work with. K Machine can definitely be used to create these sorts of loops.

One technique to generate video loops for use in TouchViZ is to take a video clip, crop it to half the length of the loop you want, copy it, and reverse the copy in an app like LumaFusion, so that the original and reversed halves can be saved together as a seamless video loop which can be manipulated in ways analogous to perfect-tempo audio loops. You could, for example, create evolving visual patterns by selecting video loops whose lengths are not evenly divisible by each other (e.g. 3 and 5) and then use the same technique for audio elements, like when the kick and snare in a track are triggered. Even if you’re not using the same sort of technique in your audio, it’s still a good way to create some visual variation while also maintaining some common visual elements over time.
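The arithmetic behind these two tricks is simple enough to sketch in Python. This is just an illustration of the idea (the function names are mine, not from any app): the forward-then-reversed “palindrome” ordering makes a clip loop seamlessly, and the least common multiple tells you how many beats pass before two odd-length loops line up again.

```python
import math

def palindrome_frames(frames):
    """Append a reversed copy, minus the shared endpoints, so the
    clip plays forward then backward and loops without a visible seam."""
    return frames + frames[-2:0:-1]

def realign_period(*loop_beats):
    """Number of beats until loops of the given lengths restart together."""
    return math.lcm(*loop_beats)

clip = [0, 1, 2, 3]             # frame indices of the half-length clip
print(palindrome_frames(clip))  # [0, 1, 2, 3, 2, 1]
print(realign_period(3, 5))     # 15 beats before both loops realign
```

So a 3-beat loop layered over a 5-beat loop gives you 15 beats of evolving combinations before the pattern repeats, which is where the visual variation comes from.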

Since you’re working with looped video clips, you don’t really need to worry about syncing up with the audio so much, as the MIDI control of the effects in TouchViZ will create the audio/visual synesthesia effect.

You can use a couple of approaches to synchronize MIDI from other apps with TouchViZ. Since TouchViZ doesn’t have its own virtual MIDI port, you may find situations where you need another MIDI app like MidiFire to act as a bridge: you send MIDI to MidiFire, then send it back out through MidiFire’s virtual output port, which TouchViZ can listen to.

An app like FAC Envolver can use the volume level of an audio signal over time to generate MIDI CC messages to control TouchViZ. You could adjust this output to get a range of control that works well with the clips you’re using. Alternatively, you could use parts of the MIDI controls that drive your sound chain to also control TouchViZ, once again scaling the output range to work most effectively with the TouchViZ controls.

You could use an app like AUM to host multiple synth and MIDI generating apps so that you have multiple MIDI outputs to generate controls that correlate with multiple sound chains or elements of your music.

Using TouchViZ as part of how you’re creating a music video, you could use an app like LumaFusion to combine multiple layers of TouchViZ output into a more nuanced video with a wide variety of visual effects that reflect your musical content. With this approach you can use loops or regions and tracks of a song along with corresponding layers of TouchViZ performances. Tweaking the placement of the audio in the video’s timeline is something you’d experiment with to achieve the best results. These sorts of layered options and flexibility would not really be possible in a live performance.

You can use LumaFusion to stretch or shrink clips, to a certain degree, to match up with the bpm of your music.
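Working out how much to stretch is just a ratio: how long the clip is now versus how long an exact number of beats lasts at your tempo. A quick sketch (the function name is mine; LumaFusion expresses this as a speed percentage, so multiply the rate by 100 there):

```python
def stretch_rate(clip_seconds, beats, bpm):
    """Playback-rate multiplier that makes a clip span an exact number
    of beats at the given tempo (>1 speeds the clip up, <1 slows it down)."""
    target_seconds = beats * 60.0 / bpm
    return clip_seconds / target_seconds

# A 2.2 s clip that should cover 4 beats at 120 bpm (i.e. 2.0 s):
print(stretch_rate(2.2, 4, 120))  # 1.1, i.e. play it 10% faster
```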

Using blend modes with masking techniques, where you combine more geometric shader-output clips produced with something like K Machine with more representational clips, can be a nice way to superimpose the visual musical changes onto whatever sort of imagery you like, either at the level of TouchViZ or during video editing in an app like LumaFusion. You could, for example, take a highly audio-reactive K Machine shader, drive it with your vocal track, and add the result as a blend layer in LumaFusion over more region/rhythm/pattern-based content recorded in TouchViZ.

Since TouchViZ only deals with the visual side of things, you can adjust the placement of the audio in the video until it lines up well with the musical changes going on. In addition, you may have audio that you use to generate some visuals and then swap it out for other audio that differs in timbre, for example, but shares other characteristics like its rhythmic patterns.

It’s certainly possible to set up MIDI triggers which you control manually but which fire specific or semi-random patterns on the beat or at the next measure.
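The scheduling behind that kind of quantized trigger boils down to finding the next grid boundary after the moment you press the button. A small sketch under my own naming (none of this is from a specific app), assuming a 4/4 grid:

```python
import math

def next_boundary(t, bpm, beats_per_bar=4, quantize="beat"):
    """Time in seconds of the next beat or bar boundary at or after t,
    for deferring a manually fired trigger onto the grid."""
    unit = 60.0 / bpm                 # seconds per beat
    if quantize == "bar":
        unit *= beats_per_bar         # seconds per bar
    return math.ceil(t / unit) * unit

# Button pressed at t = 1.3 s, tempo 120 bpm:
print(next_boundary(1.3, 120))                  # 1.5  (next beat)
print(next_boundary(1.3, 120, quantize="bar"))  # 2.0  (next measure)
```

A performance app would hold the triggered pattern until that timestamp, which is why the result always lands on the beat even when your finger doesn’t.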

If you have multiple iOS devices, you could use Link to tempo-sync MIDI and audio sources on one device while watching TouchViZ being affected on the other.

Once Audulus 4 is released with AUv3 and MIDI I/O, I think it’d be an excellent tool for these sorts of projects due to the tight control you’d have over associating sound control sequencing to visual MIDI control sequencing.

There’s certainly room for improvement in terms of VJ apps having better sync with audio transport controls the way K Machine does.

The takete app offers a lot of audio/video control in real time, but unfortunately it only supports Core MIDI control without any virtual MIDI inputs.

If you like TouchViZ, you might want to check out the Glitch Clip app, as it has some similar workflows but slightly different approaches and results, which would also work with the techniques described above.

Play with clips having different sorts of characteristics, like size of details, changes in color over time, repetition, changes in transparency over time, black-and-white designs, grayscale designs, textures, and shapes, to develop a sense of style and create clips that facilitate it in conjunction with the blend modes and special-effects options in TouchViZ. As with audio resampling, you can do the visual equivalent by importing clips you exported as recordings from TouchViZ back into the app, layering up visual effects for the more pattern-based elements of your music.

5 Likes

If anyone is looking to get into rack-based video synthesis, the Gieskes 3TrinsRGB+1c module looks like a cool way into video synth performance. It’s currently on sale, down about 15% from its normal price, which makes it a nice entry point. :slightly_smiling_face:

2 Likes

@futureaztec I thought you might like to see this video I just saw from Perfect Circuit, which demos the Vidiot video synthesis device from LZX paired with the Elektron Octatrack MK II

I found this to be a neat little demo! :smiley:

4 Likes

@futureaztec you wanted a video synth that was something more like a visual generator for the sounds you were creating, right? If I understood your explanation from above, that is what you were looking for, and I think this Kickstarter project is EXACTLY what you described. Check out the demo video in the project backing page.
https://www.kickstarter.com/projects/critterandguitari/eyesy-video-synthesizer/rewards

1 Like

Yes, that really covers what I was thinking. I like the video out options and the MIDI in, the graphics are appealing, and the fact that it is dedicated hardware makes sense. The price is a bit high, but I tend to look at gear over a 5-year period. So maybe I will find ways to earn some extra money in my spare time.

I will say one thing, though. Moving forward, I am deeply curious about the idea that visuals could have a more direct relationship with synthesizing sounds, such that the two modalities reveal each other through algorithmic experience. The question being: if hearing a synthesizer is an experience of the algorithms (digital/logical or analog/logical) — differential tensions — could there be video synthesis that actually cues the mind toward a finer resolution of hearing with respect to understanding how the algorithms relate? Now, on the digital side (Audulus) we are, in a particular sense, modelling analog modular synthesis when we remain at the module level. It’s simulation insofar as the actions performed in our patches are predictably similar to the outcomes one would get when performing similar patching techniques on an analog modular synthesizer. Of course, there is a 4-dimensional interplay between digital and analog piggybacking, but, it seems to me, the digital realm requires the analog discipline in order to ground the practice away from bad infinities. Again, the limitations of analog experimentation provide the game space, which digital adopts as a limit-world so that constrained expressions can provide positive infinities.

With respect to the inter-modal (eyes and ears), I want to know what can happen beyond “representation.” It almost leads to the thought that simulation is in fact a deeper type of correspondence than representation because the link is not just the willingness to “take” or “treat” something “as” something, but there is a further (retrospective) condition known as “similitude” — which predates the dawning of representation in art history — that gets picked back up and reintegrated.

I think that the Turing test, so to speak, would be whether the experience of hearing synthesizers, while watching the algorithms dance visually, would be perceivably improved for the listener.

1 Like

I found an app for iOS that seems to do some cool animations driven by MIDI events. From the manual’s description, it sounds like it can be linked to other iOS devices over a WiFi connection. I haven’t figured out if/how the app’s internal music can be turned off, but the animations are kinda neat. For a completely free app from Yamaha, this seems like it might have some performance potential and is, at the very least, worth exploring. Yamaha Visual Performer :star_struck::partying_face:

2 Likes


I feel pretty strongly about this also, and it became more readily apparent when Bitwig announced “the Grid”, which basically seems to be a direct copy of all of Audulus’ functionality, just slightly altered and built into their DAW. I firmly believe this was not an original idea they just happened to have, and it is further proof of what you just mentioned: Audulus is ahead of the curve and seems to be the future of independent music making.

There are now all kinds of things becoming available that put the consumer in charge of developing their own audio processing chain, instead of counting on someone to read the market and decide to make a single-purpose VST or AU plugin for your DAW. It is more value for your money, and the possibilities are truly endless this way.

Granted, it will not completely eliminate other plugins, as there are still things that can be hard to replicate, but the number of those things is significantly reduced by being able to make your own, or go to the forum and download one that someone else already made. Not much else really compares to the freedom that is offered to the users of this app and forum.

Now that I am thinking about it, there are plenty of great forums, and plenty of great apps out there, with a lot of knowledge and capability on offer, respectively. However, Audulus is the only one that, for a reasonable price of entry, comes with free membership in a generous community of like-minded individuals working together to make the app better for all. Nothing else I can think of matches the combination of value on offer here.

I might mention that PD, Max/MSP, and Reaktor also fit into much the same category as Audulus. I’m not actually sure which came first, not that it really matters in the end. There are many very fine Max/MSP-based plug-ins for Live. I think the only thing holding Audulus back at the moment is the lack of a decent plug-in version. Hopefully A4 will remedy this.

1 Like

That is a good point. I am aware of those things and I also appreciate their existence, BUT they each have more significant shortcomings, IMO. I really wanted the last paragraph of my post to stand out the most, as that is the most important aspect of it for me.

Reaktor is fun to work with, and there are a lot of cool things you can do with it, but, similar to Max, there seems to be a significant number of people out there intent on profiteering off their contributions to the platform, which degrades some of its goodness. Reaktor is also a giant CPU and memory hog, in my personal experience, and I found it impossible to run inside a DAW as a plugin with Ableton, Reason, and Logic Pro on my 13” 2015 MacBook Pro.

Additionally, of all of those you mentioned, none has a (fully functional) application that works the same, or close to it, on iOS as it does on Mac. I know PD has a number of off-brand, open-source iOS applications that can run pre-built patches, but you cannot build anything on your iPad or iPhone like you can with Audulus. At least that was still the case a few weeks ago when I was browsing the App Store looking for similar applications.

I could be ill-informed, but there doesn’t seem to be anything close to the type of positive community for Max, Reaktor, or PD that can contend with the level of helpfulness the Audulus forum offers at the low cost of a couple of minutes registering your email and verifying it is legit. That free resource is what puts this app over the top for me, and makes it an incredible value when you consider the cost of entry against what you get in return. I guess that is more about the people who use it than about the app itself.

This is all just my perspective, obviously, and you are right, those other apps are contenders in the same space, but I do feel strongly about this application being the best, in spite of a few disappointing shortcomings. :slightly_smiling_face:

@stschoen I meant everything I said with complete respect for you and what you brought up. You had good points, and I hope that I didn’t come off as being argumentative or aggressive while elaborating about my thoughts. I realize all those things you mentioned are true, but I was just trying to clarify my point of view so you, and anyone else reading my post wouldn’t think I was some blissfully ignorant fanboy with nothing to back up the statements I was making.

The Reaktor community is deep. I made an album using Reaktor Blocks and one of the builders bought my album. That was a very special moment for me. There is a lot of warmth there.

Also, I would say, and I bet other people would agree, that Audulus is a satellite :satellite: of both the eurorack scene and this DSP family. People are giving away their ideas — not all of them. Monetization is a grey and varied subject. But, yeah, Reaktor!

@stevo3985 I didn’t find your comments at all inappropriate; in fact I wholeheartedly agree. I never particularly cared for Reaktor, or any of the NI products for that matter. Like you, I find them overly CPU intensive, and their UI and documentation leave much to be desired. Not a user-friendly experience at all, although their audio quality is top-notch. Max and PD are much better regarding efficiency and offer some key features not available in Audulus at this point, particularly the ability to specify the order of execution and the separation of control and audio signals, which can have some performance advantages since the control signals are evaluated at a much lower frequency. I started my audio DSP journey with PD largely because it was free, which is always a strong incentive. I found it to have a rather long learning curve, and the UI was aimed more at a programmer than a performer. Max can provide a much more professional UI, but it requires a fair bit of work to do so. When I discovered Audulus I never looked back. I have Max4Live and have done some work using it, but I find it tedious at best.

All things considered, I still prefer coding in Audulus. The Audulus UI is best in class IMHO, and I’m willing to forgo the advantages of Max for the simplicity and flexibility Audulus offers. In addition, as you point out, it’s the only app in this category offering a complete package on both iOS and macOS. We all have our favorites for the wish list, but for my particular workflow the biggest obstacle to using Audulus is the lack of integration with my DAW. I have developed an acceptable work-around, but it’s a hack at best. A stable, multi-channel AU, preferably available on both platforms, would go a long way toward removing those last few barriers. I know that @Taylor has that as one of the goals for A4, so I’m hopeful that one will be forthcoming in the future.

Personally I don’t have an issue with monetization per se. Many of the Max based Live plug-ins are commercial products. If you are talented enough to create a piece of software people are willing to pay for, regardless of the programming environment, I see no reason why you shouldn’t profit from your work. In addition Max/MSP is an expensive piece of software so in some cases people may simply be trying to cover that cost. I will agree that it can sometimes impede the free flow of ideas and interfere with that very necessary sense of community. I have very little experience with the NI community, and only a passing familiarity with Max/MSP and PD, but I have not had any issues when I have needed help with Max or PD. I have never been a fan of a similar approach for Audulus since the ability to open and inspect a module and perhaps modify it or adapt it to one’s own needs is a key element of the Audulus experience. The combination of an open platform and a supportive community makes Audulus an excellent environment for learning about audio DSP as a creative tool.

3 Likes