We wrote a tutorial with videos and #FOSS source code for the technique we use to live edit our C++ game engine: Runtime Compiled C++
We use RCC++ to speed up development of our in-house voxel game engine. It's similar to Unreal's Live Editing, but for any C++ code. Since many gamedevs develop engines in C++, I thought you might be interested. Here are the links:
Very Cool :)
All stuff I’ve done before of course, so not hard to add, but I think end users will like the spectrum analyser.
Took a break from my own stuff to add some visualisations to Sonic Pi and fix up the Windows build. Interesting learning experience.
Nice bit of live coding - I like the simplicity of Sonic Pi, and this style of ambient music.
Experimenting with an Orca mode in my Vim-like editor. It should work quite nicely, and make a nice sequencer option for the live coding tool...
There's a lot the editor could do here, such as rendering the operators as icons or animated graphics, since I control how everything looks. Interesting work for the future...
This person is a wizard and I *really* appreciate the amount of effort put into documenting the design and implementation struggles of their craft. https://mitxela.com/projects/flash_synth
Started adding settings, etc. over the weekend; lots to do to make the tool something everyone can use, and not just me.
Audio processing path now looks like this:
- Sample Stereo input
- FFT; magnitude, with temporal blending and convolution filtering to smooth it out.
- Bucketing into 4 'coarse' frequency buckets for a vec4 shader input
- Transfer to a texture containing 4 rows (Spectrum Analysis * 2, Raw Audio * 2)
Next: detect the main frequencies; it would be cool to show a color in the visuals based on the predominant note or even chord.
.. it's a hard problem because you are always 'late'. You get the audio after the soundcard does (I suspect), or at best at the same time, and then you still have to FFT, upload a texture, etc. A stereo FFT on a 48kHz audio channel is a tiny amount of work though. But what if you only get a buffer of audio every 10ms or so? Then I probably have to calculate how far into the audio buffer each rendered frame falls and generate the FFT from there. This is tricky stuff.
Working on latency between audio and visuals. In this profile you can see the problem on my laptop: 50ms to render the frame is too long. On my main machine I got it down to 16ms, before realizing I was synced to VBlank and that my scene contained a massive number of polygons due to over-subdivision of the geometry.
Got it further down to about 1ms/frame but still mulling over sound card settings, buffer sizes, etc.
Stereo FFT up and running. Lots of fun to play with ;)
I added a way to import ShaderToy shaders using their API. It's an easy way to make the tool instantly useful to people, and a great resource for techniques. C++ isn't the nicest language for web traffic, but I made it work using a couple of open-source libraries.
The original shader is here:
To make it work I just have to splice a list of uniforms into the shader; which of course is what the ShaderToy site does behind the scenes.
Here's a deliberate refinement of the idea; a movie this time.
I save every coding 'accident'. The nice thing about graphics programming is that the accidental stuff is often more interesting than the intention ;)
The result here is due to a faceted torus being modulated by a sine wave around its circumference. Animated, it looks a little like a space 'butterfly' ;)
Revel in the marvels of the universe. We are a collective of forward-thinking individuals who strive to better ourselves and our surroundings through constant creation. We express ourselves through music, art, games, and writing. We also place great value on play. A warm welcome to any like-minded people who feel these ideals resonate with them. Check out our Patreon if you'd like to support us with a donation.