
@npisanti what's the status of that log file? :P

@cancel sorry, i didn't have time to do it today, i will do it tomorrow
D=

PS: as additional info, the jitter wasn't in the delta time between the different OSC messages i was sending in the same tick, it was between consecutive ticks of orca

@cancel oh, correction: i just did it now
gist.github.com/npisanti/4a052

i don't know why it seemed a lot tighter this time... can i ask a question? what is the bar at the bottom used for? because if it's for cpu usage, it was maxed out by the OSC messages last time i had jitter

@npisanti @cancel It's the number of outgoing messages MIDI+OSC+UDP combined.

@neauoire @cancel oook, so no relationship to cpu, it maxes out by design when i send 4-5 messages out each frame

@npisanti @cancel yeah, it's just a little thing that helps visualize outgoing data. Not related to CPU burn.

@npisanti out of about 600 output events, only 8 had a jitter above 100 microseconds.

0.000102
0.000125
0.000171
0.000215
0.000328
0.000354
0.000416
0.000888

those are the 8 events that had jitter above 100 microseconds, with the time jitter in seconds (so 0.000888 is 888 microseconds, or 0.888 milliseconds)
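The stats above can be derived from a log of tick timestamps by comparing each inter-tick delta against the nominal interval. A minimal sketch in C, assuming timestamps in seconds (the function name and the sample values are illustrative, not the real log):

```c
#include <math.h>
#include <stddef.h>

/* Count how many inter-tick intervals deviate from the nominal
   tick length by more than `threshold` seconds. */
int count_jitter_above(const double *ticks, size_t n,
                       double nominal, double threshold) {
    int count = 0;
    for (size_t i = 1; i < n; i++) {
        /* jitter = how far this tick's delta is from the ideal delta */
        double jitter = fabs((ticks[i] - ticks[i - 1]) - nominal);
        if (jitter > threshold)
            count++;
    }
    return count;
}
```

With a 100-microsecond threshold (0.0001), this is the kind of count reported above: 8 of ~600 events.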

@npisanti only 30 events had jitter above 100 nanoseconds

@npisanti

your osc receiving code has only 500 microsecond accuracy at best

github.com/npisanti/ofxPDSP/bl

and you use a separate thread for each receiving address, so each address will have different jitter/timing

@cancel i don't use a different thread for each address, there is just one ofxOscReceiver and multiple message buffers (an internal pdsp class). you are right that the code is just 500 microseconds accurate, but if i'm under 1 millisecond i think i'm fine; jitter over 5 ms could be felt, and sometimes i even used a 1-3 ms random note delay as "humanizing". But anyway, if the logs report the jitter is under 100 nanoseconds, that is totally acceptable and the fault is somewhere on my side.

@cancel so next time i feel it's jittery i will lower the ofxPDSP refresh rate and check again, then try to check how my OS is giving priority to threads, sorry for wasting your time

@cancel ( i use a separate thread for each receiving port, but orca just sends to one port )

@npisanti ah ok, i misunderstood

you can tweak the scheduling in orca-c

github.com/hundredrabbits/Orca

set more of these to 0 to make it more aggressive (but it will burn more cpu)

orca-c schedules itself ahead of time (w/ quadratic backoff) and then does a spin for the last 100 microseconds (if it thinks it won't be scheduled again in time) to make it as exact as possible

but it's still possible for the terminal or something to block it, since it's single-threaded (for simplicity)
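The sleep-ahead-then-spin idea described above can be sketched like this. This is an illustrative C fragment, not orca-c's actual code; the names and the 100-microsecond spin window are taken from the description, everything else is an assumption:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

/* Current monotonic time in nanoseconds. */
static int64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
}

/* Coarse-sleep until ~100 us before the deadline, then busy-wait
   on the clock so the wakeup is as exact as the hardware allows. */
void wait_until(int64_t deadline_ns) {
    const int64_t spin_window = 100000; /* last 100 us: spin */
    for (;;) {
        int64_t remaining = deadline_ns - now_ns();
        if (remaining <= 0)
            return;
        if (remaining > spin_window) {
            int64_t sleep_ns = remaining - spin_window;
            struct timespec ts;
            ts.tv_sec = (time_t)(sleep_ns / 1000000000);
            ts.tv_nsec = (long)(sleep_ns % 1000000000);
            nanosleep(&ts, NULL); /* may wake early; loop retries */
        }
        /* inside the spin window: just loop on the clock */
    }
}
```

The trade-off is exactly the one mentioned: widening the spin window (or zeroing the sleep thresholds) tightens timing at the cost of cpu burn.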

@cancel ok, i also reviewed my code; the OSC input in ofxPDSP could probably be improved by starting from the ofxOscReceiver code and tweaking it instead of using it as an object.

For orca-c i will try running orca with realtime priority, as that is an option with the linux RT kernel i'm using; your method of getting strict timing seems fine to me
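Requesting realtime priority on Linux can be done from inside the process with `sched_setscheduler`, or externally with `chrt -f <prio> <cmd>`. A hedged sketch (the wrapper function is hypothetical; it needs root, CAP_SYS_NICE, or an rtprio entry in /etc/security/limits.conf to succeed):

```c
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Try to switch the calling process to SCHED_FIFO at the given
   priority. Returns 0 on success, -1 (with perror) on failure,
   so the caller can fall back to normal scheduling. */
int try_realtime(int priority) {
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = priority;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}
```

On a PREEMPT_RT kernel, a SCHED_FIFO task preempts normal tasks immediately, which should remove most scheduler-induced jitter from the tick loop.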
