what on earth is this?
they're in all kinds of gadgets. they produce 2 signals which, if you were to treat them as X, Y coordinates, would draw a little square for you. thus the "quad" in "quadrature".
these make great rotary encoders because it's about the simplest way to tell which direction the knob is being rotated.
they can be made mechanically, but often they're a little LED paired with two light sensors, with a slotted wheel between them.
my friend @smiffy calls this Gray code… which… IT TECHNICALLY IS. the special case of Gray code reduced to just 2 bits.
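a little sketch of that square, for the curious: sample a sine/cosine pair (the two quadrature channels, 90 degrees apart), threshold them to bits, and you walk the 2-bit Gray cycle. plot the pair as X/Y and the four states are the corners of a square. illustrative python, nothing here is from any particular datasheet:

```python
import math

def quadrature_states(n_samples=8):
    """Threshold a sin/cos pair over one rotation; collapse repeats."""
    seen = []
    for i in range(n_samples):
        angle = 2 * math.pi * i / n_samples
        a = 1 if math.sin(angle) >= 0 else 0  # channel A
        b = 1 if math.cos(angle) >= 0 else 0  # channel B
        if not seen or seen[-1] != (a, b):
            seen.append((a, b))
    return seen

# the four states, each differing from its neighbour by exactly one bit
print(quadrature_states())  # [(1, 1), (1, 0), (0, 0), (0, 1)]
```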
a thing I've been thinking about lately is how it just so happens that the quadrature signal coming out of a rotary encoder is pretty close to the signal you need to drive a stepper motor. if I can get enough attention and energy together I might test this theory out.
QPSK is, so it is written, a very popular modulation/demodulation scheme that builds on the quadrature encoding to do a neat magic trick. in an audio/analog signal, QPSK (quadrature phase shift keying) can transmit 2 bits at a time
it works like so:
1. two bits map to one of four corners in a square, we’ll call one of these corners/bitpairs a “symbol”
2. each of those corners corresponds to an angle (a sine and cosine pair) on a circumscribed circle
3. then for each symbol, cut out a piece of sin(e) pie at that angle
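the three steps above can be sketched in a few lines of python. the Gray-coded corner-to-phase mapping and the rates are my own arbitrary choices, not from any standard:

```python
import math

# step 1: two bits pick a corner (a "symbol")
# step 2: each corner is an angle on the circumscribed circle
PHASES = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
          (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

def qpsk_modulate(bits, carrier_hz=1000.0, symbol_rate=500.0, sample_rate=8000.0):
    """Step 3: emit one 'piece of sine' at the chosen phase per symbol."""
    samples = []
    per_symbol = int(sample_rate / symbol_rate)  # samples in each piece
    for i in range(0, len(bits) - 1, 2):
        phase = PHASES[(bits[i], bits[i + 1])]
        for _ in range(per_symbol):
            t = len(samples) / sample_rate
            samples.append(math.sin(2 * math.pi * carrier_hz * t + phase))
    return samples
```

two symbols in gives two 16-sample pieces out, each starting at its corner's phase.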
now, interestingly, with QAM, mapping your symbols to a 4 × 4 grid on the phase/amplitude plane is not the only possibility. if you feel confident in your transmission medium's fidelity, you can slice it up even finer.
or, carefully map out the characteristics of your medium to find an optimal quantisation.
maybe it could be dynamically negotiated.
maybe the optimal arrangement of points is not a neat grid. fuck around and find out.
okay so, i could try and get a cassette recorder, shitty variety, and try to map out an optimal encoding of QAM.
(look how much the "mo-dem" diagram for QAM looks like a freaking esoteric sigil)
the adaptable nature of QAM means the symbol encodings can avoid regions of phase/amplitude space that decode poorly, and the best frequency/symbol rate can be picked as well. apparently the high end of cassette, with a tascam deck and metal tape, is 16kHz. the safe or typical range is 8-11kHz.
so, let’s say, optimistically, i can get 11kHz on audio tape, which would then translate to 5500 baud (symbols per second). then, REALLY optimistically, assume i can get 4 bits into each symbol. that’s 22 kbits per second, or 2750 bytes per second. beats the pants off the c64 standard 300 bits per second (37 bytes per second)
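the back-of-envelope arithmetic, spelled out. all numbers are the optimistic guesses from above, not measurements:

```python
# optimistic numbers from the thread, nothing measured
bandwidth_hz = 11_000            # usable top frequency on good tape
baud = bandwidth_hz // 2         # ≈ Nyquist limit: symbols per second
bits_per_symbol = 4              # the "REALLY optimistic" 16-point case
bps = baud * bits_per_symbol     # bits per second
bytes_per_sec = bps // 8
c64_ratio = bps / 300            # vs the c64 datasette's 300 bits/sec
print(baud, bps, bytes_per_sec, round(c64_ratio))  # 5500 22000 2750 73
```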
of course, the phase/amplitude encoding scheme isn’t limited to just one frequency at a time, is it? what if it were possible to stack multiple simultaneous carrier waves on the one signal, and separate them out later?
and they’re not limited to quantised points either? each carrier wave could encode a continuous X/Y position in the phase and amplitude of the wave. the signal could be encoded off a dual analogue stick playstation controller or similar.
i mean, i don’t really know the limits
So here are some FASCINATING experimental results from attempting a 64 QAM encoding on average audio tape
1. something needs to be done in the encoding to account for wildly variable playback speed and amplitude.
2. ~4000 baud but a 2480Hz carrier wave? i thought the symbol rate needed to be smaller than the carrier?!
3. he could only get reliable frequency response up to 9kHz
4. He thinks custom pixel animation formats aren't as cool as video.
i am going to need to get a cassette recorder aren’t I? should i get a vintage or new one? i almost feel like i should aim for lowest common denominator for these experiments.
the linked experiments use a pilot wave on the second stereo channel for timing. i wonder if a multicarrier setup would work as well: have the timing signal on the same channel but at a separable frequency
this guy actually succeeded in getting video onto audiotape
the discussion on the comments section of hackaday is interesting, aside from all the usual hacker dick measuring bullshit. someone mentioned “nbtv”, hmm
another big takeaway from these experiments is that, for specifically the case of recording and playing back on cassette, amplitude and phase modulation are challenging, since those are precisely the unreliable dimensions of the medium.
so is frequency, really. but it’s all a matter of degree. first off, 64 strikes me as incredibly ambitious. really what you want to do is pick a constellation where you measure the range of a decoded symbol, and leave enough margin around that range.
but, it seems that, if we have dsp on our side, we *can* decrease that range by correcting the phase and amplitude fluctuations.
it seems also that the system could be made a bit more robust by not looking so much at the absolute position of each symbol quantisation, but at the *transition* vector from one symbol to the next, so that, in the state machine, you can eliminate transitions through 0, or very short transitions from one symbol to the next.
at least, that is how some schemes deal with phase, it seems. the symbol angle is not encoded as the absolute phase of the carrier, but as the phase difference between the current cycle and the last cycle. but even that could be unreliable on a cassette tape. hmm. I'm buying a recorder so I can do experiments!
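a toy sketch of that differential idea in python: each symbol is sent as a phase *change* relative to the previous symbol, so the decoder never needs to know the absolute carrier phase. the particular 2-bits-to-quarter-turn mapping here is just an assumption for illustration, not from any standard:

```python
import math

# each 2-bit symbol is a phase *delta*, Gray-coded around the circle
DELTAS = {(0, 0): 0.0, (0, 1): math.pi / 2,
          (1, 1): math.pi, (1, 0): 3 * math.pi / 2}
INV = {round(v, 6): k for k, v in DELTAS.items()}

def dpsk_encode(symbols, start_phase=0.0):
    """Accumulate deltas into a list of absolute carrier phases."""
    phases, phase = [], start_phase
    for s in symbols:
        phase = (phase + DELTAS[s]) % (2 * math.pi)
        phases.append(phase)
    return phases

def dpsk_decode(phases, start_phase=0.0):
    """Recover symbols from cycle-to-cycle phase differences only."""
    out, prev = [], start_phase
    for p in phases:
        delta = (p - prev) % (2 * math.pi)
        out.append(INV[round(delta, 6)])
        prev = p
    return out
```

note the decoder never compares against an absolute reference, only against the previous cycle, which is the whole point on a wobbly medium.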
What if the DTMF approach, of encoding symbols into chords is better?
in the DTMF scheme, the two disjoint (non-overlapping) sets of frequencies are played in two-tone "chords", to form a coordinate in a 2D matrix of, I thought, 16 symbols, but it turns out 26+4+10+2 = 42 symbols in the standard. (Apparently 4 of those are reserved for the military. holy balls)
that's standard dtmf. What if you played 3 frequencies together as a harmonic chord from a musical scale, and encode bits in
1. the presence or absence of a note in the chord
2. the interval the chord is moved up or down from the previous chord
3. the interval of the notes within the chord
4. whether the chord is louder, quieter, or the same volume as the last chord.
since some of these are ambiguous in absolute terms, encoding and decoding is done with a finite state machine.
that's just an idea. maybe it would avoid some of the problems with the instability (in absolute terms) of audio cassette by relying less on an absolute clock and more on relative measurements and self-synchronisation. it could even have a variable symbol rate, and may even accidentally sound nice sometimes.
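a toy of idea 1 only, in python: three fixed chord slots whose presence or absence carries one bit each, so each chord holds 3 bits. the C major triad is an arbitrary choice of mine; intervals, chord movement, and volume (ideas 2-4) are left out:

```python
import math

# three chord slots; each bit switches one note on or off
NOTES_HZ = [261.63, 329.63, 392.00]  # C4, E4, G4 (arbitrary choice)

def chord_samples(bits3, n=256, sample_rate=8000.0):
    """Synthesise one chord-symbol: sum the sines whose bit is set."""
    out = []
    for i in range(n):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * f * t)
                for f, b in zip(NOTES_HZ, bits3) if b)
        out.append(s)
    return out
```

a full scheme would string these chords together and run the relative measurements (ideas 2-4) through the state machine mentioned above.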
@mathew a friend of mine had one (or at least something like it?) when I was a kid. can confirm: it’s shit.
@zens to get 11kHz frequency response you would be looking at a decent Japanese hi-fi grade cassette deck in *good* condition from the late 1970s onwards; one from the 1980s or 1990s is more likely to be affordable. But these are getting harder to find and can fetch quite high prices. of course you could also use it to listen to music with - but anything other than ferric cassettes is also fetching premium prices (maybe chrome tape is no longer made these days?)
@vfrmedia even more impossible in australia. not a good place for retro nerds unless you’re mister money bags. shipping anything here is $$$$
@zens You might be able to get cheap SMPTE hardware for your timing track, as digital made it pretty well obsolete.
@grumpysmiffy what kind of resolution does it have? it’s needed more for like, clk signal more than for absolute timing. being able to seek on the tape could be cool though
@grumpysmiffy yeah, would only be good for coarse time then. the timing signal i am talking about would need to closely match the carrier wave
@zens a lot of cheap tape decks are mono (as you probably know to watch out for), and with low grade tape there can be a lot of bleedover between channels. therefore a single channel modulation scheme might be the way to go there?
@palomakop i had actually forgotten that before i got excited and decided to buy one that looks like it’s mono.
lowest common denominator!
@zens I literally just finished a course on exactly this at the university a few days ago (and I got a 3/5 grade, so I guess I'm the most authoritative expert on the subject that exists).
Every configuration of signal to noise ratio (which here is determined by your tape), modulation method (e.g. M-ary QAM), and symbol rate, leads to a specific bit error rate (that can be calculated), and you need to balance those choices out to get an error rate that you're OK with.
@zens I don't remember the formulas by heart, but basically if you know the signal to noise ratio of your tape, which modulation you've chosen to use, and at what rate you're putting symbols on the tape, you can calculate at what probability any given bit read off the tape will be read incorrectly. Then you can for example lower the symbol rate, use an error correcting code, use a different modulation (or why not all of these?) to get the probability of errors arbitrarily low.
@zens I'm sincerely sorry if I'm just here explaining something you already knew, I only wanted to bring up this specific relevant part of the subject (the error rate/probability) that you hadn't mentioned yet in the thread
@vurpo it’s interesting! would be good to get the formula, though maybe it’s just multiplying those variables together
@zens It was one specific formula for each different modulation, and they were definitely more complicated than just multiplying (or I would have remembered them)
@zens I only found http://www.comlab.hut.fi/opetus/333/reports/Jing_error_probability_of_digital_signaling.pdf which is just a set of lecture slides, but it was a bit hard to find good resources with concrete answers. Some key concepts to know when reading it (look these up on Wikipedia) are AWGN or additive white Gaussian noise (the most common mathematical model of a channel with noise), the Q function (a probability-related function), and "Eb/N0" (a sort of normalised SNR measurement, also the Wikipedia page really is named that).
@zens the probability of *symbol* error is just the probability that noise in the channel will cause a transmitted symbol (e.g. one of the 16 points in your QAM constellation) to be received as a different one of those points, and using your code (e.g. Gray code) you can estimate how many bits inside the symbol (since one symbol in that case contains 4 bits) would be changed by that error
(and channel=tape, transmitted=recorded, received=read back)
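for the curious, a sketch of the error formulas @vurpo is describing, for square M-QAM over an AWGN channel. these are standard comms-course approximations, not anything measured off tape, and the Gray-coding bit-error estimate is the usual high-SNR shortcut:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_symbol_error(M, es_n0):
    """Approx. symbol error probability for square M-QAM; es_n0 is linear Es/N0."""
    a = 1 - 1 / math.sqrt(M)
    q = q_func(math.sqrt(3 * es_n0 / (M - 1)))
    return 4 * a * q - 4 * a * a * q * q

def qam_bit_error(M, es_n0):
    """With Gray coding, roughly one bit flips per symbol error."""
    return qam_symbol_error(M, es_n0) / math.log2(M)
```

plugging in a tape-measured SNR would give the error rate to trade off against symbol rate and error-correcting codes, as described above.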
@zens The simplicity of 2-bit Gray is quite beautiful. When I finally get a working installation of the old Xilinx FPGA design software, I'll do an up/down counter with a decimal decoder using simple logic gates that runs off one of these. It's like schematic capture, but allows you to emulate it in use.
What I'd REALLY like to do with this, is make an electro-mechanical version, but I think I missed the boat on getting hold of old telephone exchange gear.
@zens Back at school, I visited Plymouth telephone exchange, late in the transition period, where there was still one room with Strowger switches. The sound was straight out of a Steampunk fan's wet dreams.
The electromechanical handling of loop disconnect dialling is quite amazing. But an absolute shitter to maintain! Tommy Flowers, who built all the Bletchley Park machines was seconded from Post Office Telecommunications.
This might be orthogonal to your point, but it's interesting…
I've worked with A/D systems like this. From a hardware perspective it isn't two bits. The light sensors don't return 'on' and 'off' – they return a voltage that rises and falls.
From a software perspective, this means the A/D converter returns an unsigned int, which is sampled at a high rate looking for what are called 'rising edge' and 'falling edge' within a time domain. The code then converts these to bits.
Basically the input signals from the light sensors produce a sine wave of voltage as the slit approaches them, is over them, and leaves them.
The A/D converter could include TTL logic that does the rising/falling edge detection and then sets a signal line high and low, but it would be logic unique to the whole encoder system. This is too expensive, as is dedicating a small CPU to the task.
So it always ends up being the job of the embedded systems programmer to write the code.
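that software side can be sketched in a few lines: hysteresis thresholding on the two ADC channels, then the usual quadrature state table to count up or down. the threshold values are invented for illustration, and a real embedded version would be interrupt-driven C rather than python:

```python
# hysteresis thresholds on a hypothetical 10-bit ADC (made-up values)
HI, LO = 600, 400

def to_bit(sample, prev_bit):
    """Rising-edge above HI, falling-edge below LO, else hold state."""
    if sample > HI:
        return 1
    if sample < LO:
        return 0
    return prev_bit  # in the dead band: keep previous bit

# quadrature transition table: (old AB state, new AB state) -> step
STEP = {(0b00, 0b01): 1, (0b01, 0b11): 1, (0b11, 0b10): 1, (0b10, 0b00): 1,
        (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1}

def decode_position(samples_ab):
    """samples_ab: iterable of (adc_a, adc_b) readings; returns net position."""
    pos, a, b, state = 0, 0, 0, 0
    for ra, rb in samples_ab:
        a, b = to_bit(ra, a), to_bit(rb, b)
        new = (a << 1) | b
        pos += STEP.get((state, new), 0)  # unknown jumps count as 0
        state = new
    return pos
```

one full Gray cycle in one direction counts +4, the other direction -4, which is the up/down counter behaviour mentioned earlier in the thread.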
@jackwilliambell i don’t really have an overall point, i am just thinking out loud
that’s a fair enough point. i am kinda dumb about circuits. nevertheless, there seems to be a lot that can be done with a quadrature signal without even trying to convert to a digital signal, isn’t there.
looking at the stepper motor drive signal diagrams, it looks like i can make that by inverting A and B coming off a rotary encoder to make C and D. is that naive?
Surprise! Stepper motors operate on a sine wave too. I know the diagrams all show a square wave, but in actual fact the power supply kind of ramps voltage up quick, drops to a plateau it holds a moment and then (relatively) more slowly drops back down to zero.
The kinds of things you are thinking about would work if the hardware matched the abstractions we use to understand them. But, like much software, they are leaky abstractions.
Doesn't mean you can't do it all in code though.
@jackwilliambell I'm trying to learn how the world of analogue computing works, which is a world of sine waves and physics. I realise, of course, that in an electronic circuit there's no such thing as a square wave: it's physically impossible.
if you're careful, you can get kinda close, but you're more likely to get something like this:
@jackwilliambell and so what I proposed was, how much can i do without the microprocessor? could I generate the appropriate stepper motor driver signal without one?
The answer is: you can with sufficiently clever circuits. However, it would be an 'art project' in the sense you couldn't make a competitive product with it.
Ever since the 1980's it's been cheaper to use minimal circuitry with the cheapest CPU capable of handling the load.
Now you're moving out of my knowledge realm. I know just enough electronics engineering to understand circuit diagrams and make educated guesses as to what happens in operation. But I'm not an electrical engineer.
I've worked with them though. They write REALLY crappy code when they are trying to bring some hardware up. Then they bring me in and announce that I don't have to worry because they already did the interrupt handlers and shit.
Yeah… Right… Pull the other one.
@jackwilliambell I need to get a bit less lazy and get more hands on. I was doing arduino projects for a while when my pocket chip was working, but that stopped working and now I don't have a good way of reflashing it, or reprogramming the arduino in a way that makes me feel safe. (I don't want to plug my shitty electronics fuck ups into my primary computer. seems like a bad idea)
But I get a bit overwhelmed in watching the video tutorials when the person makes seemingly arbitrary decisions
@jackwilliambell use a 380 resistor here to set the circuit into the down state, pull up the pullout diode to reverse the electron flow in the dimorph circuit, to step down the step up.
Also, imagine APIs WRITTEN IN C, NOT ASSEMBLER, that require you to check if a register is locked, if it isn't you lock it, change it to a pointer to a structure containing the API arguments, unlock it, and then throw an interrupt.
Really. And, since I'm not allowed to change the interrupt handler myself to use a const pointer to a memory structure I end up writing hundreds of function wrappers for each variation.
BTW – there's a reason for the whole lock routine. But that reason makes no sense on a single core CPU. Only the EE's aren't software people. They know all this, but they also know how they learned to do it in school, which was the safest way possible, and that's how they do it.
You know, they never actually do make that argument. They know every new product is a complete rewrite anyway.
Which might be one reason why their APIs suck so badly, even the ones that don't use interrupts.
Seriously, Embedded development is a whole other world.
Also FYI: I used 'lock' above. The EE's don't call it 'locking'; the correct term is 'latching'.
It sounds really weird when someone with a heavy German accent says 'latching'. You want to come to attention!