Digital signal processing (DSP)
Introduction to digital signals
Continuous vs discrete
In order to understand the benefits and limitations of digital signal processing (DSP) it is important to understand the distinction between continuous and discrete signal representations. In the Synth DIY world, much use is made of the terms analog and digital without any clear explanation of the very different assumptions that underpin those terms.
A signal can be defined as any function that conveys information about the state of a physical system. This is usually represented as a variation of values over time or space. Signals are represented mathematically as functions of one or more independent variables.
Continuous-time signals are defined across a continuum of time and reflect a continuously variable value. So, for example
- f(t) = sin(2πft)
is continuously defined for any and all values of t.
Discrete-time signals are only defined at specific times and the independent variables can therefore only take on discrete values. Discrete-time signals are represented by a sequence of discrete values. In the context of Synth applications, the specific times at which the signals are defined are regular and evenly spaced.
In real world applications, the values a signal takes on as it varies usually represent an amplitude. The signal amplitude can also be either continuous or discrete.
When we talk about an analog signal, we are usually referring to a signal that is both a continuous-time and continuous-amplitude signal. A digital signal is both a discrete-time and discrete-amplitude signal, which translates into the reality that it is both sampled (discrete-time) and quantised (discrete-amplitude).
This all sounds very academic, of course, but it is important to understand the fundamental differences between the two domains when making design decisions about sample rates, bit resolution and cost while selecting components for implementing DSP hardware.
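The sampled-and-quantised distinction can be made concrete with a short sketch. The code below is a toy illustration (the sample rate, signal frequency and bit depth are arbitrary choices of mine, not values from this article): it takes a continuous-time sine and produces its digital counterpart by sampling it in time and quantising it in amplitude.

```python
import math

# Toy sketch: turn a continuous-time sine into a "digital" signal by
# sampling (discrete time) and quantising (discrete amplitude).
# fs, f and BITS are arbitrary example values, deliberately small so the
# discreteness is visible.
fs = 8                 # sample rate in Hz
f = 1                  # signal frequency in Hz
BITS = 3               # quantiser resolution
levels = 2 ** BITS     # number of discrete amplitude levels

def sample_and_quantise(n):
    """Return the quantised value of sin(2*pi*f*t) at sample index n."""
    t = n / fs                            # discrete time instant
    x = math.sin(2 * math.pi * f * t)     # continuous-amplitude value
    # Map [-1, 1] onto the integer codes 0 .. levels-1, then back to a
    # (now quantised) amplitude in [-1, 1].
    code = round((x + 1) / 2 * (levels - 1))
    return 2 * code / (levels - 1) - 1

digital = [sample_and_quantise(n) for n in range(fs)]  # one cycle of samples
```

Each entry of `digital` exists only at a discrete time index and can only take one of 8 amplitude values, which is exactly what "sampled and quantised" means above.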
Input sampling and quantisation considerations
When selecting an ADC (Analog to Digital Converter), there are two main dimensions to consider.
The first is the resolution of your conversion - the number of bits used to represent each sample. The choice here affects the amount of quantisation noise introduced into your signal. For example, let's assume you have a bipolar input signal that varies between -1 V and 1 V. With an 8-bit converter, that 2 V range is represented by 256 levels (values from -128 to 127), which is 255 steps. That means that 1 bit of the converted signal corresponds to an input voltage change of about 7.84 mV. Assuming a perfectly linear converter, where the quantisation error is uniformly distributed between -1/2 LSB (Least Significant Bit) and 1/2 LSB, an input signal that moves by 3.92 mV or less from one sample to the next may produce no change at all in its digital representation, whereas a larger movement will change it by at least 1 bit (and similarly for decreases). This can be thought of as a rounding error that gets injected, as noise, into the input stream as a consequence of conversion.
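The step-size arithmetic above can be checked with a few lines of Python. The ±1 V range and 8-bit resolution are the figures from the example; the variable names are mine.

```python
# Quantisation step size for the 8-bit, -1 V to +1 V example above.
v_min, v_max = -1.0, 1.0
bits = 8
steps = 2 ** bits - 1             # 255 steps between 256 codes
lsb = (v_max - v_min) / steps     # voltage represented by one code step

print(round(lsb * 1000, 2))       # step size in mV: about 7.84
print(round(lsb / 2 * 1000, 2))   # 1/2 LSB worst-case rounding: about 3.92
```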
For an "ideal" ADC, the signal to quantisation noise ratio (SQNR) is calculated as:
SQNR = 20 log10(2^Q) dB
where Q is the number of quantisation bits. As a simple rule of thumb, this can be approximated as about 6.02Q dB.
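A quick numerical check of the formula against the rule of thumb, for a few common bit depths:

```python
import math

# Ideal SQNR of a Q-bit quantiser, per the formula above, compared with
# the 6.02*Q rule of thumb.
def sqnr_db(q):
    """SQNR in dB for a Q-bit ideal quantiser: 20*log10(2^Q)."""
    return 20 * math.log10(2 ** q)

for q in (8, 16, 24):
    print(q, round(sqnr_db(q), 2), round(6.02 * q, 2))
```

For Q = 16 this gives about 96.33 dB, matching the CD-audio figure quoted below.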
For CD quality audio, the signal is represented as 16-bit samples per channel, giving an SQNR of about 96 dB, which puts the quantisation noise floor well below the threshold of human perception. (There is a whole slew of reasons why high quality audio signals are sampled and processed at higher resolutions such as 20 and 24 bits, which will be discussed in a later section on signal arithmetic.)
The second design dimension to consider is the rate at which your conversion is going to take place. The decisions you take here affect both the quality of the input signal and the quantity of processing you will be able to perform on it before presenting an output.
The first thing to consider is the bandwidth of the signal you wish to process. For example, if you are processing audio signals, what sort of quality are you satisfied with? Telephony quality audio is band-limited to about 4 kHz: the economics of higher-level grouping and super-grouping of FDM channels led to a compromise in which any frequency components above 4 kHz are treated as redundant, all the necessary speech information being representable sufficiently clearly below that threshold. For CD quality audio, the bandwidth extends out beyond 20 kHz, which is better than the average healthy human's auditory bandwidth.
One of the features of sampled signals is aliasing. This is a form of distortion in which higher frequency components, when sampled, appear "folded" (in the frequency domain) into lower frequency artefacts. To minimise this distortion, the designer has to decide the effective bandwidth of the system and enforce it. That is done in two ways. The first is to place a low-pass filter on the input signal, commonly known as an anti-aliasing filter, and the second is to select the sampling frequency correctly. This is done by calculating the Nyquist Rate, which is twice the highest frequency component required in the input signal.
So, for CD-quality audio, the standard sample rate is 44.1 kHz. This means that the Nyquist Frequency (the upper limit for input frequency components that don't produce aliasing effects) is 22.05 kHz. Since hearing for a normal healthy adult tops out at around 15 kHz, an economical linear anti-aliasing filter can be produced that eliminates enough of the aliasing components for good listening quality.
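The folding behaviour can be sketched in a few lines. This is a simplified model assuming an ideal sampler with no anti-aliasing filter in front of it; the function name is mine.

```python
# Where does an input tone appear after sampling? Any component above the
# Nyquist frequency (fs/2) is reflected ("folded") back into the 0..fs/2 band.
def alias_frequency(f_in, fs):
    """Apparent frequency of a tone at f_in Hz when sampled at fs Hz."""
    f = f_in % fs                     # sampling cannot tell f from f + k*fs
    return fs - f if f > fs / 2 else f

fs = 44_100
print(alias_frequency(30_000, fs))    # 30 kHz tone folds down to 14100 Hz
print(alias_frequency(10_000, fs))    # below Nyquist: passes through, 10000 Hz
```

This is exactly why the anti-aliasing filter has to remove the 30 kHz component before conversion: once sampled, it is indistinguishable from a genuine 14.1 kHz tone.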
Therefore, the design goals for any chosen ADC would appear to be very simple - maximise the sample rate and the number of bits and you can't go wrong. But ...
In practice this approach has to be constrained by the economics of the application and the nature of the actual processing that is going to take place once the signal has been sampled and quantised.
For those new to the idea of digital sampling, it should come as no surprise that increasing the resolution of your ADC directly increases the cost of your components. Increasing the sample rate can have a commensurate effect.
If the goal is to produce high definition audio (where resolutions of 20- and 24-bits are sampled at 48 kHz, 96 kHz, 192 kHz and beyond), a great deal of expense and time will have to be spent on components and board design for analog input stages, converters and DSP processors. This is usually outside the range of all but the richest DIYers.
If, however, the goal is to manipulate CV signals for modular synthesis, far cheaper approaches are available to the layperson.
Example: Designing the input of a CV quantiser
A common component for the modular synthesist is a CV quantiser - a module that maps a continuously varying CV into a voltage corresponding to a note on a predefined scale for pitch generation.
If we assume a normal Equal Temperament Scale of 5 octaves between 0 and 5 V (for example), this represents 12 semitones per octave (or per volt). This is 60 steps, which can be represented in 6 bits (2^6 = 64 steps). Many cost-effective microcontrollers come with one or more ADC pins offering 8- to 12-bit resolution, which makes them more than suitable for the purpose. A CV quantiser is either going to be free-running (i.e. quasi-continuous) or triggered by a clock or event from another module. It is not, even when free-running, going to be sampling at audio frequencies. It is actually going to be sampling at very low frequencies, and there will be little requirement for anti-aliasing because the clock rate of the controller is likely to far, far exceed the rates needed within the quantiser itself. This should make the selection of a core component very cheap and easy to build around. The main design challenges are more likely to be the calibration of the DAC output and the writing of the internal software.
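As a hypothetical sketch of the core mapping in such a quantiser: the 10-bit ADC resolution, the 0-5 V input range and the function name below are my assumptions for illustration, not a reference design.

```python
# Sketch of a CV quantiser's core mapping, assuming a 10-bit ADC reading a
# 0-5 V control voltage and a 12-TET scale at 1 V per octave.
ADC_BITS = 10
ADC_MAX = 2 ** ADC_BITS - 1     # 1023: highest raw ADC code
V_RANGE = 5.0                   # volts spanned by the ADC input stage

def quantise_cv(adc_reading):
    """Map a raw ADC reading to the nearest semitone's CV, in volts."""
    volts = adc_reading * V_RANGE / ADC_MAX
    semitone = round(volts * 12)      # nearest semitone (1 V/oct, 12 per volt)
    return semitone / 12              # back to a voltage for the DAC stage

print(quantise_cv(0))       # bottom of the range: 0.0 V
print(quantise_cv(1023))    # top of the range: 5.0 V
```

In a real module this function would sit between the ADC read and the DAC write, with calibration offsets applied on both sides.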
Of course, having seen how easy it is to design a simple quantiser, it should be possible to see that, given 10 or 12 bits to play with, a more complex quantiser, supporting portamento and microtonal scale mappings, could be built without incurring any significant additional hardware cost.
Output stage design
Digital signal processing applications
Delays (echo, reverb, etc)
Filters: Finite impulse response (FIR)
Filters: Infinite impulse response (IIR)
Canonical form implementations
This article is a stub. You can help the Synth DIY Wiki by expanding it.