MUS271A (Max w6) – delay & filtering

This lesson covers two fundamental methods of processing sound: delay and filtering. The techniques are related, since filtering is based on mixing a sound with a delayed copy of itself. Like all techniques covered in this class, this is only an introduction; there are many more ideas that can be developed beyond the patches presented here. The patches for this lesson can be downloaded here: 03-filters&delay

 


delay

Delay is a method of transmitting sound through a medium so that it can be heard later. That medium can be magnetic tape, digital memory, or even the air.
In Max this is done with two objects: tapin~ and tapout~. The audio memory is defined by tapin~, and tapout~ must be connected to tapin~ to use that memory. The argument to tapin~ sets the maximum length of the memory; in this patch that is 1000 ms. Both the argument to tapout~ and its input parameter set the delay time. In this patch we are listening to the sound that entered the audio memory 395 ms ago. The tapout~ delay time is limited to the length of audio memory defined by tapin~. Any number of tapout~ objects can be connected to a single tapin~ object.
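If it helps to see the same idea outside Max, here is a minimal Python/numpy sketch of a delay line (a sketch of the concept, not the tapin~/tapout~ implementation itself; the sample rate and input signal are assumptions):

    import numpy as np

    sr = 44100                     # sample rate (assumed)
    max_delay = int(1.0 * sr)      # 1000 ms of audio memory, like "tapin~ 1000"
    delay = int(0.395 * sr)        # 395 ms tap, like a tapout~ delay time of 395

    x = np.random.randn(2 * sr)    # any input signal (here, 2 seconds of noise)
    buf = np.zeros(max_delay)      # circular buffer = the delay memory
    y = np.zeros_like(x)

    w = 0                          # write position
    for n in range(len(x)):
        buf[w] = x[n]                          # write the input into memory
        y[n] = buf[(w - delay) % max_delay]    # read what was written 395 ms ago
        w = (w + 1) % max_delay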


 

With many natural acoustic spaces (e.g. a large room), the delayed sound re-enters the delay after it is heard, and the echo is repeated many times. This repeat can easily be added by sending some of the sound from tapout~ back into tapin~. In this patch, the output is multiplied by a feedback gain of 0.5 and added to the input signal before entering the delay line. This gain represents the gain reduction after each echo. If you select the first preset, you will hear multiple decaying echoes. The second preset has much less gain reduction and is an example of a rhythmic use of delay. The third preset shows the effect of having a gain at or above 1.0: the signal soon clips and the delay memory fills with distortion. The clip~ object is essential to prevent the volume from getting out of control.
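A hedged sketch of the feedback version, in the same Python form (the input and buffer length are assumptions; the feedback gain of 0.5 and the clip to ±1.0 mirror the patch):

    import numpy as np

    sr = 44100
    delay = int(0.395 * sr)
    feedback = 0.5                 # gain applied to each repeat; try values at or above 1.0

    x = np.zeros(3 * sr)
    x[0] = 1.0                     # a single click as input
    buf = np.zeros(sr)             # 1000 ms of delay memory
    y = np.zeros_like(x)

    w = 0
    for n in range(len(x)):
        delayed = buf[(w - delay) % len(buf)]
        y[n] = delayed
        # delayed output * feedback is mixed with the input, clipped, and fed back in
        buf[w] = np.clip(x[n] + feedback * delayed, -1.0, 1.0)
        w = (w + 1) % len(buf)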


 

This next patch adds a sine oscillator as a modulation source for the delay time. When the delay time is modulated, the period of any pitched material in the delay line is expanded and contracted, which causes Doppler pitch-shifting effects. The four presets show various types of time modulation. The fourth example uses an audio-frequency modulation oscillator, which will frequency modulate the contents of the delay memory. Flange, chorus and pitch shifting are all created using carefully tuned time modulation. Try to expand on this patch to create these effects. It will help to use sound sources other than the percussive “pingmaker” subpatch to demonstrate them.
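Here is a rough numpy sketch of a time-modulated delay with linear interpolation; the base delay, depth and rate below are illustrative values, not the preset values from the patch:

    import numpy as np

    sr = 44100
    x = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)   # any pitched input

    base = 0.010 * sr     # 10 ms base delay (illustrative)
    depth = 0.003 * sr    # +/- 3 ms of modulation
    rate = 0.5            # modulation rate in Hz; push toward audio rate for FM-like effects

    buf = np.zeros(sr)
    y = np.zeros_like(x)
    w = 0
    for n in range(len(x)):
        buf[w] = x[n]
        d = base + depth * np.sin(2 * np.pi * rate * n / sr)   # time-varying delay in samples
        r = (w - d) % len(buf)                                  # fractional read position
        i0 = int(r)
        i1 = (i0 + 1) % len(buf)
        frac = r - i0
        y[n] = (1 - frac) * buf[i0] + frac * buf[i1]            # linear interpolation
        w = (w + 1) % len(buf)

Small, slow modulation of a short delay mixed with the dry signal heads toward flanging and chorus; larger or faster swings give audible pitch bending.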


 

to be continued….

MUS271A (Max w5) – modulation 2 (frequency)

Last week we looked at amplitude modulation using two oscillators (ring modulation), as well as with an oscillator and a function (waveshaping). This week we will look at frequency modulation with two or more oscillators (frequency modulation – FM) and with an oscillator and a function (phase distortion – PD). Since an oscillator can be considered a function, one might say that FM and PD are the same technique. They differ in structure, and in how the parametric controls produce timbre change. Patches for this class can be downloaded here: w5fmpd


frequency modulation

FM is structured very similarly to AM/ring modulation. In its simplest form, there are two oscillators: a carrier and a modulator. In both AM and FM, sidebands are created around the carrier frequency. In AM, the sideband frequencies are Fc + Fm and Fc – Fm. In FM, the sideband frequencies are Fc + n * Fm and Fc – n * Fm, where n goes from 0 to ∞, and the number of audible sidebands increases as the depth of modulation increases.

This patch shows two-oscillator FM synthesis. Both oscillators are sine waves, and one can see the shape of the carrier being deformed as the depth of modulation increases (in the image, the modulation depth is 146). In FM, both carrier and modulator have set frequencies. The output of the modulator is multiplied by the depth of modulation, and this is added to the carrier frequency. If the carrier and modulator frequencies are related by an integer ratio, the sidebands will appear on harmonics of the carrier. One should also note that the modulation depth needs to be increased as the frequency increases, so that the amount of frequency deviation stays proportional to the carrier frequency.


 

In this example, the previous patch is given new parameters. Now the modulator frequency is expressed as the ratio of the modulator frequency to the carrier frequency, and the modulation depth is expressed as the ratio of modulation depth to modulator frequency (the index). Ratio and index are much more sensible and powerful controls for FM synthesis. In this patch you can hear the number of audible sidebands increase quickly as the index increases. The spectrum becomes dramatically sparser as the ratio increases, and a non-integer ratio results in sidebands which are not necessarily related to the fundamental. This patch shows the power of FM: many timbres can be created with only two oscillators and three parameters.
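As a Python sketch of the ratio/index parameterization (the carrier frequency, ratio and index below are arbitrary), note that a peak frequency deviation of index × Fm is equivalent to phase modulation whose peak phase deviation equals the index:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr

    fc = 220.0      # carrier frequency
    ratio = 2.0     # modulator frequency = ratio * carrier; try non-integer values
    index = 3.0     # modulation depth in Hz = index * modulator frequency

    fm = ratio * fc
    # equivalent phase-modulation form: peak phase deviation = index
    y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))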


 

As the index increases, one should also note that sidebands will be reflected around 0 Hz and the Nyquist frequency. Reflection around 0 Hz adds further inharmonicity when the ratio is non-integer. When the ratio is an integer, the reflected (wrapped) sidebands land on harmonics and add to the non-wrapped sidebands. Reflection around the Nyquist frequency is undesirable, as the timbre then becomes dependent on the sample rate. One trick before we move on: a very high index (over 10,000) can turn an FM oscillator pair into a tunable noise generator. Related to this is the patch on the right, in which the two oscillators modulate each other, so that each oscillator is both carrier and modulator. By varying the frequencies and indices, one can produce many types of chaotic and unpredictable noise.


 

In my final FM patch, I have implemented some of the FM examples that John Chowning developed when he was creating the technique of FM synthesis. In this patch, the index is modulated over the course of a note by an ADSR envelope. A pow~ function is used to give the ADSR a linear (1) or exponential (4) decay. Most of the presets sound better with a linear decay; the final two (a bell, and my version of a bell) sound better with an exponential decay. My bell preset was developed by matching the loudest harmonics of an actual bell, some of which are generated with wrapped sidebands.


 

FM has been extended much further than John Chowning's original work. Some of that work involves greater control of the spectrum by using multiple carriers and modulators, mixing aspects of FM synthesis with additive synthesis, adding realistic vibrato in vocal synthesis, and using noise sources to simulate breath or bow noise. Many papers in the Computer Music Journal and the proceedings of the International Computer Music Conference detail these developments. The Yamaha DX7 synthesizer patches should also be studied for a thorough knowledge of the technique of FM synthesis.


 

phase distortion

Phase distortion is a technique where the phase is reshaped by a function or a lookup table in order to distort the waveform. A sinusoid is produced when the phase distortion function is the identity function (let y = x); otherwise the waveform is distorted, adding sidebands. Phase distortion is very close to frequency modulation with an integer ratio. In phase distortion, the phase-shaping function is cross-faded between the distorted function and the identity function, which has an effect similar to sweeping the index from maximum to 0 in FM synthesis. The image on the right shows a simple phase distortion patch that adds differing numbers of cycles of a cosine wave to the phase. Note that the phase distortion signal is enveloped by a raised cosine so that the beginning and end of the phase are not distorted. Like FM, phase distortion can be developed much further than this example. A good reference is the Casio CZ-101 synthesizer, which created some very detailed sounds using 2 phase distortion oscillators.
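The following Python sketch follows the description above, but the exact shaping function is an assumption: it adds a raised-cosine-enveloped cosine to the phase and crossfades the distortion against the identity phase:

    import numpy as np

    sr = 44100
    f = 220.0
    phi = (f * np.arange(sr) / sr) % 1.0          # plain phase ramp, 0..1 (like phasor~)

    cycles = 3        # how many cosine cycles are folded into the phase (illustrative)
    amount = 0.5      # crossfade: 0 = identity phase (pure sinusoid), 1 = fully distorted

    env = 0.5 - 0.5 * np.cos(2 * np.pi * phi)     # raised cosine: no distortion at phase 0 and 1
    shaped = phi + amount * env * np.cos(2 * np.pi * cycles * phi)
    y = np.cos(2 * np.pi * shaped)                # read the distorted phase through a cosine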

MUS271A (Max w4) – modulation 1 (amplitude)

The next two lessons will focus on techniques which involve reshaping waveforms by modulating a simple (often sinusoidal) waveform. These modulation techniques include:

  • Ring Modulation (aka Balanced Modulation, Amplitude Modulation)
  • Waveshaping
  • Frequency Modulation
  • Phase Distortion

Each type of modulation creates a new waveform that has more harmonic content than the original. The modulating oscillator (or modulator) is connected to either the frequency or amplitude of the audible oscillator (or carrier). Varying the amplitude or frequency changes the slope of the carrier waveform continuously, and this creates new harmonics in the resultant waveform. These new harmonics are called sidebands, as they appear on both “sides” of the carrier frequency – both lower and higher.

Patches for this week are located here: w4rmwsenv. In this case, these patches are starting points – the techniques are developed further in this post. Also included are some simple examples of mixing and envelope usage.

Ring Modulation

This technique is called ring modulation because of the ring of diodes used in the analog implementation of this technique.

Digitally, the technique is much simpler: one simply multiplies the output of the carrier oscillator by the output of the modulating oscillator. Here is a simple example:

The carrier is at a fairly high frequency (8372 Hz), and the modulator is at 987.8 Hz. Note that the output consists of two components: one at C – M (~7384 Hz) and the other at C + M (~9360 Hz). These two sidebands are the sum and difference of the two oscillator frequencies. The carrier itself is completely absent (this is referred to as “carrier suppression”). One can bring the carrier back by adding an offset to the modulating waveform, as in the following patch:

Here a signal of value 1.0 is added to the modulating waveform before it is multiplied by the carrier. The carrier then appears in the sonogram in the middle of the two sidebands.
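In Python/numpy the two patches above reduce to a multiply and an added offset (the frequencies are taken from the example; everything else is an assumption):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    carrier = np.sin(2 * np.pi * 8372.0 * t)
    modulator = np.sin(2 * np.pi * 987.8 * t)

    ring = carrier * modulator           # sidebands at 8372 +/- 987.8 Hz, carrier suppressed
    am = carrier * (modulator + 1.0)     # the offset brings the carrier back between the sidebands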

Different signals can be used for the carrier or modulator. In this patch a radio button and a selector~ object are used to change the modulator waveform. All the harmonics of the modulator are now applied to the carrier, giving a much denser spectrum. For example, choose the third radio button to use a square waveform; you can then see all of the harmonics of the square wave on either side of the carrier. Also notice that the low sideband harmonics wrap around 0 Hz and proceed back upward.

 

Most of the time, ring modulation creates rather dissonant or non-harmonic timbres. This can be limited by relating the frequencies of the carrier and modulator by integers or simple ratios. In this example the carrier is 5/3 the frequency of the modulator.

 


Single Sideband Ring Modulation (AKA Frequency Shifting)

With a little more effort the upper or lower sideband from ring modulation can be suppressed. This technique requires a sine and cosine oscillator pair for the modulating oscillator, and a 90 degree phase shifted version of the carrier.
Multiplying sin × sin and cos × cos creates 180 degree and 0 degree phase shifted components. Adding the two products leaves only the upper sideband. In Max one can use the hilbert~ object to create the sine and cosine components from any sound source. In this example, a triangle wave is being processed by hilbert~. The frequency of the modulator is the frequency shift of the upper sideband. As the triangle wave is shifted upward, you can hear the harmonics go out of tune. One can also apply SSB ring modulation/frequency shifting to sound files or live sound sources. At small settings there is still some harmonic integrity, but this soon disappears.
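Here is a hedged offline sketch using scipy's analytic-signal helper (hilbert~ does this in real time; scipy.signal.hilbert works on a whole buffer at once, and the test signal and shift amount below are assumptions). Multiplying the analytic signal by a complex exponential and taking the real part combines the sin/cos products described above; flipping the sign of the exponent selects the other sideband:

    import numpy as np
    from scipy.signal import hilbert

    sr = 44100
    t = np.arange(sr) / sr
    shift = 100.0                                    # frequency shift in Hz (illustrative)

    # a crude triangle-like test tone; any source works
    x = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 660 * t) / 9.0

    a = hilbert(x)                                   # x + j * (90-degree-shifted x)
    y = np.real(a * np.exp(2j * np.pi * shift * t))  # every component moves up by 'shift' Hz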

Another variant of this technique is to use feedback to create a series of harmonics. If the carrier and modulator are related by simple ratios, a consonant timbre is created. Also, as the feedback is increased the gain increases, so the output volume may need to be adjusted to avoid clipping distortion.

 

 

 


Wave Shaping

And speaking of clipping distortion, another form of amplitude processing is wave shaping, a technique in which the original waveform is reshaped by a transfer function. The function is used to map input values to output values and will change the harmonic content of the original waveform.

A very typical transfer function is one which clips when the input reaches a limit. In this transfer function, the input value is mapped to the x-axis and the output to the y-axis. One can think of the input coming in at the bottom of the function and the output proceeding out of the right of the function. You can see that when the input goes above 1.0, the output is clipped to 1.0, and similarly the output is clipped to -1.0 for input signals below -1.0. This transfer function is similar to simple amplifier distortion, much like what you would find in a two-transistor fuzz pedal. The clipping in this case adds a great number of harmonics to the input signal (aka harmonic distortion). Also note that a steeper slope produces gain equivalent to the slope.

This transfer function can be implemented in Max with a multiply for the slope between -1.0 and 1.0 and with pong~ to limit the wave to -1.0 and 1.0. In this case the sine wave is reshaped into a sine with the top and bottom flattened. The spectrogram shows the additional harmonics added by this clipping.
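As a Python sketch (the gain value and input frequency are arbitrary), the multiply-then-clip transfer function is:

    import numpy as np

    def clip_shaper(x, gain=4.0):
        # a line of slope 'gain' through the origin, hard-clipped to +/- 1.0
        return np.clip(gain * x, -1.0, 1.0)

    sr = 44100
    t = np.arange(sr) / sr
    y = clip_shaper(np.sin(2 * np.pi * 220 * t))   # a sine with flattened top and bottom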

 

The polynomial “y = 1.5 * x – 0.5 * x^3” can be added to this waveshaper to soften the transition to a clipped waveform. This polynomial causes the slope to decrease as x increases. Any polynomial can be used in Max as a waveshaping function. In this example the polynomial is inserted after the clipping function, since the x^3 term grows rapidly once x passes the -1.0 to 1.0 limit.

Chebyshev polynomials are also often used for waveshaping, as they can transform a sine wave into one of its harmonics. This example uses the 5th Chebyshev polynomial “y = 16x^5 – 20x^3 + 5x”. As the gain increases from 0.0 to 1.0, the output is reshaped from the fundamental to the 3rd partial to the 5th partial. Multiple Chebyshev polynomials can be combined. This type of waveshaping can be very useful when creating tones which have more harmonic content as the amplitude increases.
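A small Python sketch of Chebyshev waveshaping (the input frequency and gain steps are arbitrary):

    import numpy as np

    def cheby5(x):
        # 5th Chebyshev polynomial: maps cos(theta) to cos(5 * theta)
        return 16 * x**5 - 20 * x**3 + 5 * x

    sr = 44100
    t = np.arange(sr) / sr
    for g in (0.2, 0.6, 1.0):                       # the input gain sweeps the timbre
        y = cheby5(g * np.sin(2 * np.pi * 220 * t))
        # at g = 1.0 the output is a pure 5th harmonic; lower gains mix in lower partials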

Another interesting type of waveshaping uses half of a cos~ function and the wrap mode (0) of pong~. As the amplitude increases, the sine wave is reshaped through more and more cycles of a cosine function, resulting in a large number of new harmonics. This type of waveshaping sounds much like FM synthesis (covered in next week's lesson).

MUS271A (Max w10) – digital reverb

This class is not about fully understanding digital reverb; the goal is just to get comfortable with some of the ideas. The patches can be downloaded from here: 09max-reverb.

First I would like you to listen to a chain of allpass~ filters. The allpass filter is a specially configured delay with feedback that is designed to have a flat frequency response. Though it has a flat response over its entire decay, at any given moment it is pitched. Note how different combinations of delay time and gain will sound more noise-like or more metallic. We include the allpass~ filter in most reverb designs because it adds a dense group of many short echoes.
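For reference, here is a minimal Python sketch of one common allpass form, y[n] = -g·x[n] + x[n-D] + g·y[n-D]; the delay times and gain below are illustrative, not the values in the patch:

    import numpy as np

    def allpass(x, delay, g):
        # Schroeder-style allpass: flat magnitude response, dense cluster of echoes
        y = np.zeros_like(x)
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -g * x[n] + xd + g * yd
        return y

    sr = 44100
    out = np.zeros(sr)
    out[0] = 1.0                                   # a click in
    for d_sec, g in ((0.0047, 0.7), (0.0067, 0.7), (0.0089, 0.7)):
        out = allpass(out, int(d_sec * sr), g)     # a chain of allpasses smears the click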



Our first reverb in this collection is the classic Manfred Schroeder reverb. This is just one of his designs: a combination of 4 delays with feedback (aka comb filters) and 2 allpass~ filters. In this example, I combined 2 of these reverbs in a matrix to create a stereo reverb. One innovation of this design is that the gain on each comb~ filter is set so that they all decay in the same amount of time. You can adjust the delay time (500 in the patch) to make a longer reverb. This reverb design is the basis of the freeverb~ object.
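Here is a hedged Python sketch of the comb-filter half of a Schroeder reverb. The delay times are typical textbook values rather than the ones in the patch, and the gains are derived from a shared decay time so all of the combs die away together (g = 0.001^(delay/RT60), since 0.001 corresponds to −60 dB):

    import numpy as np

    def fb_comb(x, delay, g):
        # feedback comb filter: y[n] = x[n - D] + g * y[n - D]
        y = np.zeros_like(x)
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = xd + g * yd
        return y

    sr = 44100
    rt60 = 1.5                                # target decay time in seconds
    delays_ms = (29.7, 37.1, 41.1, 43.7)      # typical Schroeder comb delays (illustrative)

    click = np.zeros(2 * sr)
    click[0] = 1.0
    wet = np.zeros_like(click)
    for d_ms in delays_ms:
        d = int(d_ms / 1000.0 * sr)
        g = 0.001 ** (d_ms / 1000.0 / rt60)   # -60 dB after rt60 seconds, so every comb decays together
        wet += fb_comb(click, d, g)
    # in the full design, 'wet' would then pass through two allpass filters (see the sketch above)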

 



 

The next reverb is based on the design of Christopher Moore's Ursa Major Spacestation. This reverb is notable for its use of a multitap delay, time modulation, and separate delay taps for early reflections. I should note that my patch sounds similar, but nowhere near as warm and rich as the actual hardware.

 


 

 

This reverb starts to feed back and resonate when the gain is set too high. In this image, the gain is set to 3.2 (the maximum allowed by the patch).

Next we have Miller Puckette and John Stautner's feedback delay network (FDN) reverb. I implemented the 16 x 16 matrix version in this example. There are no allpass filters in this design; instead, the diffusion comes from the feedback matrix connecting the 16 delay lines. The matrix has unitary gain, and the reverb will happily feed back indefinitely if the gain is set to 1.0. Many reverb designs have been based on the FDN, including IRCAM's Spat and possibly several of the Eventide reverb designs (my guess).
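To show the idea at a smaller scale, here is a 4 × 4 FDN sketch in Python with a normalized Hadamard feedback matrix (which is unitary); the delay times and gain are assumptions, and with the gain at 1.0 the energy circulates indefinitely just as described above:

    import numpy as np

    sr = 44100
    delays = [int(sr * s) for s in (0.0297, 0.0371, 0.0411, 0.0437)]   # illustrative
    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]]) / 2.0    # Hadamard / 2: an orthogonal (unitary-gain) matrix
    g = 0.97                                 # overall feedback gain; 1.0 never decays

    bufs = [np.zeros(d) for d in delays]
    ptrs = [0, 0, 0, 0]

    x = np.zeros(2 * sr)
    x[0] = 1.0
    y = np.zeros_like(x)
    for n in range(len(x)):
        taps = np.array([bufs[i][ptrs[i]] for i in range(4)])   # oldest sample of each line
        y[n] = taps.sum()
        fb = g * (H @ taps)                                      # diffusion comes from the matrix
        for i in range(4):
            bufs[i][ptrs[i]] = x[n] + fb[i]                      # input + feedback written back in
            ptrs[i] = (ptrs[i] + 1) % delays[i]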


 

The last reverb patch in this collection is Jon Dattorro’s emulation of a famous commercial reverb. This reverb design features a circle of allpass filters and delays, with many early reflection taps in the loop. Two of the allpass filters are modulated with varying time, and the sound enters the network after being diffused by a chain of allpass filters. Like the Puckette FDN, the gain can be set to 1.0 for “infinite” reverb.


MUS271A (Max w9) – ambisonics tools

Head on over to the Zurich University of the Arts – Institute for Computer Music and Sound Technology (aka ZHdK – ICST) to download some very usable tools for ambisonic encoding and decoding (the URL is https://www.zhdk.ch/en/5381). ambipanning~ can encode a signal and place it in a set speaker array. ambiencode~ will encode a number of signals at different positions into ambisonic format. ambidecode~ can take that ambisonic set of channels and decode it for a given speaker layout. There are many details and subpatchers to look into and understand in each of the help files, but this is a fairly easy and powerful system to work with. To start, you need to know the location of each of your speakers, and learn the message format to specify that location.

MUS271A (Max w9) – granular with phasor and poly~

Download the patch and abstraction here: 09-phasorgrain

To enable faster granular modulation we can use phasor~ to clock the grains at audio rate rather than with metro. The signal from phasor~ is multiplied by the number of voices outside of the patch to create a ramp that rises from 0 to almost 8. Also, each message to the poly~ voices is preceded by the message "target 0" so that the message (a list of parameters) is passed to all voices.

Inside each voice, the input from phasor~ * 8.0 is shifted down by the voice number. If the result is less than 0.0, 8.0 is added. The intention is to open the cos~ window whenever this value is between 0.0 and 1.0, with each voice's window offset by an amount based on the voice number.
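The per-voice arithmetic can be sketched in a few lines of Python (the raised-cosine window below follows the granular lesson; the exact window lookup in the abstraction may differ):

    import numpy as np

    voices = 8
    ramp = np.linspace(0.0, voices, 44100, endpoint=False)   # phasor~ * 8, one cycle shown

    windows = []
    for v in range(voices):
        ph = ramp - v                # shift the ramp down by the voice number
        ph[ph < 0.0] += voices       # if the result is less than 0.0, add 8.0
        # the window is open only while the shifted ramp is between 0.0 and 1.0
        win = np.where(ph < 1.0, 0.5 - 0.5 * np.cos(2 * np.pi * ph), 0.0)
        windows.append(win)
    # each voice's window is offset by one step, so the eight grains tile the cycle evenly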

Finally, random numbers are generated quickly with a metro, and a gate stops frequency updates when the voice is active. That is, parameters are only updated when the grain is silent.

MUS271A (Max w8) – spatialization 1

Here are a few patches that use ILD (inter-aural level difference) and ITD (inter-aural time difference) for more realistic panning. Download here: 07-ILDpanning


 

1) This patch simply drops the level at the ear farthest from the sound (the contralateral ear) by 12 dB relative to the ear closest to the sound (the ipsilateral ear). It uses cos and sin to turn the azimuth into Cartesian coordinates.
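A possible gain law, sketched in Python under the assumption that the contralateral gain falls linearly to −12 dB as the source moves fully to one side (the patch may map azimuth to gain somewhat differently):

    import numpy as np

    def ild_gains(azimuth_deg, max_drop_db=12.0):
        # azimuth: 0 = front, 90 = right, 180 = behind, 270 = left
        x = np.sin(np.radians(azimuth_deg))        # sideways component, -1 (left) .. +1 (right)
        drop = 10 ** (-max_drop_db / 20.0)         # -12 dB as a linear gain (about 0.25)
        left = 1.0 - (1.0 - drop) * max(0.0, x)    # left ear is contralateral for sounds on the right
        right = 1.0 - (1.0 - drop) * max(0.0, -x)  # right ear is contralateral for sounds on the left
        return left, right

    print(ild_gains(0))     # front: both ears equal
    print(ild_gains(90))    # hard right: the left ear is 12 dB down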

 

 

 

 


2) This second patch replaces the simple gain control with 2 cascaded lowpass filters at 1400 Hz to simulate the filtering effect of the head on the contralateral ear. Your results may vary – a smaller head would require a higher filter frequency.

 

 

 

 


 

3) With the third panner we add ITD (inter-aural time difference). The difference is set to 1 ms when the position is 90 degrees or 270 degrees, and there is no difference when the sound is directly in front of (0 degrees) or behind (180 degrees) the listener. Note that a quick change of azimuth can cause Doppler effects due to the modulated delay time.

MUS271A (Max w8) – granular patches

Here are all the patches for this topic: 06-granular


 

1) This first patch demonstrates basic granular synthesis using the poly~ object and metro. “note” is prepended to the message sent to poly~, as poly~ is typically used for polyphony.

 

 

 


The sine grain abstraction is a random-pitch sine wave generator with a raised cosine envelope. To turn a cosine into an envelope/window, one must invert it, cut the amplitude in half, and shift it up by 1/2 so that it starts at 0, goes up to 1, and ends at 0. This inverted and shifted cosine is known as a raised cosine window.
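The recipe in a couple of lines of Python (the window length is arbitrary):

    import numpy as np

    n = 1024
    phase = np.arange(n) / n                         # one cycle of phase, 0..1
    window = 0.5 - 0.5 * np.cos(2 * np.pi * phase)   # inverted, halved, shifted: 0 -> 1 -> 0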

 

 

 

 


2) This example adds 2-operator FM synthesis to the granular framework. The external patch is almost identical to the previous example, except that more parameters are packed together and sent to the poly~ object.

 

 

 

 

 

The “grain” abstraction is similar to the previous example, except that for each grain a random ratio and index are generated and passed to a small fm2op abstraction (below).


3) The third example is a granular harmonic oscillator in which each grain generates a random harmonic of the base pitch.

 

 

 

 

The abstraction is almost identical to the original sine example except that an additional random object is added to create the harmonic frequency multiplier.

 

 

 

 

 


 

4) The final example is a granular sound file player which randomizes the playback start position. A slider in the main patch is used to set the original start position. The sound needs to be loaded with the “replace” message before this patch will work.

 

 

 

The playback abstraction requires a little more logic to derive the playback start, end and time from the pitch and position parameters in the main patch.

MUS271A (Max w7) – sampling

These patches should get you started with sampling. Not all techniques are shown in my examples, and many things are easier if you use the groove~ object. Also, only a few of my examples are shown in this blog entry.

Download the patches here: 05-sampling



 

1) This first example shows manual playback of a sample: moving a slider scrubs through the sample data. The slider movement is smoothed out using the line~ object.

 

 



2) In this example, line~ is again used to play back the sample, with the playback speed controlled by setting the beginning and end playback points and the amount of time to get from one to the other. Reverse playback is easily achieved this way. Also, sin and cos are used for crossfading between 2 samples.

 

 



3) An unrefined patch for stutter playback. The startms number box controls the playback position in the buffer~. The size of the stutter is controlled by the metro time. Pitch shifting is controlled by speed ratio.

 


 

 

A slightly more refined stutter playback patch uses trapezoid~ to remove clicks at the beginning and end of each segment. Also, pitch can be controlled by MIDI note number, with 69 representing normal playback speed.

 


 


 

 

4) A sound file player which uses the folder object to open all sound files in a folder, and a popup menu to list and select them. A coll object could also be used to organize the files.

 

 



5) OLA (overlap add) sound file playback. This plays the sound in many overlapping segments, with each segment enveloped by a raised cosine window/envelope. Position and pitch can be controlled independently so that time stretching and pitch shifting are possible. A random offset can be added to each segment to avoid repetition.

 

 

 

 

MUS271A (Max w2 & 3) – oscillators – additive synthesis

In the next few classes we will look at the fundamentals of synthesis. In class we will build more extensive patches than are covered here. In this class we will look at the synthesis of the classic waveforms: sine, sawtooth (aka ramp), square (also pulse, aka rectangle), and triangle. These waveforms are often used in synthesis not only because they are simple to create with analog circuitry, but also because they share characteristics with acoustic instruments. All of these waveforms have harmonics whose amplitudes decay as you go up the harmonic series, similar to most acoustic instruments. The square and triangle waves have only odd partials, similar to a pipe with one end closed. The sawtooth wave has all partials, similar to a pipe with both ends open, or a string. The sine wave has only one partial, so it can easily be used to create complex tones by aggregation (additive synthesis). All of these patches can be downloaded here.

1) Basic Waveforms

All of these waveforms are available in Max as built-in objects. The following simple patch shows all of them (saw~, tri~, rect~ and cycle~). Each has a frequency input and a sync (phase reset) input. rect~ and tri~ also have duty cycle inputs, which reshape the waveform by moving the center of the wave shape. Listen to combinations of these waveforms by clicking on the toggle buttons.


2) Duty cycle modulation

This next patch shows the effect of changing the duty cycle in the rect~ (pulse/square wave) and tri~ objects. You will notice that the tone gets brighter as the duty cycle moves away from 0.5 toward either 0 or 1.0. A pulse wave with a very small duty cycle (either almost 0.0 or almost 1.0) will have a nearly flat spectrum with little rolloff. Both the pulse wave and the triangle wave cancel all even partials when the duty cycle is 0.5; you can see the even partials diminish as you move from 0.55 to 0.50. Modulating the duty cycle with a slow sine wave is a good way to give the sound timbral variation.


3) Detuning, more specifically – SUPERSAW!!

Another way to get timbral modulation is to group several oscillators of the same type and detune them slightly. This causes the oscillators to go in and out of tune at the rate of the difference between their frequencies. That is, if the oscillators are separated by 1 Hz, you will hear them go in and out of tune once a second. If you use just two waveforms which have only odd harmonics (sine, square and triangle), you will have a moment when all of the harmonics cancel. For this reason, detuning is usually done with sawtooth waveforms, or with more than 2 of the other waveforms. In this example, I am using 3 sawtooth waveforms. The pow functions calculate the detuning. This patch is also known as a supersaw, and adding more detuned sawtooth oscillators can make it more complex.
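A quick Python sketch of the idea using naive (non-bandlimited) sawtooths, unlike the band-limited saw~ object; the detune amount is arbitrary:

    import numpy as np

    def saw(f, t):
        return 2.0 * ((f * t) % 1.0) - 1.0           # naive sawtooth, -1..1

    sr = 44100
    t = np.arange(2 * sr) / sr
    f0 = 220.0
    detune_cents = 7.0                               # illustrative detune amount
    ratios = [2 ** (c / 1200.0) for c in (-detune_cents, 0.0, detune_cents)]
    y = sum(saw(f0 * r, t) for r in ratios) / 3.0    # three slightly detuned saws beating slowly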


 

4) Hard Sync – phase resetting.

All of the oscillators in Max have “sync” inputs which reset the phase to the beginning of the cycle. If you use both the sync input and the frequency input, the waveform can be reshaped by giving it a frequency higher than the sync frequency. That is, a frequency of 1.5 Hz will complete 1.5 cycles per second, while a sync frequency of 1 Hz will cause the waveform to restart every second. The resultant waveform is 1.5 cycles of the normal waveform repeated every second. This interrupted waveform has a sharp discontinuity and many more high harmonics. This synthesis technique is called hard sync. One interesting aspect of this is that whenever the frequency input is an integer multiple of the sync frequency, you get a harmonic of the sync frequency.

A couple of notes on this patch. The upper display shows the phase of the oscillator, the middle display the resultant waveform, and the bottom display a sonogram. There is a bit of ugly logic in the middle of the patch to make the sawtooth “wrap”, i.e. to keep an amplified phasor~ within the range of 0.0 to 1.0. Max doesn't have a wrap~ object like Pd, so I rescaled the input and output of phasewrap~ to get the same behavior. An experienced Max programmer is welcome to tell me a better way to do this :).
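For what it's worth, the arithmetic of hard sync is easy to sketch in Python with a wrapped phase (again a naive, non-bandlimited sawtooth; the frequencies are illustrative):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    sync_f = 110.0           # master frequency: the waveform restarts at this rate
    osc_f = 1.5 * sync_f     # slave frequency: 1.5 cycles completed per sync period

    master = (sync_f * t) % 1.0                # phasor at the sync frequency
    slave_phase = (osc_f / sync_f) * master    # runs 0 .. 1.5, reset by every master wrap
    y = 2.0 * (slave_phase % 1.0) - 1.0        # wrapped phase read as a hard-synced sawtooth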


5) Additive synthesis

Additive synthesis is the technique of adding together multiple waveforms at the harmonic frequencies of a fundamental. The partials are typically sine waves (which have no harmonics themselves). The harmonic ratios can be easily manipulated, as can the amplitude of each partial. All of these numbers can, and usually do, change over the duration of a note. This amount of detail allows one to specify an exact timbre, but it also requires a large amount of data (typically a separate amplitude and pitch trajectory for 32 or more partials). For this reason, additive synthesis is not often used, as it takes a lot of exacting work to get a good sounding result. Current common uses of additive synthesis are pitch shifting and autotune.

5a) The tone wheel organ. One common example of additive synthesis is the tone wheel organ, where the amplitude of each partial is controlled by drawbars. Here is a patch which simulates the drawbar settings for a simple tone wheel organ. Only 8 sine wave oscillators are used, and an ADSR envelope generator object is used to shape the note. This patch is designed to be played from a MIDI keyboard, but the note can also be set directly (the number box above sig~), with a 50 ms note triggered by the bang above delay 50. A cycle~ 4 provides a little vibrato. The amplitude is not normalized in this patch, so the output volume needs to be turned down to avoid distortion.
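In Python, the core of a drawbar-style additive patch is just a weighted sum of sines; the gains below are made up for illustration and are not the drawbar settings in the patch:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    f0 = 220.0

    harmonics = [1, 2, 3, 4, 5, 6, 7, 8]                 # 8 sine oscillators, as in the patch
    gains = [1.0, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1, 0.1]     # illustrative "drawbar" levels

    y = sum(g * np.sin(2 * np.pi * h * f0 * t) for h, g in zip(harmonics, gains))
    y = y / sum(gains)                                    # normalize the summed amplitude to 1.0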

5b) Going down the rabbit hole. This patch demonstrates simple additive synthesis, but it also demonstrates the need for more detail. Creating many oscillators and amplitude controls by hand can be tedious, so in this next example I am using the poly~ object to create any number of harmonics. poly~ takes an abstraction (a separate patch) and creates many copies of it. Each copy can find out which copy it is from the thispoly~ object. For additive synthesis, that number is used as the partial number and is multiplied by the fundamental frequency. There is also an amplitude adjustment which mutes the voice when its frequency is above 20000 Hz; this is a crude method to stop aliasing.

On the right side of the abstraction is a sel object which computes the amplitude for each harmonic. In this example, I am creating various simple waveforms, from left to right: sine, pulse train, sawtooth and square. Under each sel output you can see the logic which determines the amplitude of each partial based on the thispoly~ number. These amplitudes are sent out the out~ 2 outlet to be summed into an overall amplitude.

The external patch is simple in comparison. poly~ creates 64 copies of the partial patch, a radio button selects the waveform, and the left output of poly~ (the summed sine waves) is divided by the summed partial amplitudes so that the resultant waveform has an amplitude of 1.0.

oscbank~ can alternatively be used for additive synthesis, but independent control of each partial is more difficult.