MUS271A (Max w10) – digital reverb

This class is not about fully understanding digital reverb, but about getting comfortable with some of the ideas. The patches can be downloaded here: 09max-reverb.

First I would like you to listen to a chain of allpass~ filters. The allpass filter is a specially configured delay with feedback that is designed to have a flat frequency response. Though it has a flat frequency response over its entire decay, at any given moment it is pitched. Note how different combinations of delay time and gain sound more noise-like or more metallic. We include the allpass~ filter in most reverb designs because it adds a dense group of many short echoes.
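For reference, here is a rough Python sketch of the allpass~ recurrence and a chain of them; the delay times and gain below are arbitrary choices, not the values in the patch.

```python
import numpy as np

def schroeder_allpass(x, delay_samples, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-D] + g*y[n-D].
    The magnitude response is flat for any 0 < g < 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        x_d = x[n - delay_samples] if n >= delay_samples else 0.0
        y_d = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

# Chain a few allpasses with different delay times (values are made up)
# and listen to an impulse smear into a dense burst of short echoes.
sr = 44100
impulse = np.zeros(sr)
impulse[0] = 1.0
out = impulse
for delay_ms, g in [(4.7, 0.7), (13.1, 0.7), (31.3, 0.7)]:
    out = schroeder_allpass(out, int(sr * delay_ms / 1000.0), g)
```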


Our first reverb in this collection is the classic Manfred Schroeder reverb. This is just one of his designs: a combination of 4 comb filters (delays with feedback) and 2 allpass~ filters. In this example, I combined two of these reverbs in a matrix to create a stereo reverb. One innovation of this reverb is that the gain on each comb~ filter is set so that they all decay in the same amount of time. You can adjust the delay time (500 in the patch) to make a longer reverb. This reverb design is the basis of the free Freeverb algorithm.
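The "equal decay" trick is worth writing down: a comb filter's feedback has been applied RT60/delay times after RT60 milliseconds, so giving each comb a gain of 0.001^(delay/RT60) makes them all hit -60 dB together. A quick sketch (my delay times, not necessarily the patch's):

```python
def comb_gain(delay_ms, rt60_ms):
    """Feedback gain that makes a comb filter decay by 60 dB (a factor of
    0.001) after rt60_ms, whatever its delay time is."""
    return 0.001 ** (delay_ms / rt60_ms)

# Four delay times in Schroeder's usual range (illustrative values):
delays = [29.7, 37.1, 41.1, 43.7]
rt60 = 1500.0  # ms
gains = [comb_gain(d, rt60) for d in delays]
print(gains)   # all four combs reach -60 dB at the same time
```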




The next reverb is based on the design of Christopher Moore’s Ursa Major Spacestation. This reverb is notable for its use of multitap delay, time modulation, and separate delay taps for early reflections. I should note that my patch sounds similar, but nowhere near as warm and rich as the actual hardware.





This reverb starts to feed back and resonate when the gain is set too high; the patch allows a maximum gain of 3.2.






Next we have Miller Puckette and John Stautner’s feedback delay network (FDN) reverb. I implemented the 16 x 16 matrix reverb in this example. There are no allpass filters in this design; instead, the diffusion comes from the feedback matrix connecting the 16 delay lines. The matrix is unitary, so the reverb will feed back indefinitely (without blowing up) if the gain is set to 1.0. Many reverb designs have been based on the FDN, including IRCAM’s Spat, and possibly several of the Eventide reverb designs (my guess).
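A minimal FDN sketch, assuming a normalized Hadamard matrix for the unitary mixing stage and made-up delay lengths:

```python
import numpy as np

def fdn_reverb(x, sr=44100, rt60_s=3.0, n=16):
    """Feedback delay network sketch: n delay lines coupled by a unitary
    (here, normalized Hadamard) feedback matrix. Setting the per-line
    gains g to 1.0 makes the network ring indefinitely without blowing up.
    n must be a power of two for this Hadamard construction."""
    H = np.array([[1.0]])
    for _ in range(int(np.log2(n))):        # build an n x n Hadamard matrix
        H = np.block([[H, H], [H, -H]])
    A = H / np.sqrt(n)                      # normalize -> unitary
    delays = np.array([int(sr * t) for t in np.linspace(0.025, 0.090, n)])
    g = 0.001 ** (delays / (rt60_s * sr))   # per-line decay gains
    bufs = [np.zeros(d) for d in delays]
    idx = np.zeros(n, dtype=int)
    y = np.zeros(len(x))
    for t in range(len(x)):
        taps = np.array([bufs[i][idx[i]] for i in range(n)])
        y[t] = taps.sum()
        fb = A @ (g * taps) + x[t]          # mix, attenuate, add the input
        for i in range(n):
            bufs[i][idx[i]] = fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```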


The last reverb patch in this collection is Jon Dattorro’s emulation of a famous commercial reverb. This reverb design features a circle of allpass filters and delays, with many early reflection taps in the loop. Two of the allpass filters are modulated with varying time, and the sound enters the network after being diffused by a chain of allpass filters. Like the Puckette FDN, the gain can be set to 1.0 for “infinite” reverb.


MUS271A (Max w9) – ambisonics tools

Head on over to Zurich University of the Arts – Institute for Computer Music and Sound Technology (aka ZHdK – ICST) to download some very usable tools for ambisonic encoding and decoding. ambipanning~ can encode a signal and place it in a set speaker array. ambiencode~ will encode a number of signals at different positions into ambisonic format. ambidecode~ can take that set of ambisonic channels and decode it to a set speaker format. There are many details and subpatchers to look into and understand in each of the help files, but this is a fairly easy and powerful system to work with. To start, you need to know the location of each of your speakers and learn the message format used to specify those locations.
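To get a feel for what encoding and decoding mean here, a rough first-order, 2D sketch in Python. This is the textbook B-format math, not necessarily what the ICST externals compute internally, and the speaker angles are made up:

```python
import numpy as np

def encode_fo_2d(signal, azimuth_rad):
    """First-order 2D B-format encode (W, X, Y) - textbook formulation,
    not a transcription of ambiencode~."""
    w = signal / np.sqrt(2.0)
    x = signal * np.cos(azimuth_rad)
    y = signal * np.sin(azimuth_rad)
    return w, x, y

def decode_fo_2d(w, x, y, speaker_azimuths_rad):
    """Basic projection decode to a ring of speakers at the given azimuths."""
    n = len(speaker_azimuths_rad)
    return [(np.sqrt(2.0) * w + x * np.cos(a) + y * np.sin(a)) / n
            for a in speaker_azimuths_rad]

# Example: a 440 Hz tone panned to 45 degrees, decoded to a quad ring.
sr = 44100
sig = np.sin(2.0 * np.pi * 440.0 * np.arange(sr) / sr)
w, x, y = encode_fo_2d(sig, np.deg2rad(45.0))
speakers = decode_fo_2d(w, x, y, np.deg2rad([45.0, 135.0, 225.0, 315.0]))
```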

MUS271A (Max w9) – granular with phasor and poly~

Download the patch and abstraction here: 09-phasorgrain

To enable faster granular modulation we can use phasor~ to clock the grains at audio rate rather than metro. The signal from phasor~ is multiplied by the number of voices outside of the poly~ patch to create a ramp that rises from 0 to almost 8. Also, each message to the poly~ voices is preceded by the message "target 0" so that the message (a list of parameters) is passed to all voices.

Inside each voice, the input from phasor~ * 8.0 is shifted down by the voice number. If the result is less than 0.0, 8.0 is added. The intention is to open the cos~ window whenever the shifted ramp is between 0.0 and 1.0, so that each voice's window is offset by an amount based on its voice number.
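In Python-ish terms, the per-voice logic looks roughly like this (a sketch, assuming 8 voices and a raised cosine window):

```python
import numpy as np

NUM_VOICES = 8

def voice_window(master_phase, voice_number):
    """Per-voice grain window from a shared phasor~ ramp (0..1): scale the
    ramp to 0..NUM_VOICES, shift it down by the voice number, wrap negative
    values back up, and open a raised-cosine window only while the result
    is between 0 and 1."""
    ramp = master_phase * NUM_VOICES - voice_number
    ramp = np.where(ramp < 0.0, ramp + NUM_VOICES, ramp)   # wrap back into range
    active = ramp < 1.0
    window = np.where(active, 0.5 - 0.5 * np.cos(2.0 * np.pi * ramp), 0.0)
    return window, active

# Example: over one cycle of the master phasor, voice 3 opens its window
# during the fourth eighth of the cycle.
phase = np.linspace(0.0, 1.0, 1000, endpoint=False)
win, act = voice_window(phase, 3)
```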

Finally, random numbers are generated quickly with a metro, and a gate stops frequency updates when the voice is active. That is, parameters are only updated when the grain is silent.

MUS271A (Max w8) – spatialization 1

Here are a few patches that use ILD (inter-aural level difference) and ITD (inter-aural time difference) for more realistic panning. Download here: 07-ILDpanning


1) This patch simply drops the ear most distant from the sound (contralateral) by 12 dB relative to the ear closest to the sound (ipsilateral). It uses cos and sin to turn azimuth into cartesian coordinates.
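A rough Python sketch of this level-difference idea (the exact mapping from azimuth to gain in the patch may differ; here sin(azimuth) is used as the left/right cue):

```python
import numpy as np

def ild_gains(azimuth_deg, max_drop_db=12.0):
    """Attenuate the far (contralateral) ear by up to max_drop_db as the
    source moves to the side."""
    side = np.sin(np.deg2rad(azimuth_deg))      # -1 (hard left) .. +1 (hard right)
    drop = 10 ** (-max_drop_db / 20.0)          # 12 dB as a linear factor
    left = 1.0 if side <= 0 else 1.0 + side * (drop - 1.0)
    right = 1.0 if side >= 0 else 1.0 - side * (drop - 1.0)
    return left, right

# A source at 90 degrees leaves the right ear at full level and the left
# ear 12 dB down.
print(ild_gains(90))   # approximately (0.25, 1.0)
```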





2) This second patch replaces the simple gain control with 2 cascaded lowpass filters at 1400 Hz to simulate the filtering effect of the head on the contralateral ear. Your results may vary – a smaller head would require a higher frequency filter.






3) With the third panner we add ITD (inter-aural time difference). The difference is set to 1 ms when the position is 90 or 270 degrees, and there is no difference when the sound is directly in front of (0 degrees) or behind (180 degrees) the listener. Note that a quick change of azimuth can cause Doppler effects due to the modulated delay time.
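A sketch of that delay mapping (the |sin(azimuth)| curve is my choice; the patch may interpolate differently, but it hits the same endpoints):

```python
import numpy as np

def itd_delays(azimuth_deg, max_itd_ms=1.0):
    """Interaural time difference sketch: delay the far ear by up to 1 ms.
    sin(azimuth) gives 0 ms at 0/180 degrees and the full 1 ms at 90/270."""
    side = np.sin(np.deg2rad(azimuth_deg))
    left_delay = max_itd_ms * max(side, 0.0)    # source on the right delays the left ear
    right_delay = max_itd_ms * max(-side, 0.0)  # source on the left delays the right ear
    return left_delay, right_delay

print(itd_delays(90))   # (1.0, 0.0)
print(itd_delays(0))    # (0.0, 0.0)
```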

MUS271A (Max w8) – granular patches

Here are all the patches for this topic: 06-granular


1) This first patch demonstrates basic granular synthesis using the poly~ object and metro. "note" is prepended to the message sent to poly~, as poly~ is typically used for polyphony.




The sine grain abstraction is a random-pitch sine wave generator with a raised cosine envelope. To turn a cosine into an envelope/window, one must invert it, cut the amplitude by half, and shift it up by 1/2 so that it starts at 0, goes up to 1, and ends at zero. This inverted and shifted cosine is known as a raised cosine window.
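That recipe is exactly the Hann window. A small Python sketch of a sine grain built this way:

```python
import numpy as np

def raised_cosine(n_samples):
    """Raised cosine (Hann) window: invert a cosine, halve it, and shift it
    up by 1/2 so it runs 0 -> 1 -> 0 over the grain."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    return 0.5 - 0.5 * np.cos(2.0 * np.pi * t)

def sine_grain(freq, dur_ms, sr=44100):
    """A single sine grain with the raised cosine envelope applied."""
    n = int(sr * dur_ms / 1000.0)
    t = np.arange(n) / sr
    return np.sin(2.0 * np.pi * freq * t) * raised_cosine(n)
```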





2) This example adds 2-operator FM synthesis to the granular framework. The main patch is almost identical to the previous example, except that more parameters are packed together and sent to the poly~ object.






The "grain" abstraction is similar to the previous example, except that for each grain a random ratio and index are generated and given to a small fm2op abstraction (below).
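A sketch of what one such FM grain computes (the parameter ranges here are my own guesses, not the ones the patch picks):

```python
import numpy as np

def fm2op_grain(carrier_freq, ratio, index, dur_ms, sr=44100):
    """Two-operator FM grain: a modulator at carrier_freq * ratio modulates
    the carrier's phase with depth set by the index, and the result is
    shaped by a raised cosine window."""
    n = int(sr * dur_ms / 1000.0)
    t = np.arange(n) / sr
    mod = np.sin(2.0 * np.pi * carrier_freq * ratio * t) * index
    car = np.sin(2.0 * np.pi * carrier_freq * t + mod)
    window = 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(n) / n)
    return car * window

# One grain with a randomly chosen ratio and index, as in the abstraction.
rng = np.random.default_rng()
grain = fm2op_grain(440.0, ratio=rng.integers(1, 5),
                    index=rng.uniform(0.0, 4.0), dur_ms=50)
```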

3) The third example is a granular harmonic oscillator in which each grain generates a random harmonic of the base pitch.





The abstraction is almost identical to the original sine example except that an additional random object is added to create the harmonic frequency multiplier.
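The extra step amounts to something like this (the range of harmonics is my assumption):

```python
import numpy as np

rng = np.random.default_rng()

def harmonic_grain_freq(base_freq, max_harmonic=8):
    """Pick a random harmonic of the base pitch for each grain."""
    return base_freq * rng.integers(1, max_harmonic + 1)

# e.g. grains at 220, 440, 660, ... Hz for a 220 Hz base pitch
freqs = [harmonic_grain_freq(220.0) for _ in range(5)]
```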







4) The final example is a granular sound file player which randomizes the playback start position. A slider in the main patch is used to set the original start position. The sound needs to be loaded with the "replace" message before this patch will work.




The playback abstraction requires a little more logic to derive the playback start, end and time from the pitch and position parameters in the main patch.
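One plausible way that derivation could look in code; the names, the random spread, and the exact arithmetic are my assumptions, not a transcription of the abstraction:

```python
import random

def grain_playback_params(position_ms, grain_ms, pitch_ratio, spread_ms=50.0):
    """Map the main patch's position and pitch parameters to line~-style
    playback values for one grain."""
    start = position_ms + random.uniform(0.0, spread_ms)   # randomized start
    end = start + grain_ms * pitch_ratio                    # how far into the buffer to read
    duration = grain_ms                                      # how long the grain takes
    return start, end, duration

# Reading twice the span (pitch_ratio = 2.0) in the same grain duration
# raises the pitch an octave.
print(grain_playback_params(1000.0, 80.0, 2.0))
```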

MUS271A (Max w7) – sampling

These patches should get you started with sampling. Not all techniques are shown in my examples, and many things are easier if you use the groove~ object. Also, only a few of my examples are shown in this blog entry.

Download the patches here: 05-sampling



1) This first example shows manual playback of a sample by moving a slider to scrub through it. Your slider movement is smoothed out using the line~ object.




2) In this example, line~ is again used to play back the sample, with the playback speed controlled by setting the beginning and end playback points and the amount of time to get from one to the other. Reverse playback is easily achieved this way.
Also, sin and cos are used for crossfading between the two samples.
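The arithmetic behind the speed control, as a quick sketch:

```python
def line_playback_message(start_ms, end_ms, speed=1.0):
    """Ramp time for line~-style playback of a buffer segment: the segment
    length divided by the playback speed. end_ms < start_ms gives reverse
    playback."""
    ramp_ms = abs(end_ms - start_ms) / speed
    # In the patch this corresponds to ramping line~ from start_ms to end_ms
    # over ramp_ms milliseconds.
    return start_ms, end_ms, ramp_ms

print(line_playback_message(0.0, 1000.0, 2.0))   # double speed: 500 ms ramp
print(line_playback_message(1000.0, 0.0))        # reverse at normal speed
```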




3) An unrefined patch for stutter playback. The startms number box controls the playback position in the buffer~. The size of the stutter is controlled by the metro time. Pitch shifting is controlled by speed ratio.





A slightly more refined stutter playback patch using trapezoid~ to remove clicks at the beginning and end of each segment. Also, pitch can be controlled by MIDI note number, with 69 representing normal playback speed.
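The note-number-to-speed mapping is the usual equal-tempered ratio:

```python
def midi_to_speed(note, reference=69):
    """Playback speed ratio from MIDI note number, with note 69 giving
    normal speed (ratio 1.0): each semitone is a factor of 2^(1/12)."""
    return 2.0 ** ((note - reference) / 12.0)

print(midi_to_speed(69))   # 1.0  (normal speed)
print(midi_to_speed(81))   # 2.0  (an octave up, twice as fast)
```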






4) A sound file player which uses the folder object to open all sound files in a folder, and a popup menu to list and select them. A coll object could also be used to organize the files.




5) OLA (overlap-add) sound file playback. This plays the sound in many overlapping segments, with each segment enveloped by a raised cosine window/envelope. Position and pitch can be controlled independently, so that time stretching and pitch shifting are possible. A random offset can be added to each segment to avoid repetition.
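A compact Python sketch of the overlap-add idea; grain size, overlap, and random spread are illustrative values, not the patch's, and the buffer is assumed to be comfortably longer than one grain:

```python
import numpy as np

def ola_playback(buf, sr, position_s=0.0, pitch_ratio=1.0, stretch=1.0,
                 out_s=2.0, grain_ms=80.0, overlap=2, spread_ms=20.0):
    """Overlap-add playback sketch: raised-cosine-windowed grains are read
    from the source buffer, resampled by pitch_ratio, and summed at a fixed
    hop in the output, so position/stretch and pitch are independent."""
    rng = np.random.default_rng()
    grain = int(sr * grain_ms / 1000.0)
    hop = grain // overlap
    window = 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(grain) / grain)
    out = np.zeros(int(sr * out_s) + grain)
    read_len = grain * pitch_ratio
    for start in range(0, len(out) - grain, hop):
        # Source position advances 'stretch' times slower than the output,
        # plus a small random offset to avoid obvious repetition.
        src = sr * position_s + start / stretch + rng.uniform(0.0, sr * spread_ms / 1000.0)
        src = min(max(src, 0.0), len(buf) - read_len - 2)
        idx = src + np.arange(grain) * pitch_ratio       # resample by pitch_ratio
        i0 = idx.astype(int)
        frac = idx - i0
        seg = buf[i0] * (1.0 - frac) + buf[i0 + 1] * frac  # linear interpolation
        out[start:start + grain] += seg * window / overlap
    return out
```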





MUS271A (Max w1) – starting point

The focus in 271A will be sound generation with Max/MSP. In the first two weeks I will go over any concepts people are unfamiliar with, so there will be less standard material. However, here are some patches we will use as a jumping-off point. The patches can be downloaded here: 00-maxbasics


Here is a patch showing a square/saw oscillator as an abstraction (a patch which shows up as an object within Max). Frequency is converted from MIDI note number with mtof. The toggle button selects the square or saw wave (this is not a true saw wave). live.gain~ is used as an output volume control and meter. ezdac~ is used to send the audio to the computer sound output. Finally, scope~ is used to display the waveform. We will look at how to modify the scope~ display using the info panel (clicking the "i" in the right column).
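For reference, mtof computes the standard MIDI-to-frequency conversion:

```python
def mtof(note):
    """MIDI note number to frequency in Hz: A440 is note 69, and each
    semitone is a factor of 2^(1/12)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(mtof(69))   # 440.0
print(mtof(60))   # ~261.63 (middle C)
```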

The 02-squaresaw patch makes up the abstraction. The inlets and outlet are labelled “1”, “2” on the top and “1” on the bottom. Both signals and messages can pass through these ports. The rest of the patch is a simple combination of a rect~ generator and a cycle~ generator (making square and sine waves).




This next patch shows the many MIDI messages and the objects which receive them. The simplest is probably bendin, which receives pitch bend information. Pitch bend has a range from -64 to 63, with 0 in the center. I am dividing the lower range by 64 and the upper range by 63 so that bend up and down are symmetrical.
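That scaling, written out:

```python
def normalize_bend(bend):
    """Scale a -64..63 pitch bend value to -1.0..1.0, dividing the lower
    half by 64 and the upper half by 63 so full bend down and full bend up
    both reach exactly 1.0 in magnitude."""
    return bend / 64.0 if bend < 0 else bend / 63.0

print(normalize_bend(-64))  # -1.0
print(normalize_bend(63))   #  1.0
print(normalize_bend(0))    #  0.0
```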

ctlin receives all other MIDI controllers (knobs, switches, sliders, pedals). The value (0-127) comes out the left outlet, and reflects the control parameter value. The center outlet is the controller number (also 0-127). Finally the right outlet is the MIDI channel. This allows a keyboard or other MIDI device to target 16 different MIDI destinations (typically different synthesizers).

notein receives both note down and note up messages. The left outlet is the note number (0-127, C4 = 60), the center outlet is the velocity or pressure (0-127), and again the right outlet is the MIDI channel. A velocity of 0 indicates a note up or release.

In this patch I am using poly to pack a voice number with the MIDI note and velocity. This voice number is used by route to direct the note and velocity to one of three sound generators. The outputs of the sound generators are all sent to “sum” with send~ and receive~.

Another patch in the class one zip archive shows how to use the computer keyboard to generate “note” messages.


This patch is a simple use of metro. A BPM value is converted into milliseconds per beat by dividing 60000 by the BPM. metro then sends "bang" messages to two message boxes which control the amplitude and frequency of a sine wave (cycle~), which gives a sort of kick drum sound.



This patch uses qlist as a sequencer. It will be explained more thoroughly in class. When metro is started, the time is incremented 10 ms at a time by counter. This time is used whenever one of the note messages (56, 55, 59, 58) is clicked. This information is collated into an append message and recorded by the qlist. A "bang" message causes the qlist to replay the note messages and send them to the simple synthesizer voice. qlist restarts playback after finishing by triggering another "bang" at the end.