mus177/206 – updated fuzztrem, other JUCE details

Here is the updated fuzztrem plugin with 3 parameters and controls added: ClassTest1126.

Sample code to parse MIDI – noteOff, noteOn and bendPitch are my own methods – you will need to write your own versions of these methods to connect the MIDI data to your process.

    MidiBuffer::Iterator it(midiMessages);
    MidiMessage msg(0x80,0,0,0);
    int pos;
    int32_t note;

    // start with the MIDI processing
    while(it.getNextEvent(msg , pos))
    {
        if(msg.isNoteOn())
        {
            note = msg.getNoteNumber();
            noteOn(note);
        }
        if(msg.isNoteOff())
        {
            note = msg.getNoteNumber();
            noteOff(note);
        }
        if(msg.isPitchWheel())
        {
            bendPitch(msg.getPitchWheelValue());
        }
    }
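
This loop goes at the top of processBlock(), reading from the MidiBuffer that JUCE passes in. Note that MidiBuffer::Iterator is the older iteration API; recent JUCE versions also let you loop over the buffer directly with a ranged for.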

Simple code to save your current settings to a session (pulled from my plugin ++pitchsift):

void PitchsiftAudioProcessor::getStateInformation (MemoryBlock& destData)
{
    // You should use this method to store your parameters in the memory block.
    // You could do that either as raw data, or use the XML or ValueTree classes
    // as intermediaries to make it easy to save and load complex data.
    
    ScopedPointer<XmlElement> xml (parameters.state.createXml());
    copyXmlToBinary (*xml, destData);
}

void PitchsiftAudioProcessor::setStateInformation (const void* data, int sizeInBytes)
{
    // You should use this method to restore your parameters from this memory block,
    // whose contents will have been created by the getStateInformation() call.
    
    // This getXmlFromBinary() helper function retrieves our XML from the binary blob..
    ScopedPointer<XmlElement> xmlState (getXmlFromBinary (data, sizeInBytes));

    // hand the restored tree back to the value tree that holds our parameters
    if (xmlState != nullptr)
        parameters.replaceState (ValueTree::fromXml (*xmlState));
}

mus177/206 – third assignment

  1. make 2 audio externals, one which generates sound, one which processes sound (a minimal generator skeleton is sketched after this list)
  2. both externals should take a float message or an extra signal inlet to allow timbre modulation
  3. make a pd patch which uses both externals and demonstrates controls
  4. make sure input signals and messages do not go beyond allowed limits
  5. comment your code and pd patch
  6. turn in code and pd patch week 6 tuesday
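
To get you started on item 1, here is an untested sketch of a minimal generator external. The object name gen~, the sawtooth it produces, and the timbre inlet are all placeholders for your own design; the clamping in the perform routine is one way to satisfy item 4.

#include "m_pd.h"

static t_class *gen_tilde_class;

typedef struct _gen_tilde
{
    t_object x_obj;
    t_float x_f;       // frequency, from a float sent to the left inlet
    t_float timbre;    // timbre control, from a float message (item 2)
    float phase;       // oscillator state, 0 to 1
    float samplerate;
} t_gen_tilde;

static t_int *gen_tilde_perform(t_int *w)
{
    t_gen_tilde *x = (t_gen_tilde *)(w[1]);
    t_float *freq = (t_float *)(w[2]);
    t_float *out = (t_float *)(w[3]);
    int n = (int)(w[4]);
    while (n--)
    {
        // clamp frequency so input can't go beyond allowed limits (item 4)
        float f = *freq++;
        if (f < 0.0f) f = 0.0f;
        if (f > x->samplerate * 0.5f) f = x->samplerate * 0.5f;
        x->phase += f / x->samplerate;
        while (x->phase >= 1.0f) x->phase -= 1.0f;
        *out++ = 2.0f * x->phase - 1.0f;   // plain sawtooth - replace with your own
    }
    return (w + 5);
}

static void gen_tilde_dsp(t_gen_tilde *x, t_signal **sp)
{
    x->samplerate = sp[0]->s_sr;
    dsp_add(gen_tilde_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, (t_int)sp[0]->s_n);
}

static void *gen_tilde_new(void)
{
    t_gen_tilde *x = (t_gen_tilde *)pd_new(gen_tilde_class);
    x->phase = 0.0f;
    x->timbre = 0.0f;
    x->samplerate = sys_getsr();
    floatinlet_new(&x->x_obj, &x->timbre);   // right inlet: timbre float
    outlet_new(&x->x_obj, gensym("signal"));
    return (x);
}

void gen_tilde_setup(void)
{
    gen_tilde_class = class_new(gensym("gen~"), (t_newmethod)gen_tilde_new,
        0, sizeof(t_gen_tilde), 0, 0);
    class_addmethod(gen_tilde_class, (t_method)gen_tilde_dsp, gensym("dsp"), A_CANT, 0);
    CLASS_MAINSIGNALIN(gen_tilde_class, t_gen_tilde, x_f);
}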

mus177/206 – 3 oscillator examples

a basic triangle oscillator o1

a flexible (and probably too complicated) overtone oscillator otosc

a bell-like oscillator with non-integer harmonics and an LCM phasor o2

----

a couple extra methods to create sine waves:

// a resonant sin wave - based on the state variable filter
// sinZ and cosZ hold the oscillator state between blocks,
// initialized to 0.0 and 1.0
void ResonantFilterSine(float *frequency, float *output, long samplesPerBlock)
{
    long sample;
    float freqAngle;
    for(sample = 0; sample < samplesPerBlock; sample++)
    {
        freqAngle = twoPi * *(frequency + sample)/sampleRate;
        *(output+sample) = sinZ = sinZ + freqAngle * cosZ;
        cosZ = cosZ - freqAngle * sinZ;
    }
}

// method six - complex multiply sin
// freq and freqBlock are normalized frequency (Hz divided by sample rate);
// sinZ and cosZ hold the oscillator state, initialized to 0.0 and 1.0
void SinComplex(float freq, float *freqBlock, float *output, long samplesPerBlock)
{
    long sample;
    float freqAngle, angleReal, angleImag;

    freqAngle = twoPi * freq;
    angleReal = cos(freqAngle);
    angleImag = sin(freqAngle);
    for(sample = 0; sample < samplesPerBlock; sample++)
    {
        // recompute the rotation only if a frequency signal is connected
        if(freqBlock)
        {
            freqAngle = twoPi * *(freqBlock + sample);
            angleReal = cos(freqAngle);
            angleImag = sin(freqAngle);
        }
        // rotate the (sinZ, cosZ) pair by freqAngle each sample
        *(output+sample) = angleReal * sinZ - angleImag * cosZ;
        cosZ = angleReal * cosZ + angleImag * sinZ;
        sinZ = *(output+sample);
        if(sinZ > 1.0f) sinZ = 1.0f;
    }
}
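
One caveat with the complex multiply method (and the reason for the sinZ clamp above): rounding errors make the sinZ/cosZ pair slowly drift away from unit magnitude. A common fix, sketched untested here in the same implicit-state style as the methods above, is to nudge the pair back toward unit length once per block:

// untested sketch - pull the oscillator state back toward unit magnitude;
// 1.5 - 0.5*m is a cheap first order approximation of 1/sqrt(m) near m = 1
void RenormalizeSine()
{
    float magnitude = sinZ * sinZ + cosZ * cosZ;
    float gain = 1.5f - 0.5f * magnitude;
    sinZ *= gain;
    cosZ *= gain;
}

With this in place, the hard clamp on sinZ should no longer be necessary.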

MUS174B – Syllabus

mus 174b – audio studio techniques – winter 2018
cpmc 203/268/269 – tuesday, thursday 11:00 to 12:20
instructor – tom erbe – tre@ucsd.edu – cpmc 254

teaching assistant – jordan morton 

topics

  1. general mix and edit principles
  2. editing
  3. filtering, eq – depth and layering
  4. compression, expansion, gate, limiting, signal routing for effects
  5. echo, delay, chorus, flange – tempo synchronization of effects
  6. reverb, spatialization, varispeed/doppler – use of space in mixing
  7. distortion, emulation, spectral effects
  8. mastering techniques
  9. midi (control & synths), tempo templates, synchronized effects, plugins
  10. integrating computer music software with your DAW

books

  • bartlett – practical recording techniques
  • tape op magazine www.tapeop.com
  • bob katz – mastering audio

class requirements

  • 5% attendance + participation
  • 30% each of 3 assignments
  • 5% extra for leading a group

assignments

  • assignment one – layering and space – electronic music
  • assignment two – dynamics and highlight effects – acoustic instruments
  • assignment three – depth through reverb & structure through automation – mastering


MUS271A – Syllabus – Winter 2020

music 271a – electronic music techniques – winter 2020
cpmc 365 – wednesday 2 to 5
tom erbe – cpmc 254 – office hours: tuesday 1 to 3, thursday 10 to 2.
tre@music.ucsd.edu – https://tre.ucsd.edu/

In this class we will be talking about fundamental techniques in computer sound generation. I’ll spend one or two classes on each topic, and we will present works in progress every two weeks. The topics are most likely to be:

  1. oscillators, modulators, harmonics, sequencing, polyphony
  2. filters, amplifiers, distortion, waveshaping, harmonizers, MIDI
  3. sampling, looping, brassage, granular techniques
  4. delay, chorus, flange, reverb
  5. spectral techniques, convolution, time & pitch shifting
  6. spatialization, binaural filters, VBAP, ambisonics
  7. physical modeling, waveguides, modal synthesis

Additional topics can be added to this class. We will discuss your specific interests and needs in the first class. Throughout the class we will cover general Max/MSP programming techniques, controllers, interfaces, live instrument integration, etc.

readings (assigned each week)

weekly classnotes

  1. starting point
  2. oscillators – additive synthesis
  3. oscillators continued
  4. modulation 1 (amplitude)
  5. modulation 2 (frequency)
  6. delay & filtering
  7. sampling
  8. granular synthesis, spatialization 1
  9. spatialization 2 ambisonics, advanced granular
  10. digital reverb

“Williams Mix” is 1/15 second too short….

I was just looking over the structure of “Williams Mix”. It is divided into 11 sections that repeat a rhythmic structure of 5 6 16 3 11 5. In section 4, the 3 subsection is one inch short; I imagine that when they were calculating tape start times for editing, they forgot to carry the one. The 16 subsection is based on a 10.25 inch unit length, so it is 164 inches long: it starts at 1129.75 inches and ends at 1293.75 inches. The 3 subsection is based on the same 10.25 inch unit, so it should be 30.75 inches. However, it ends at 1323.5 inches, and 1323.5 – 1293.75 = 29.75. Earth shattering news?

At 15 inches per second, that missing inch is 1/15 of a second, so the piece shouldn’t be 4:15.8, but 4:15.866666…

analog #1 – noise study – rand, randx, randi – notes

I have been looking at James Tenney’s documentation for Analog #1 – Noise Study in “Computer Music Experiences 1961-1964.” I am hoping to recreate the instrument that Tenney used in this piece. As it was completed in 1961, the software he used was probably closer to Music III than it was to Music IV, though it seems he describes the instrument in Music IV terms.

In the “CME” document Tenney shows a unit generator (“U5”) called RANDI with two inlets, with the left controlling amplitude and the right controlling frequency or period. RANDI is an interpolating random number generator that generates numbers at an audio rate. This is probably the same unit generator that is called RAND in Max Mathews’ article “An Acoustic Compiler for Music and Psychological Stimuli”. In Tenney’s 1963 article “Sound-Generation by means of a Digital Computer” he describes a RAND and a RANDX: RAND is the interpolating random generator, and RANDX holds the random value. In the “Music IV Programmer’s Manual”, the two functions are RANDI and RANDH (interpolation and hold).

There are two parameters for all of these unit generators. The left input is the limit of the random number generation, or amplitude. The right inlet is described as a bandwidth control in the Music III document, where bandwidth = (sample-rate/2) × (right-inlet/512). If this inlet varies from 0 to 512, the bandwidth varies from 0 to the Nyquist rate. It is also described in the Music IV manual as a control of the rate of random number generation, where a new random number is generated every 512/right-inlet samples. This is pretty consistent throughout all of the documents, with one exception in Tenney’s 1963 article (figure 13), where he implies that the right inlet I controls period (period is 512/I).
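
For example, at the 10,000 Hz sample rate these programs ran at, a right-inlet value of 256 would give a new random number every 512/256 = 2 samples, or a bandwidth of 5000 × 256/512 = 2500 Hz; a value of 512 updates every sample, reaching the full 5000 Hz Nyquist limit.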

So to emulate RAND/RANDI, I need to have two inlets – one which determines amplitude (I1), and the other which determines frequency (I2), where frequency is on a linear scale from 0 to 512, and 512 corresponds to a new random number generated every sample. Looking at James Tenney’s various example patches, with continuous envelopes sent into the inlets, it seems likely that floating point numbers were used for both of these inlets.

This untested code will generate a new random number that goes from i1 to -i1 every 512/i2 samples. This is what is implied by the Music III and IV documentation, but I can’t be sure of the method of random number generation. Also, like the originals, this is sample rate dependent. It will only act like the original code if run at a sample rate of 10,000 Hz.


static t_int *randi_perform(t_int *w)
{
    t_randi *x = (t_randi *)(w[1]);
    t_float *freq = (t_float *)(w[2]);
    t_float *out = (t_float *)(w[3]);
    int n = (int)(w[4]);
    int sample = 0;
    float phaseincrement;

    while (n--)
    {
        // first we need to calculate the phase increment from the frequency
        // and sample rate - this is the number of cycles per sample
        // freq = cyc/sec, sr = samp/sec, phaseinc = cyc/samp = freq/sr
        if(*(freq+sample) != 0.0f)
            phaseincrement = *(freq+sample)/x->samplerate;
        else
            phaseincrement = x->x_f/x->samplerate;
        // now, increment the phase and make sure it doesn't go over 1.0
        x->phase += phaseincrement;
        while(x->phase >= 1.0f)
        {
            x->phase -= 1.0f;
            x->previous = x->current;
            // scale random()'s 0 to 2^31 range into -1.0 to +1.0
            x->current = random();
            x->current = (x->current/1073741824.0f) - 1.0f;
        }
        while(x->phase < 0.0f)
        {
            x->phase += 1.0f;
            x->previous = x->current;
            x->current = random();
            x->current = (x->current/1073741824.0f) - 1.0f;
        }
        // interpolate between the previous and current random values
        *(out+sample) = x->previous + x->phase * (x->current - x->previous);
        sample++;
    }
    return (w+5);
}

My next step is to build this into a PD external and test it in a patch similar to what is used in Tenney’s “Noise Study”.
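
For reference, here is an untested sketch of the wrapper code that perform routine needs to become a complete external. The t_randi fields match the ones used in randi_perform above; the object name randi~, the default frequency argument, and the rest of the scaffolding are my assumptions.

#include "m_pd.h"
#include <stdlib.h>    // for random()

static t_class *randi_class;

// this struct goes above randi_perform in the real file
typedef struct _randi
{
    t_object x_obj;
    t_float x_f;        // frequency from a float message, used when no signal is connected
    float samplerate;
    float phase;
    float previous;
    float current;
} t_randi;

static void randi_dsp(t_randi *x, t_signal **sp)
{
    x->samplerate = sp[0]->s_sr;
    // w[1] = x, w[2] = frequency signal, w[3] = output, w[4] = block size
    dsp_add(randi_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, (t_int)sp[0]->s_n);
}

static void *randi_new(t_floatarg f)
{
    t_randi *x = (t_randi *)pd_new(randi_class);
    x->x_f = f;
    x->phase = x->previous = x->current = 0.0f;
    x->samplerate = sys_getsr();
    outlet_new(&x->x_obj, gensym("signal"));
    return (x);
}

void randi_setup(void)
{
    randi_class = class_new(gensym("randi~"), (t_newmethod)randi_new, 0,
        sizeof(t_randi), 0, A_DEFFLOAT, 0);
    class_addmethod(randi_class, (t_method)randi_dsp, gensym("dsp"), A_CANT, 0);
    CLASS_MAINSIGNALIN(randi_class, t_randi, x_f);
}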

Update – 11/12/14

I have implemented the noise generator above and inserted it into a PD patch (I have edited the real code into this article). The results sound very much like “Noise Study”. Here is a single voice:

[screen shot of the single-voice PD patch – 2014-11-12]

Now to determine the overall structure.