mus172 – syllabus

music 172 - computer music ii - spring 2021 - zoom
class tu/th 2:00-3:20, lab th 3:30 - 4:20
office hour tu/th 3:30-5:00, https://ucsd.zoom.us/my/tomerbe
email tre@ucsd.edu
tas

Topic, Teaching Schedule and Class Notes

In Music 172 we will look at the application of computer music theory in electronic musical instruments and audio effects. We will cover a wide range of topics, and you will be expected to go further in your own work (through the assignments). As we cover each topic, we will look at the signal processing theory behind the technique and discuss how to implement it in a way that is musical and usable. Almost all of the topics in class will be demonstrated using Pure Data. It will help you understand each topic if you create the PD patch as it is being discussed.

The objective of this course is to gain familiarity with the implementation of common computer music techniques, and the theory behind them. Using Pure Data, you will become familiar with a tool that can be used for prototyping musical instruments, game sound and sound effects, as well as for audio analysis and research into synthesis and audio processing methods.

During the course, the topics covered will likely be:

  • drum machines: sampling, looping, granular techniques, modal synthesis
  • drum machines: EQ, amplifiers, distortion
  • drum machines: sequencing, beat clock, MIDI, controllers
  • analog synthesis: oscillators, waveshaping, harmonics
  • analog synthesis: modulation, wave-folding
  • analog synthesis: filters, polyphony
  • effects: delay, chorus, flange, phase shifting
  • effects: vocoding, harmonizers
  • effects: reverb, spatialization, VBAP, ambisonics

Class notes will be developed as we reach each topic, and will be posted in the Modules section of Canvas.

Grading

  • There will be 4 assignments and a final project. Your projects should generally take the form of a piece of music, or a software/equipment design. Regardless, you will need to demonstrate the techniques asked for in the assignment. Quality, design, musicality and originality all count.
  • All of your work should be original. If you study with your classmates, you should each complete an original patch; collaboration is not allowed. Late assignments will be accepted, but will lose 1 point each week (assignments are worth 15 points). Late assignments can be handed in up to the end of week 10. If you fall behind, please ask for help! Assignments should be ZIP compressed and include all patches, abstractions, externals and sound files. They should be handed in via Canvas.
  • You will get a participation score based on the number of on-topic questions you ask during class (chat questions count). If you are attending asynchronously, you may use Canvas to submit your questions. 10 questions are required for the quarter.
  • If you would rather use Max/MSP, you may use it instead of Pure Data for your assignments.

Percentage breakdown:

60% – 4 assignments (15 points each)

30% – final project

10% – attendance and participation

Text and Required Materials

The Theory and Technique of Electronic Music – Miller Puckette (required & online: http://msp.ucsd.edu/techniques.htm)

Designing Sound – Andy Farnell (recommended: http://aspress.co.uk/ds/about_book.html)

Pure Data (required software: http://msp.ucsd.edu/software.html)

abstractions to download (this list will grow over the quarter):

  • output
  • minikey.pd
  • spectdisp.pd
  • playsound~.pd
  • recordsound~.pd

MUS160B – Weeks 8, 9, 10

I think it would be most useful if I met with each of you for a half hour during the last 3 weeks, to discuss your final concepts and production plan. Please send me the times you are available – it might be best to indicate 2 or 3 times, in case another student claims a time first.

Also, note that the week 8 and week 10 openings are not completely during our class time. I have to be in interviews with prospective new ICAM faculty from 11 to 12 those weeks.

  • W8 – 2/27, 10:30 – 11: Joaquín Villegas
  • W8 – 2/27, 12 – 12:30: Jennifer Ablay
  • W8 – 2/27, 12:30 – 1: Kenneth Doan
  • W8 – 2/27, 1 – 1:30: Chloe Bari
  • W8 – 2/27, 1:30 – 2: Caleb Hess
  • W9 – 3/6, 11 – 11:30: Tracy Levick
  • W9 – 3/6, 11:30 – 12: Erick Morales
  • W9 – 3/6, 12 – 12:30: Erick Garcia
  • W9 – 3/6, 12:30 – 1: Alfred Valencia
  • W9 – 3/6, 1 – 1:30: Forest Reid
  • W9 – 3/6, 1:30 – 2: Brandon Huynh
  • W10 – 3/13, 10:30 – 11: Camden Greenwood
  • W10 – 3/13, 12 – 12:30: Salvador Zamora
  • W10 – 3/13, 12:30 – 1: Shelby Tindall
  • W10 – 3/13, 1 – 1:30: Mao-Shin Hsieh
  • W10 – 3/13, 1:30 – 2: Hilda Liu

Music 160A meetings

project development meetings – bring research materials and your bibliography

week 4 – wednesday jan 30th

  • 11:00 – Levick
  • 11:20 – Garcia
  • 11:40 – Doan
  • 12:00 – Valencia
  • 12:20 – Hess
  • 12:40 – Reid
  • 1:00 – Zamora
  • 1:20 – Bari

week 5 – wednesday feb 6th

  • 11:00 – Greenwood
  • 11:20 – Ablay
  • 11:40 – Huynh
  • 12:00 – Morales
  • 12:20 – Villegas
  • 12:40 – Hsieh
  • 1:00 – Tindall
  • 1:20 – Liu

 

Some notes on For Ann (rising)

James Tenney composed For Ann (rising) in 1969 and made several realizations with tape and signal generators. In 1991 I was asked to engineer a compilation of his early computer and electronic music, “Selected Works 1961-1969”. Instead of using one of the tape versions of For Ann (rising), we decided to realize it digitally in Csound. Jim described the piece to me over the phone. The piece consists of 240 sine wave sweeps, each of which lasts 33.6 seconds and rises 8 octaves (4.2 seconds per octave). Each sweep has a trapezoidal amplitude envelope which rises from 0.0 to 1.0 gain over the first two octaves, stays at 1.0 for the 4 middle octaves, and drops from 1.0 to 0.0 over the top two octaves of each sweep. A new sweep starts every 2.8 seconds. The initial Csound orchestra and score were simply:

sr=44100
kr=44100
ksmps=1
instr 1
kf expon 40, 33.6, 10240
ka linseg 0, 8.4, 2000, 16.8, 2000, 8.4, 0
a1 oscil ka, kf, 1
out a1
endin
----
f1 0 16385 10 1
i1 0 42
i1 2.8 42
i1 5.6 42
and so on.....

The tuning difference between successive sweeps is a 12tet minor 6th: a new sweep starts every 2.8 seconds and each sweep rises one octave every 4.2 seconds, so successive sweeps are offset by 2.8/4.2 = 2/3 of an octave, or 8 semitones.

For the final version on the CD, Jim asked me to extend each sweep by 4.2 seconds (1 more octave). We moved the start point to A0 (27.5 Hz), so the end of each sweep is A9 (14080 Hz). This increased the length of the piece slightly: 240 * 2.8 + 33.6 + 4.2 = 709.8 seconds, or 11:49.8.

I have recently put together a new realization of For Ann (rising) in Pure Data. I am using metro, delay and vline~ to generate the sweeps in this version, as these objects (unlike many objects in PD) are sample accurate, and should give consistent tuning accuracy. For this realization, I have also added the capability to perform the piece with the original 12tet minor 6th, a just 1.6 (8/5) ratio, or a golden mean (phi) ratio between successive sweeps.

The PD patch is available here, for those who want to hear the piece.

mus177/206 – updated fuzztrem, other JUCE details

Here is the updated fuzztrem plugin, with 3 parameters and controls added: ClassTest1126.

Sample code to parse MIDI – noteOff, noteOn and bendPitch are my own methods – you will need to write your own versions of these methods to connect the MIDI data to your process.

    MidiBuffer::Iterator it(midiMessages);
    MidiMessage msg(0x80, 0, 0, 0);
    int pos;
    int32_t note;

    // start with the MIDI processing
    while(it.getNextEvent(msg , pos))
    {
        if(msg.isNoteOn())
        {
            note = msg.getNoteNumber();
            noteOn(note);
        }
        if(msg.isNoteOff())
        {
            note = msg.getNoteNumber();
            noteOff(note);
        }
        if(msg.isPitchWheel())
        {
            bendPitch(msg.getPitchWheelValue());
        }
    }

Simple code to save your current settings to a session (pulled from my plugin ++pitchsift):

void PitchsiftAudioProcessor::getStateInformation (MemoryBlock& destData)
{
    // You should use this method to store your parameters in the memory block.
    // You could do that either as raw data, or use the XML or ValueTree classes
    // as intermediaries to make it easy to save and load complex data.
    
    ScopedPointer<XmlElement> xml (parameters.state.createXml());
    copyXmlToBinary (*xml, destData);
}

void PitchsiftAudioProcessor::setStateInformation (const void* data, int sizeInBytes)
{
    // You should use this method to restore your parameters from this memory block,
    // whose contents will have been created by the getStateInformation() call.
    
    // This getXmlFromBinary() helper function retrieves our XML from the binary blob..
    ScopedPointer<XmlElement> xmlState (getXmlFromBinary (data, sizeInBytes));

    // Apply the restored XML back to the value tree; without this, nothing is restored.
    if (xmlState != nullptr)
        parameters.state = ValueTree::fromXml (*xmlState);
}

mus177/206 – introduction to JUCE

This week we will get started using JUCE to create audio plugins. I will show you both the audio processing and graphics layers, and how they connect. To start, you need to download the JUCE SDK from www.juce.com; join with either the Education or Personal license.

Along with these lessons, you should look at the following YouTube videos from The Audio Programmer (Joshua Hodge): https://youtu.be/7n16Yw51xkI. These cover most relevant aspects of JUCE in detail, and are much better than the tutorials on the JUCE website.

Here is the class example from week 7. It was pointed out to me that JUCE has deprecated the existing createAndAddParameter method (the one with all the arguments). The valid way to use this method now is to create a RangedAudioParameter and hand it to createAndAddParameter. Here is gain from the class example: I create an AudioParameterFloat (a subclass of RangedAudioParameter) which gets added to the value tree (std::make_unique is a safer substitute for new).

 parameters.createAndAddParameter (std::make_unique<AudioParameterFloat> (
     "gain",                                   // parameter ID
     "gain",                                   // parameter name
     NormalisableRange<float> (-60.0f, 12.0f), // range
     0.0f,                                     // default value
     "dB"));                                   // label