2014 – A performance-composition improvisation and visualization system based on real-time modeling of user input. See the video for a demo of an improvised part based on user input through a virtual MIDI keyboard.
Code:
This interactive performance tool is a program that builds Markov models of live MIDI input, plays back material generated from those models, and visualizes what it is playing.
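As a rough illustration of that core loop, here is a minimal first-order Markov chain over MIDI pitches: it learns transitions from live input and samples them for replay. The class and member names are hypothetical stand-ins, not MidiMonster's actual internals.

```cpp
#include <cstdlib>
#include <map>
#include <vector>

class PitchChain {
public:
    // Record a transition from the previously heard pitch to the new one.
    void learn(int pitch) {
        if (havePrev_) transitions_[prev_].push_back(pitch);
        prev_ = pitch;
        havePrev_ = true;
    }

    // Sample the next pitch to play; fall back to the current pitch
    // if this state has never been observed.
    int next(int current) const {
        auto it = transitions_.find(current);
        if (it == transitions_.end() || it->second.empty()) return current;
        const std::vector<int>& outs = it->second;
        return outs[std::rand() % outs.size()];
    }

private:
    std::map<int, std::vector<int>> transitions_;
    int prev_ = 0;
    bool havePrev_ = false;
};
```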
The user can pair a real or virtual MIDI keyboard with this program. What the program plays depends on what the user plays; the user should feel free to play a lot, or just sit back and listen. The user can also control which MIDI instrument is played back by invoking the program with an instrument number, or by using the arrow keys while the program is running.
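A minimal sketch of the instrument-selection step, assuming FluidSynth's standard C API (fluid_synth_program_change); the command-line parsing and channel choice here are illustrative assumptions, not the program's actual interface.

```cpp
#include <fluidsynth.h>
#include <cstdlib>

int main(int argc, char** argv) {
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    // A soundfont and an audio driver must also be set up before anything
    // is audible, e.g. fluid_synth_sfload(synth, "instruments.sf2", 1);

    // Instrument number from the command line, as described above,
    // clamped to the MIDI program range 0-127.
    int program = (argc > 1) ? std::atoi(argv[1]) : 0;
    if (program < 0) program = 0;
    if (program > 127) program = 127;
    fluid_synth_program_change(synth, 0, program);  // channel 0

    // While running, arrow-key handling would adjust `program` and
    // call fluid_synth_program_change again.

    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```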
Musical information from two Markov maps is displayed using the color, location, and size of circular nodes that float upward over time. This visual representation of sound events orients the audience to the ongoing conversation between human and computer. Ideally, this program would be performed with a Disklavier.
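One plausible mapping from a note event to a floating node is sketched below; the specific choices (pitch to horizontal position, velocity to radius, color to distinguish the human and computer voices) are assumptions for illustration, not the program's documented mapping.

```cpp
struct Node {
    float x, y;      // screen position
    float radius;    // size derived from note velocity
    float r, g, b;   // color marking which voice produced the note
    void update(float dt) { y += 0.1f * dt; }  // float upward over time
};

// Map a MIDI note-on (pitch 0-127, velocity 0-127) to a node.
Node makeNode(int pitch, int velocity, bool fromHuman) {
    Node n;
    n.x = pitch / 127.0f;                            // pitch -> location
    n.y = 0.0f;                                      // enter at the bottom
    n.radius = 0.01f + 0.05f * (velocity / 127.0f);  // louder -> larger
    n.r = fromHuman ? 1.0f : 0.2f;
    n.g = 0.4f;
    n.b = fromHuman ? 0.2f : 1.0f;
    return n;
}
```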
The software combines several libraries: RtAudio, RtMidi, FluidSynth, and OpenGL. The MidiMonster class manages their interactions. All incoming notes are parsed into five maps: pitch, interval size (note to note), duration, volume, and onset overlap. The Markov order of each map is easily adjustable. The maps are filled by the user's MIDI input and read by MidiMonster's improvisor; only the live player can add data to the maps.
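Generalizing the first-order sketch above, one of these maps could be represented with an adjustable Markov order roughly as follows; ParamMap and its members are hypothetical names, used for each of the five parameters.

```cpp
#include <cstddef>
#include <cstdlib>
#include <deque>
#include <map>
#include <vector>

class ParamMap {
public:
    explicit ParamMap(std::size_t order) : order_(order) {}

    // Called only for live MIDI input: slide the context window and
    // record the transition from the previous context to the new value.
    void learn(int value) {
        if (context_.size() == order_)
            table_[std::vector<int>(context_.begin(), context_.end())].push_back(value);
        context_.push_back(value);
        if (context_.size() > order_) context_.pop_front();
    }

    // Read by the improvisor: sample a continuation of the given context,
    // falling back to its last element if the context is unseen.
    int sample(const std::vector<int>& ctx) const {
        auto it = table_.find(ctx);
        if (it == table_.end() || it->second.empty())
            return ctx.empty() ? 0 : ctx.back();
        return it->second[std::rand() % it->second.size()];
    }

private:
    std::size_t order_;
    std::deque<int> context_;
    std::map<std::vector<int>, std::vector<int>> table_;
};
```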
The maps sometimes forget information in order to make the system’s response to the user more dynamic. There is some randomization in note replay for the same reason.
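The forgetting mechanism is not specified here, but one simple scheme consistent with the description would cap each transition list and discard the oldest observations, so the model tracks what the player has done recently:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: remember a new continuation, forgetting the oldest
// one once the list exceeds a fixed capacity.
void rememberWithForgetting(std::vector<int>& outs, int value, std::size_t cap = 32) {
    outs.push_back(value);
    if (outs.size() > cap)
        outs.erase(outs.begin());  // forget the oldest continuation
}
```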
The original idea for Improvisor, sketched below, was to enable a performer-composer to create sound and interactions via drawing. Visual contours would create an interlocking pattern between the human and computer voices, illustrating the variety of exchanges that can take place during improvisation: complementary, interfering, building, repetitive, and so on.