
[SIGMusic] Architecture thoughts



I think I have a credible architecture to connect our program to its
inputs and to ChucK. I don't know much about how we're planning to
construct the music generation routines, so someone who does can tell me
if those will mesh with my model.

Shared data:
map<int, Voice> voices
  maps a MIDI note number to a particular "voice" (the musical
  representation of a single character on screen).
misc. state variables
  e.g. beats per measure (time signature), intensity of action, etc.
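
Concretely, I am picturing something like this for the shared data (the
names are placeholders, not a settled interface):

  // Sketch of the shared data; Voice is whatever class the music
  // generation side ends up defining.
  #include <map>

  struct GlobalState {
      int beats_per_measure = 4;  // time signature
      int intensity = 0;          // e.g. intensity of on-screen action
      // ...other misc. state variables
  };

  class Voice;                    // one character's musical representation

  std::map<int, Voice*> voices;   // MIDI note number -> voice
  GlobalState state;

Since the MIDI input callback and the main loop both touch this data, we
would presumably also want a lock around it.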

Components:

1. MIDI input
This uses the ALSA sequencer API to listen for events; a callback (in
practice, probably a blocking read loop on its own thread) runs whenever
new data arrives. This callback will add or remove voices based
on note on/off events, set shared variables based on controller events,
and so forth.
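
Roughly what I have in mind, untested; the plain ALSA sequencer C API
doesn't really hand us a callback, so in practice this would be a
blocking read loop on its own thread, with add_voice/remove_voice/
set_state standing in for whatever hooks we actually write:

  #include <alsa/asoundlib.h>

  // Hypothetical hooks into the shared data, defined elsewhere.
  void add_voice(int note);
  void remove_voice(int note);
  void set_state(int param, int value);

  void midi_input_loop(snd_seq_t* seq)   // seq opened with snd_seq_open()
  {
      snd_seq_create_simple_port(seq, "input",
          SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
          SND_SEQ_PORT_TYPE_APPLICATION);

      for (;;) {
          snd_seq_event_t* ev = 0;
          snd_seq_event_input(seq, &ev);   // blocks until an event arrives
          switch (ev->type) {
          case SND_SEQ_EVENT_NOTEON:
              if (ev->data.note.velocity == 0)
                  remove_voice(ev->data.note.note);  // velocity 0 == note off
              else
                  add_voice(ev->data.note.note);
              break;
          case SND_SEQ_EVENT_NOTEOFF:
              remove_voice(ev->data.note.note);
              break;
          case SND_SEQ_EVENT_CONTROLLER:
              set_state(ev->data.control.param, ev->data.control.value);
              break;
          default:
              break;
          }
      }
  }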

2. Main loop
This loop must get woken up once per beat. I think that POSIX timers can
do this; if not, it is also fine to have the main loop simply sleep for
slightly less than a beat (see below). It queries every
voice to ask it "what notes are you playing at this particular beat?"
Because each voice is an object, voices can keep whatever state they
want from beat to beat. The public function that each voice exposes
accepts the current global state and returns a vector of notes.
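
An untested sketch of the Voice interface and the beat loop, reusing
GlobalState and the voices map from the sketch above; Note, send_note(),
and beat_length() are placeholders:

  #include <time.h>
  #include <map>
  #include <vector>

  struct GlobalState;             // from the shared-data sketch above
  struct Note { int pitch; int velocity; int duration_beats; };

  class Voice {
  public:
      virtual ~Voice() {}
      // "What notes are you playing at this particular beat?"
      // A voice can keep whatever state it wants between calls.
      virtual std::vector<Note> notes_for_beat(const GlobalState& state) = 0;
  };

  // Hypothetical helpers, defined elsewhere.
  void send_note(const Note& n, long beat);              // hand off to MIDI output
  struct timespec beat_length(const GlobalState& state); // current beat duration

  void main_loop(std::map<int, Voice*>& voices, const GlobalState& state)
  {
      for (long beat = 0; ; ++beat) {
          for (std::map<int, Voice*>::iterator it = voices.begin();
               it != voices.end(); ++it) {
              std::vector<Note> notes = it->second->notes_for_beat(state);
              for (size_t i = 0; i < notes.size(); ++i)
                  send_note(notes[i], beat);
          }
          struct timespec len = beat_length(state);
          nanosleep(&len, 0);     // or a POSIX timer, if that pans out
      }
  }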

3. MIDI output
The main loop fires MIDI events to ChucK. The ALSA sequencer API gives
us the ability to schedule events for a particular time, meaning that
there is nothing wrong with sending the MIDI data early.
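
For instance, scheduling a note-on at an absolute tick on our own queue
could look roughly like this (untested; seq, out_port, and queue would
be set up once at startup with snd_seq_create_simple_port,
snd_seq_alloc_queue, and snd_seq_start_queue):

  #include <alsa/asoundlib.h>

  void schedule_note_on(snd_seq_t* seq, int out_port, int queue,
                        unsigned int tick, int channel, int pitch, int velocity)
  {
      snd_seq_event_t ev;
      snd_seq_ev_clear(&ev);
      snd_seq_ev_set_source(&ev, out_port);
      snd_seq_ev_set_subs(&ev);                      // deliver to subscribers (ChucK)
      snd_seq_ev_set_noteon(&ev, channel, pitch, velocity);
      snd_seq_ev_schedule_tick(&ev, queue, 0, tick); // absolute tick on our queue
      snd_seq_event_output(seq, &ev);
      snd_seq_drain_output(seq);
  }

Mapping beats onto queue ticks (or onto real time with
snd_seq_ev_schedule_real) is where the tempo would live.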

At this point, you may be wondering "if our main loop runs fast, what
happens when a character leaves? Won't there be a bunch of messages
still in the queue?" Read on...

4. ChucK
Every instrument is active for the whole time; an instrument receives
MIDI events to play a note, to activate, and to deactivate. The activate
and deactivate messages are delivered immediately (ALSA can do this as
well), and the note events are delivered at the appropriate time. The
instrument keeps track of whether it is active or inactive; it simply
ignores any note events that arrive while it is inactive.
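
On our side, the split between immediate and scheduled delivery could
look like this (untested; encoding activate/deactivate as a controller
message is just one possibility, and the controller number is made up):

  #include <alsa/asoundlib.h>

  const int CC_ACTIVATE = 80;   // made-up controller number

  void set_instrument_active(snd_seq_t* seq, int out_port, int channel,
                             bool active)
  {
      snd_seq_event_t ev;
      snd_seq_ev_clear(&ev);
      snd_seq_ev_set_source(&ev, out_port);
      snd_seq_ev_set_subs(&ev);
      snd_seq_ev_set_controller(&ev, channel, CC_ACTIVATE, active ? 127 : 0);
      snd_seq_ev_set_direct(&ev);   // bypass the queue: deliver right now
      snd_seq_event_output(seq, &ev);
      snd_seq_drain_output(seq);
  }

Note events keep going through the scheduled path from component 3; the
instrument just drops the ones that arrive while its flag is off.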

I think that this architecture handles temporal issues correctly and
provides a workable dataflow from the MIDI input to ChucK. Is the method
of providing information to voices sufficiently expressive? Does anyone
see any logical flaws?

-- 
Jacob Lee <jelee2@xxxxxxxx>