
score2sig: How to Combine tscore and tsig for "Score Controlled Synthesis"




1    Fundamentals
2    Synthesis and Musical Score Data
2.1  A Simple Synthesis Model

^ToC 1 Fundamentals

"Score Controlled Synthesis", means to create a "fixed" recording of a synthetic sound, the structure of which is controlled by a certain musical score.

This discipline is one of the oldest in computer music, and it is the basic paradigm of ancient programs like "MUSIC I" through "MUSIC V", and the resulting "Csound".

In the context of bandm software, this is accomplished by a combination of tscore, for generating the control information, and tsig, for performing the sound synthesis.

(This dichotomy could appear to be a continuation of outdated concepts, but we already support a much higher degree of modularity, aiming at full compositionality in the future.)

The theoretical and technical possibilities of score controlled synthesis are unlimited. They may seem overwhelming to the musician, who has to impose some order and structure. For this purpose, several very different approaches have been developed over the history of analog and digital sound synthesis, each of them constructing a symbolic algebra, a mental model.
Only the presence of such an abstracting model allows decisions to be taken and notated, variants to be explored systematically, increasing complexity to be managed, and results to be fixed.

One of these settings is basically an abstraction from traditional instrumental performance and from sound synthesis with voltage-controlled synthesizers:

  1. The overall sound result of a certain realization of the score is defined as the acoustic sum of the single "voices".
  2. Every voice is a sequence of (non-overlapping) "events".
  3. Every event has a well-defined extension in the performance time of the realization.
  4. Every event is an acoustic phenomenon defined by a basic sound material and some amplitude envelope.
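
To make the model concrete, the four points above could be rendered as data types roughly like the following sketch; the class and field names are illustrative assumptions, not the actual tscore/tsig API:

    import java.util.List;

    /** Sketch only: one event of the model, points 3 and 4. */
    class Event {
        double start;      // onset in performance time (seconds)
        double duration;   // well-defined extension in time (seconds)
        double frequency;  // the basic sound material, here reduced to a base frequency
        double[] envelope; // some amplitude envelope, e.g. ADSR parameters
    }

    /** Sketch only: one voice, point 2: a sequence of non-overlapping events. */
    class Voice {
        List<Event> events; // ordered by start time, pairwise non-overlapping
    }

    /** Sketch only: point 1: the result is the acoustic sum of all voices. */
    class Score {
        List<Voice> voices;
    }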

This model is simple enough to be well manageable in very different production contexts, and to serve as a basis for further complications in subsequent definition steps, e.g. by concatenation:
An overall "reverb" device (or some other post-processing device) can be fed with the output of a whole voice ensemble, while in parallel its control parameter settings are treated and controlled by one dedicated parallel "voice" of its own.
Or a circuit which realizes a certain voice takes as its basic sound material, for forming one single event, the output of some other voice which runs much faster and plays a multitude of events ("granular synthesis").

So the processing networks which correspond to these voices can be combined along very different axes. These complexities remain manageable only because the basic model on which the user mentally operates is a simple one.

Let's keep this possibility of arbitrary further complexity in mind when developing a simple example of score-controlled synthesis in the next sections.

^ToC 2 Synthesis and Musical Score Data

In classical "Csound" synthesis, the parameters of the score are the columns of a table which (a) in each column and each line contains exactly one numeric floating point datum, and which (b) in the "orchestra" setting, i.e. in the synthesizing circuit, controls some totally arbitrary "electric" parameter. The semantics and effects of each column of the score are determined by the inlet of the circuit into which it is fed.
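
For illustration only, such a score table might look roughly like the following; the column names here are pure assumptions, since the meaning of each column is fixed only by the wiring of the circuit:

    start   dur    amp      freq    p6
    0.00    1.00   8000.0   440.0   0.3
    1.00    0.50   6000.0   660.0   0.5
    1.50    0.50   6000.0   550.0   0.7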

In contrast, when looking at musical scores in "classic western notation" / "CWN", we have parameters which are defined by a long tradition of instrumental performance. But of course, with this type of score too, in the end those "electric" parameters are required for controlling the synthesizing algorithm.

This gap must be bridged.

From the viewpoint of creative aesthetics, this can be done by any arbitrarily defined translation process.
From the viewpoint of economic desires (i.e. the wish to replace expensive musicians by cheap computers), this translation must try to follow the traditional meaning of the parameters as closely as possible.
From the viewpoint of empirical research, there is e.g. the "rubato" project, which tries to generate performance parameters out of data resulting from automated music analysis.

Roughly speaking, a score is the source of parameters from four different categories:

  1. Constitutive
    These are parameters like pitch, duration, and intensity, which "constitute" an event, which are necessary anyhow, and which are valid for any kind of interpretation of the score text.
  2. Contextual
    These parameters are implicit in the score. E.g. metric weight: a note on a "heavy" beat should be articulated differently than a note on a "light" beat. A note requested to be legato can be articulated quite differently when followed by a small or a large interval.
  3. Analysed
    This is the above-mentioned idea of the "rubato" project: to generate performance parameters by very different kinds of musical analysis.
  4. Dedicated
    Finally, dedicated parameter tracks can be adjoined to any voice, which specifically control certain inputs of the synthesizer. These are "electric" parameters, notated directly in the score.

Please note that these categories are only roughly defined and quite dubious from a scientific standpoint. In many cases the borders cannot be drawn cleanly!
E.g., a "legato" parameter is a constitutive articulation parameter, but it is not realizable without contextual knowledge. And the different metric weights of notes: are they simply contextual, or already analysed? And content mark-up like "H_ _H" or "CH_ _CH" (see Schönberg and Berg), when fed directly into some synthesizer input as an electric gate-like signal, can we call it dedicated?

So this categorization can only serve as a rough orientation. But it is useful for describing our overall strategy when combining the score level and the sound synthesis circuits:

  1. First we translate some constitutive parameters in their "natural" way,
  2. and combine them with some contextual parameters.
  3. These are fed into the synthesis circuits.
  4. All inputs of the circuits which are still open can be controlled directly by dedicated electric parameters, added freely to the score in further parameter tracks.
  5. As long as the impression of the musical interpretation must be improved by further sound processing, the corresponding devices can be added to the synthesis circuit. The newly required control parameters must be treated by one of these methods.
  6. Only in a last step will we try to integrate some of the "rubato" results, as mentioned above.
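
A minimal sketch of step 1, the "natural" translation of two constitutive parameters; the formulas (equal temperament for pitch, roughly 3 dB per dynamic step) and all names are illustrative assumptions, not the actual tscore/tsig API:

    class NaturalTranslation {
        /** Pitch, given as a MIDI-style key number, to frequency in Hz (a' = 440 Hz). */
        static double pitchToFrequency(int key) {
            return 440.0 * Math.pow(2.0, (key - 69) / 12.0);
        }

        /** Dynamic level (0 = ppp ... 7 = fff) to a linear amplitude factor,
            assuming about 3 dB difference per dynamic step. */
        static double dynamicToAmplitude(int level) {
            return Math.pow(10.0, (level - 7) * 0.15); // 0.15 decades = 3 dB
        }
    }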

^ToC 2.1 A Simple Synthesis Model

For synthesis, let's take a comparatively simple example, "wave table synthesis":

    +-----------------------------------------------------+
    | sequencer:  play score for one(1) voice             |
    |                                                     |
    | f              gate     a-d-s-r                     |
    +-|---------------------------------------------------+
      |                    |  |||||||||||
      V                    |  |||||||
  +-----+                  |  ||||
  | phi |---\              |  ||||
  +-----+   |              |  ||||
            |              V  VVVV
            |             +--------+
            |             | ADSR   |
            |             +--------+
            V                  V
      +-----------------+     +------+            +-----+
      |  table look up  |---->| AM   |----------->| ___ |
      |                 |     +------+        +-->| \   |
      +-----------------+  V     VVVV         |+->| /__ |
                          +--------+          ||  +-----+
                          | ADSR   |          ||
            |             +--------+          ||
            V                  V              ||
      +-----------------+     +------+        ||
      |  table look up  |---->| AM   |--------+|
      |                 |     +------+         |
      +-----------------+  V         VVVV      |
                          +--------+           |
                          | ADSR   |           | 
                          +--------+           |
                               V               |
      +-----------------+     +------+         |
      |  noise generator|---->| AM   |---------+
      |                 |     +------+
      +-----------------+

This synthesis works as follows:

  1. Every event from the score level is realized by one single basic frequency value and one "gate" signal in time.
  2. Two table look-ups deliver two different sound signals, e.g. sine and square, or more complicated mixtures, both with this basic frequency. [1]
  3. Both of these signals get their own "ADSR" amplitude modulation, started by the event's trigger. Thus a blending from one sound into the other happens during the event.
  4. Additionally, a noise signal can be added, which also has its own amplitude curve.
  5. The parameters of the ADSR amplitude curves can change with every event individually.
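
The following sketch renders one single event according to this circuit: a shared phase generator ("phi"), two table look-ups with their own ADSR amplitude modulation, an enveloped noise signal, and the summing output stage. All names, and the linear ADSR formula, are illustrative assumptions, not the actual tsig API:

    import java.util.Random;

    /** Sketch only: a linear ADSR envelope, restarted by each event's gate. */
    class Adsr {
        final double a, d, s, r; // attack, decay, release times (s); sustain level
        Adsr(double a, double d, double s, double r) {
            this.a = a; this.d = d; this.s = s; this.r = r;
        }
        /** Envelope value at time t within an event of total length dur
            (assuming a, d, r > 0 and a + d + r <= dur). */
        double at(double t, double dur) {
            if (t >= dur - r) return s * Math.max(0.0, (dur - t) / r); // release
            if (t < a)        return t / a;                            // attack
            if (t < a + d)    return 1.0 - (1.0 - s) * (t - a) / d;    // decay
            return s;                                                  // sustain
        }
    }

    class WaveTableEvent {
        static final double RATE = 44100.0; // sample rate in Hz

        /** Render one event; frequency and duration come from the score level. */
        static double[] render(double freq, double dur,
                               double[] tableA, double[] tableB,
                               Adsr envA, Adsr envB, Adsr envNoise) {
            int n = (int) (dur * RATE);
            double[] out = new double[n];
            Random noise = new Random();
            double phase = 0.0, step = freq / RATE;        // "phi", phase in [0,1)
            for (int i = 0; i < n; i++) {
                double t = i / RATE;
                double sigA = lookUp(tableA, phase) * envA.at(t, dur);
                double sigB = lookUp(tableB, phase) * envB.at(t, dur);
                double sigN = (2.0 * noise.nextDouble() - 1.0) * envNoise.at(t, dur);
                out[i] = sigA + sigB + sigN;               // the summing stage
                phase = (phase + step) % 1.0;
            }
            return out;
        }

        static double lookUp(double[] table, double phase) {
            return table[(int) (phase * table.length)];
        }
    }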

Please note that often the following variant is more appropriate:

       |              |  ||||                       ||||
       |              V  VVVV                       VVVV
       |             +--------+                  +--------+  
       |             | ADSR   |                  | ADSR   |
       |             +--------+                  +--------+
       V                  V                         V 
   +---------------+     +------+         +------+ +------+
   | table look up |---->| AM   |-------->| SUM  |-| AM   |-----
   |               |     +------+     +-->+------+ +------+
   +---------------+      |           | 
                          V           |
                         +--------+   |
                         | 1.0-in |   |
                         +--------+   |
                          V           |
   +---------------+     +------+    /
   | table look up |---->| AM   |----
   |               |     +------+   
   +---------------+  

Here the first ADSR/AM combination controls the MIXTURE of the two wave forms explicitly, and the second ADSR/AM controls the overall loudness of the result.
"Mathematically", both variants are equivalent, but "ergonomically", i.e. w.r.t. the way the parameters have to be calculated, there is a big difference!



[1] Of course, square waves etc. must be band-limited at the Nyquist frequency in order not to cause distortions.
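
One common way to fulfil this requirement, sketched under the same assumptions as above: construct the wave table by additive synthesis and simply omit all harmonics at or above the Nyquist frequency (the normalization factor 4/pi of an exact square wave is left out):

    /** Sketch only: a band-limited square wave table for a given base frequency. */
    static double[] bandLimitedSquare(int size, double baseFreq, double sampleRate) {
        double[] table = new double[size];
        for (int k = 1; k * baseFreq < sampleRate / 2.0; k += 2)  // odd harmonics only
            for (int i = 0; i < size; i++)
                table[i] += Math.sin(2.0 * Math.PI * k * i / size) / k;
        return table;
    }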




