Voice leading in C major triads accounting for inversions

Import music and graphic modules

In [1]:
# Import music modules
import itertools
from music21.scale import MajorScale
from music21.harmony import chordSymbolFigureFromChord
from numpy import inf
from numpy import linalg as la
import networkx as nx
from orbichord.chordinate import VoiceLeading
from orbichord.graph import createGraph, convertGraphToData
from orbichord.generator import Generator

# Import graphic modules
import pandas as pd
import holoviews as hv
from holoviews import opts, dim

hv.extension('bokeh')
hv.output(size=180)
defaults = dict(width=300, height=300, padding=0.1)
hv.opts.defaults(
    opts.EdgePaths(**defaults), opts.Graph(**defaults), opts.Nodes(**defaults))

Configure a chord generator using the seven pitches of the C major scale. Identify chords by their chord symbol figure, so that chords sharing the same pitches are treated as the same chord, and select only those chords that are triads. The resulting generator yields all the triads of the C major scale.

In [2]:
def combinator(iterable, dimension):
    return itertools.product(iterable, repeat = dimension)

scale = MajorScale('C')

chord_generator = Generator(
    pitches = scale.getPitches('C','B'),
    combinator = combinator,
    identify = chordSymbolFigureFromChord,
    select = lambda chord: chord.isTriad()
)
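
The select and identify hooks come straight from music21, so their behavior can be checked in isolation. The snippet below is only an illustrative sketch of those two functions, not part of the generator itself.

In [ ]:
from music21.chord import Chord

# isTriad() is the select criterion: keep only triadic combinations
print(Chord(['C4', 'E4', 'G4']).isTriad())   # expected True
print(Chord(['C4', 'D4', 'E4']).isTriad())   # expected False, not a triad

# chordSymbolFigureFromChord is the identify function: doublings collapse
# to the same chord symbol
print(chordSymbolFigureFromChord(Chord(['C4', 'E4', 'G4'])))        # expected 'C'
print(chordSymbolFigureFromChord(Chord(['C4', 'E4', 'G4', 'C5'])))  # expected 'C'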

Define an efficient voice-leading object, using the C major scale to define its steps and the max norm as its metric.

In [3]:
max_norm_vl = VoiceLeading(
    scale = scale,
    metric = lambda delta: la.norm(delta, inf)
)
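
With the max (infinity) norm, the size of a voice leading is the largest displacement of any single voice, measured in scale steps. A minimal numpy sketch of what the metric lambda computes on a hypothetical displacement vector:

In [ ]:
# Three voices: two stay put, one moves by a single scale step
print(la.norm([0, 0, 1], inf))  # max norm -> 1.0

# If any voice moves two steps, the distance is 2.0 no matter what the others do
print(la.norm([1, 0, 2], inf))  # -> 2.0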

Create a chord graph, passing as input the generator, the voice-leading object, and a tolerance function. The tolerance function provides the criterion for how far apart two chords can be and still be connected by efficient voice leading.

In [4]:
nodes, adjacencies, weights = createGraph(
    generator = chord_generator,
    voice_leading = max_norm_vl,
    tolerance = lambda x: x <= 1.0
)
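
For intuition about the tolerance, compare C major (C-E-G) with Am/C (C-E-A): only one voice moves, and only by one scale step, so the maximal displacement is 1 and the pair is connected. The sketch below encodes pitches as scale-degree indices purely for illustration; it is not orbichord's internal representation.

In [ ]:
import numpy as np

# Scale-degree indices in C major: C=0, D=1, E=2, F=3, G=4, A=5, B=6
c_major = np.array([0, 2, 4])   # C, E, G
a_minor = np.array([0, 2, 5])   # C, E, A (Am/C)

delta = a_minor - c_major          # only one voice moves, by one step
distance = la.norm(delta, inf)     # max norm -> 1.0
print(distance, distance <= 1.0)   # within tolerance, so the chords are linked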

Convert the chord graph into links. The label function names each chord by its chord symbol figure.

In [5]:
edges, vertices = convertGraphToData(
    graph = (nodes, adjacencies, weights),
    label = chordSymbolFigureFromChord,
    identify = chordSymbolFigureFromChord
)

links = pd.DataFrame(edges)
print(links.head(3))
   source  target  value
0       0       1    1.0
1       0       2    1.0
2       0       3    1.0
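
Because chordSymbolFigureFromChord is also used as the label function, inverted voicings of the same triad receive distinct names that include the bass note, which is why nodes such as Am/C and F/C appear below. A small illustrative check with plain music21:

In [ ]:
from music21.chord import Chord

# Root position vs. first inversion of the same triad get different figures
print(chordSymbolFigureFromChord(Chord(['C4', 'E4', 'G4'])))  # expected 'C'
print(chordSymbolFigureFromChord(Chord(['E3', 'G3', 'C4'])))  # expected 'C/E'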

Make a dataset from graph nodes.

In [6]:
nodes = hv.Dataset(pd.DataFrame(vertices), 'index')
nodes.data.head()
Out[6]:
   index    name  group
0      0       C      1
1      1    Am/C      1
2      2     F/C      1
3      3      Dm      1
4      4  Bdim/D      1

Additionally, we can now color the nodes and edges by their index and add some labels. The labels, node_color and edge_color options allow us to reference dimension values by name.

In [7]:
chord = hv.Chord((links, nodes))
chord.opts(
    opts.Chord(
        cmap='Category20',
        edge_cmap='Category20',
        edge_color=dim('source').str(), 
        labels='name', node_color=dim('index').str()
    )
)
Out[7]:

It is also possible to visualize chord voice leading using networkx. For this we need to convert the chord graph into an nx.Graph object.

In [8]:
graph = nx.Graph()
for vertex in vertices:
    graph.add_node(vertex['name'])
for edge in edges:
    source = vertices[edge['source']]['name']
    target = vertices[edge['target']]['name']
    weight = edge['value']
    graph.add_edge(source, target, weight=weight)
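
Once the chords live in an nx.Graph, standard networkx queries apply. For example, a quick look at the graph's size and at how many triads each chord reaches in one efficient voice-leading step (a small sketch; the numbers depend on the graph built above):

In [ ]:
print(graph.number_of_nodes(), 'chords,', graph.number_of_edges(), 'edges')

# Degree of the first few chords, i.e. the number of neighboring triads
for name, degree in list(graph.degree())[:5]:
    print(name, degree)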

The graph can then be visualized using a networkx circular layout.

In [9]:
gview = hv.Graph.from_networkx(graph, nx.layout.circular_layout)
gview.opts(node_size=40)
labels = hv.Labels(gview.nodes, ['x', 'y'], 'index')
(gview * labels.opts(text_font_size='10pt', text_color='white', bgcolor='white'))
Out[9]:

Conclusion

All the triads are only one step away from each other in pitch-class space, provided the voice leading is allowed to move several pitches at once. Each individual pitch, however, moves by at most one step of the major scale.
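
This claim can be checked directly on the networkx graph built above: if every triad is one efficient voice-leading step from every other, the graph is complete. A minimal verification sketch:

In [ ]:
# Every pair of distinct chords should be adjacent in a complete graph
complete = all(
    graph.has_edge(u, v)
    for u in graph for v in graph if u != v
)
print('all triads one step apart:', complete)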

In [ ]: