Some inequalities for faster integer division by a constant

*** This is currently a work in progress, and TODO shows me and any readers where more work is needed.

??? is a reminder to me to check that something I believe to be true really is true.

TODO put page links to quickly get to the conditions TODO maybe a short contents table with those links TODO maybe add a brief history of developments of the theory at the end

Modern compilers – for example LLVM’s Clang and GNU’s gcc – and some libraries – for example ridiculousfish’s libdivide – make integer division by a constant divisor faster by replacing a relatively slow division operation floor(n/d) with floor(n*m/2**s), where:
* floor(X) is the greatest integer less than or equal to X
* 2**s is 2 exponentiated to the power s
(ui) Given integers 1<=d<=nmax, we have integers m & s so that m/2**s is a close enough approximation to 1/d that:
(ui’) for integers 0<=n<=nmax we have:
(ui”) floor(n/d) = q = floor(n*m/2**s)

The point is that dividing by 2**s can be done with bit shifts, which are fast, so instead of a slow division operation we can use a faster multiplication and bit shifts. (This only applies if we know d in advance, and expect to divide by d several times: while the calculation floor(n*m/2**s) is fast, it does take some time to find m & s.)
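As a toy illustration of the identity (a sketch in Python; the brute-force search is only for exposition – the conditions below avoid the exhaustive check entirely):

```python
def find_m_s(d, nmax):
    # Brute force: try s = 0, 1, 2, ... with m = ceiling(2**s / d), and
    # accept the first s for which floor(n*m/2**s) == floor(n/d) holds
    # for every 0 <= n <= nmax (checked exhaustively -- exposition only).
    s = 0
    while True:
        m = -(-(2**s) // d)          # ceiling(2**s / d)
        if all((n * m) >> s == n // d for n in range(nmax + 1)):
            return m, s
        s += 1

m, s = find_m_s(7, 1000)
# now a multiply and a shift, (n * m) >> s, replaces n // 7 for 0 <= n <= 1000
```

Here `>> s` is the bit shift that implements the division by 2**s.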

But how to find suitable m/2**s? Here we:

* Give an unusual unified approach to finding and proving sufficient conditions for m/2**s. We believe this gives short simple proofs which easily explain why the approximations work, and why they eventually fail for “too large” n.

* Derive and prove a condition which seems to be novel, and which:
** gives a simpler and faster algorithm for finding the best possible m/2**s than the algorithm currently used??? by LLVM;
** may be nearly as fast as the algorithm currently used by GCC, which usually, but not quite always, finds the best possible m/2**s.

* Suggest a modification to the algorithm currently used by GCC which makes that faster.

* List several practical and/or theoretical conditions due to others for finding m/2**s, and use the unified approach to give simple, very short, proofs of these. This list is also useful for showing the similarities and differences of these conditions. (A few of these conditions are in our opinion not useful for either actually finding m/2**s or explaining how and why the approximation works – we think none of the conditions is good at both – but we have included them because they are in the literature.)

The unified approach – an example:
(ui.n) if n>0 a sufficient condition for (ui”) is: 1/d <= m/2**s < (1 + 1/n) * 1/d
* instead of floor(n/d) use floor(n*m/2**s)
* which is equivalent to instead using floor(n * d*m/2**s / d)
* let K = d*m/2**s so equivalently instead use floor(n*K/d)
* then it is obvious that a sufficient condition for (ui”) is n<=n*K<n+1
* so if n>0 then a sufficient condition for (ui”) is 1 <= K = d*m/2**s < 1 + 1/n
* dividing all parts of that inequality by d gives (ui.n)
We believe this is an elegant sufficient condition which clearly shows how good the approximation is to 1/d. Moreover, it can be easily modified to give a similar necessary & sufficient condition for (ui), which we do below.
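The sufficient condition (ui.n) can be checked exhaustively over a small range (a sketch; d = 10, s = 12 and the bound on n are arbitrary choices, and exact rational arithmetic avoids floating-point doubt):

```python
from fractions import Fraction

def ui_n_holds(d, s, n_limit):
    # For every 1 <= n < n_limit and every m with
    # 1/d <= m/2**s < (1 + 1/n) * 1/d, check floor(n*m/2**s) == floor(n/d).
    b = 2**s
    for n in range(1, n_limit):
        # every m satisfying the condition lies in this range
        for m in range(b // d, b // d + b // (d * n) + 2):
            if Fraction(1, d) <= Fraction(m, b) < (1 + Fraction(1, n)) * Fraction(1, d):
                if (n * m) // b != n // d:
                    return False
    return True

assert ui_n_holds(10, 12, 300)
```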

This uses (with hindsight: it wasn’t found by following Jacobi’s advice!) a maxim attributed to Carl Jacobi that “man muss immer umkehren” (one must always invert), although he meant it as advice for finding new research topics.
For dividing n by d, instead of:
* finding conditions for changing to dividing n by (an approximation to d)
* we find conditions for changing to (dividing an approximation to n) by d
but doing it in a way in which the latter is equivalent to the former.

We believe this approach gives simpler and more obvious proofs of conditions for (ui) than those in the literature that we are aware of, and below we give those proofs.


* HD10 means chapter 10 of the second edition of “Hacker’s Delight” (published in 2012), which discusses this, and proves necessary & sufficient conditions. (References to inequalities and conditions in that chapter are like this: (27); that is, the identifiers are numbers inside round brackets, which is how the late Henry S Warren wrote them in his book.)

* Here we are mostly concerned with the mathematics underlying this, not with practical implementations, although we don’t completely ignore those. So this omits some important details on how to use m/2**s once found: for these you need to read other articles and/or books like HD10. At the end of this post are some links. TODO

* The somewhat clumsy system used to label the conditions is to allow additions to be made without having to renumber, and to at least hint at a condition’s originators.

* all “variables” are integers except those in upper-case like E, E’, E”, H, K, etc, which can be rational or real;
* in the mathematics “/” is rational or real division, so 17/3 is not evaluated as 5 or 6; we always explicitly use floor(expression) to convert an expression to an integer. (But in any computer code examples “/” means integer division rounding towards zero, so in code 17/3 is evaluated as 5 and -17/3 as -5.)

* qc = floor((nmax+1)/d)
* nc = qc*d – 1; then nmax – (d-1) <= nc <= nmax;
* or we can use the other HD10 formula for nc, and then qc = (nc+1)/d.

Measures of how good the m/2**s approximation is to 1/d, and 2**s/m is to d:
* delta = m*d – 2**s, so m = (2**s + delta) / d and 2**s = m*d – delta;
* d – 2**s/m = (m*d – 2**s) / m = delta/m
* m/2**s – 1/d = (m*d – 2**s) / (d*2**s) = delta/(d*2**s)
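For concreteness, a quick check of these identities with arbitrary small values d = 7, s = 6 (exact rational arithmetic, so the equalities are tested exactly):

```python
from fractions import Fraction

d, s = 7, 6
b = 2**s                         # 64
m = -(-b // d)                   # ceiling(64/7) = 10
delta = m * d - b                # 70 - 64 = 6
assert m == (b + delta) // d and b == m * d - delta
assert d - Fraction(b, m) == Fraction(delta, m)                   # d - 2**s/m
assert Fraction(m, b) - Fraction(1, d) == Fraction(delta, d * b)  # m/2**s - 1/d
```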


* We mostly, but not always, use nmax = 2**w – 1 where w >= 1 is an integer, often 32 or 64 for bit sizes of integers. But always nmax>=d. (We could also develop the theory for nmax<d, but then it gets messy, and the algorithms for finding m/2**s are made more complicated, both of which are unnecessary because if nmax<d then floor(n/d)=0 when 0<=n<=nmax.)

* In practice m = ceiling(2**s/d), where ceiling(X) is the least integer greater than or equal to X, but that is not essential for the mathematics.

* It is also not essential for the mathematics that we use m/2**s as an approximation to 1/d, although that is what we always do in practice. For the theory we can almost always use integers m/b instead of m/2**s, where b does not have to be a power of 2. All the measures of approximation just above work just as well if we use a general b instead of 2**s.

* For the mathematics it is not even essential that m & b are integers: in almost all of it we can instead approximate 1/d by 1/(d-E), where E is zero or a smallish positive real number. Then we have:
* m/2**s = m/b = 1/(d-E) and 2**s/m = b/m = d-E
* E = delta/m

* A reason for the apparently pointless generalisation (and idiosyncrasy – we admit that using 1/(d-E) or m/b instead of m/2**s looks more than somewhat peculiar compared with probably everything else written on this and similar topics, especially when in practice we only use m/2**s) is a curiosity of this subject: ideas & theorems & proofs for seeing what’s really going on and why it works aren’t directly good for actually finding m/2**s, while different but equivalent ideas & theorems & proofs for actually finding m/2**s are less good at easily showing what’s happening. We believe generalising makes clearer what’s really happening. (Also using m/b saves a few “**” or superscripts and makes the formulas somewhat less cluttered.) And if a practical computer is ever built using a non-binary base with an operation equivalent to bit shifting, then we’ll be ready to implement faster division by a constant divisor!

(The Bourbaki method for mathematically capturing a lion in a desert:
The capture of a lion in the desert is a special case of a far more general problem. Formulate this problem and find necessary and sufficient conditions for its solution. The capture of a lion is now a trivial corollary of the general theory, which on no account should be written down explicitly.
(Online here, but originally #27 in “15 New Ways to Catch a Lion” on page 11 of Manifold (18, Spring 1976). Manifold was a magazine mostly, but not exclusively, for mathematics undergraduates, and was based at, but was not officially part of, the Mathematics Institute of Warwick University.) Here we follow the Bourbaki method, but with a crucial difference: we may stray from the path of allegedly true “Bourbakianism”, and explicitly apply the general theory to the actual problem!)

* For practical algorithms to find m/2**s use necessary & sufficient conditions (ui.qc), (ui.HD.27), and sufficient conditions (ui.GM), (ui.HD.30), (ui.Alv).

These conditions are for unsigned integers, but can be adapted for signed integers. TODO: maybe do that at the end

*** Six necessary & sufficient – and hence equivalent – conditions for (ui)

(ui.nc) For (ui) it is necessary & sufficient that: 1/d <= m/b < (1 + 1/nc) * 1/d.
In fact we prove this for 0<=n<=nc+d-1, where nc<=nmax<=nc+d-1.
Proof: as for (ui.n) above, let K = d*m/b
* at n=d it is necessary that d<=n*K=d*K, or 1<=K=d*m/b;
* at n=nc it is necessary that nc*K<nc+1=qc*d, or K=d*m/b < 1 + 1/nc;
* nc<=K*nc<nc+1; if 0<=n<=nc multiply by n/nc, so n<=n*K<n+1, which is (ui.n);
* if d=1 then nmax=nc and we are done
* if d>=2 then for 1<=i<=d-1<=nc we have i<=i*K<i+1
so for qc*d=nc+1<=n=nc+i<=nc+d-1
we have qc*d<=n<=n*K=(nc+i)*K<nc+1+d-1+1=nc+1+d=(qc+1)*d,
so qc<=n*K/d<qc+1 and floor(n*K/d)=qc, and we are done.

This condition is not practical, but it has that simple proof, and it and similar conditions are very useful for showing how close m/b = 1/(d-E) must be to 1/d, and for comparing sufficient conditions to necessary & sufficient conditions. All other conditions used (whether equivalent, or only sufficient) are easily derived from it.

For all the remaining necessary & sufficient conditions,
from (ui.nc) 1/d<=m/b is equivalent to b<=m*d, and so to 0<=m*d-b=delta,
and also to 1/d<=1/(d-E) and d-E<=d, and so to 0<=E.

(ui.E) For (ui) it is necessary & sufficient that: 0<=delta, or 0<=E;
and: E=delta/m<1/qc, or qc*delta/m = qc*E < 1;
where instead of floor(n/d) we use floor(n*m/b)=floor(n/(d-E))=floor(n*K/d),
where K = d*m/b = d/(d-E).

Proof: from (ui.nc) m/b = 1/(d-E) < (1 + 1/nc) * 1/d
is equivalent to nc/(d-E) < (nc+1)/d = qc
and nc = qc*d-1 < qc*(d-E) = qc*d – qc*E
and so to qc*E<1. The rest follow easily.

These aren’t useful practically, but are useful as intermediate conditions, and for explaining how and why the approximations work. They may be novel: they are not in HD10 (published in 2012), or on a related website archived here (I suggest using snapshots *before* 2019).

Condition (ui.E) is a key to seeing why suitable m/2**s work up to nc+d-1,
and why they may go wrong at nc+d = (qc+1)*d – 1.
Consider two equivalent variants for calculating q, using as an example:
nmax=23; d=10; qc=2; nc=19; E=0.4; D=d-E=9.6; K=d/(d-E)=1.041…;

* 0…8_9_9.6=1*D_10…18_19=nc_19.2=2*D_20…28_28.8=3*D_29
q=floor(n / (d-E)): this is approximating d with d-E, and is like using a measuring rod of length d-E instead of the correct length d. As we lay rods of length d units alongside rods of length d-E, the end of the latter rods drifts back from the end of the former rods; eventually the end of the latter rods is 1 or more units (including parts of a unit) behind the end of the former, and the integer that is 1 unit behind the end of the d-unit rods is the least integer for which: floor(n/d)<floor(n/(d-E))

* 0…9_9.4=9*K_10…19=nc_19.8=19*K_20…28_29_29.2=28*K_30_30.2=29*K_31
q=floor((n*d/(d-E)) / d): this is approximating n with n*d/(d-E)=n*K, and is like stretching the line with integer points by a factor d/(d-E) before dividing by d, making it clear why we need nc*d/(d-E)<nc+1.
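The worked example can be verified exactly (a sketch using Python’s Fraction; here qc*E = 0.8 < 1, so by (ui.E) the approximation works up to nc+d-1 = 28 and first fails at 29):

```python
from fractions import Fraction
from math import floor

nmax, d = 23, 10
qc = (nmax + 1) // d             # 2
nc = qc * d - 1                  # 19
E = Fraction(2, 5)               # 0.4, so qc*E = 0.8 < 1 and (ui.E) holds
K = d / (d - E)                  # d/(d-E) = 25/24 = 1.041...
# floor(n*K/d) matches floor(n/d) all the way up to nc + d - 1 = 28 ...
assert all(floor(n * K / d) == n // d for n in range(nc + d))
# ... and first goes wrong at n = nc + d = 29 = (qc+1)*d - 1
assert floor((nc + d) * K / d) == qc + 1 and (nc + d) // d == qc
```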

The next three conditions for (ui) use increasingly large integers.
The first two give practical algorithms for finding the best possible m/2**s: the second is in HD10 and is used??? in LLVM??? and libdivide???; the first may be novel, and is simpler and faster than the second.

(ui.qc) For (ui) it is necessary & sufficient that 0<=delta and qc*delta<m.
Proof: from (ui.E) use that E = delta/m
Or from (ui.HD.27) use the direct proof that these two are equivalent: this isn’t a circular argument because we also prove (ui.HD.27) directly from (ui.nc).
This gives a simple algorithm to find m/2**s which easily fits within w bits: this is because if m = ceiling(2**s / d) then qc*delta <= nmax. The code (see Annex.ui.qc) is shorter, simpler, and faster than for (ui.HD.27). Neither the condition nor the algorithm is in HD10, and both may be novel.
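A sketch of the algorithm (ui.qc) suggests (the function name and the upward iteration over s are my choices for illustration – this is not code from any compiler): iterate s upwards with m = ceiling(2**s/d) until qc*delta < m.

```python
def magic_qc(d, nmax):
    # Find the least s such that m = ceiling(2**s/d) satisfies (ui.qc):
    # 0 <= delta and qc*delta < m, where delta = m*d - 2**s.
    qc = (nmax + 1) // d
    s = 0
    while True:
        m = -(-(2**s) // d)          # ceiling, so 0 <= delta < d automatically
        delta = m * d - 2**s
        if qc * delta < m:
            return m, s
        s += 1

m, s = magic_qc(7, 2**16 - 1)
assert all((n * m) >> s == n // 7 for n in range(2**16))
```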

(ui.HD.27) For (ui) it is necessary & sufficient that 0<=delta and nc*delta<b.
Proof: from (ui.qc): qc*delta < m
is equivalent to (qc*d – 1) * delta = nc*delta < m*d – delta = b.
Or from (ui.nc): m/b < (1 + 1/nc) * 1/d is equivalent to nc*m/b < (nc + 1) / d
and nc*m*d < (nc + 1) * b = nc*b + b
and nc*m*d – nc*b = nc * (m*d – b) = nc*delta < b
This is HD10 (27), we believe due to Henry S Warren, but with 2**p generalised to b. This gives code which can be made to fit within w bits at the expense of some complexity because mostly nc*delta>nmax: for example see the code in HD10, which is used as the basis for code in LLVM’s Clang compiler???.

(ui.GMW) For (ui) it is necessary & sufficient that 0<=delta and nc*m<qc*b.
Proof: from (ui.qc) qc*d*m – m = nc*m < qc*d*m – qc*delta = qc*b
Basically part of step 6 of “Algorithm 1 Granlund-Montgomery-Warren” in the LKK paper???  We believe this condition doesn’t have any practical uses, because the values used in it are so large that code fitting in w bits would need to be more complicated than for (ui.HD.27).

(ui.HD.25) For (ui) it is necessary & sufficient that b/d <= m < (b/d)*(nc+1)/nc.
Proof: from (ui.nc) multiply all expressions by b, and rewrite (1 + 1/nc) as (nc+1)/nc.
This is HD10 (25), we believe due to Henry S Warren, with 2**p generalised to b. We believe this is only useful as an intermediate result in the HD10 proof of (ui.HD.27).

*** Three sufficient conditions for (ui) with b = 2**s and nmax = 2**w – 1

These conditions are sufficient, but not necessary, and (ui.GM) & (ui.HD.30) are equivalent. All give algorithms to find m/2**s.

Pedantic correction – that’s not quite correct as implemented in GCC: XXXXX TODO this is because GCC wants, entirely reasonably, to use the same function to find m/2**s for unsigned and signed integers, and to do that it needs to exclude delta=0, which is not done by (ui.GM). However, that only makes a difference if abs(d) is an integer power of 2, for which the smallest values of delta making the condition correct are 0 or d (for all other d the smallest values of delta satisfy 0<delta<d), and for these GCC just uses shifts, and doesn’t need to find a multiplier m, so for all practical purposes (ui.GM) and (ui.HD.30) *are* equivalent.

(ui.GM) For (ui) it is sufficient that 2**s <= m*d <= 2**s + 2**(s-w)
or (ui.GM’) 1/d <= m/2**s <= (1 + 1/2**w) * 1/d
Proof: from (ui.nc) use nc <= nmax < nmax+1.
This is Theorem 4.2, condition (4.3) in Granlund & Montgomery’s 1994 paper.

Note that (ui.nc) & (ui.GM’) are similar, the differences being that (ui.nc) uses nc, but (ui.GM’) uses nmax+1, and in (ui.nc) the right side comparison is “<” but in (ui.GM’) it is “<=”.

HD10 says that (ui.HD.30) – and so also (ui.GM) – gives code which is simpler than code from (ui.HD.27), but which nearly always finds the minimal m/2**s. Condition (ui.GM’) helps explain this: nc < nmax+1, but by at most d, so for small d by not much. HD10 has an example: for w = 32 bits the smallest divisor with (ui.GM) & (ui.HD.30) not giving the minimal m/2**s is 102,807. (TODO: show my proof of a lower bound – 2**((w+1)/2) Z??? – for d below which (ui.GM) & (ui.HD.30) always find the same m/2**s as (ui.qc) & (ui.HD.27).)

But: (ui.qc) gives code for the minimal m/2**s which is almost as simple as that using (ui.GM), and which is at worst only a little slower than using (ui.GM), and might even be slightly faster. It is, for now, an open problem how code from (ui.qc) compares in performance with code from (ui.GM).

(ui.HD.30) For (ui) it is sufficient that 0 <= delta <= 2**(s-w)
Proof: from (ui.HD.27) use that nc <= nmax < nmax+1.
This is HD10 (30), I believe due to Henry S Warren.

Since (ui.GM) and (ui.HD.30) are derived from equivalent conditions by using nmax+1 instead of nc, it’s arguable that this in itself shows that (ui.GM) & (ui.HD.30) are equivalent, but we will prove it directly. From (ui.GM’) we have:
m*d*(nmax+1) <= b * ((nmax+1) + 1), so (m*d – b) * (nmax+1) = delta*(nmax+1) <= b,
and dividing by nmax+1 = 2**w gives delta <= b/2**w = 2**(s-w), which is (ui.HD.30).

The algorithm from Granlund & Montgomery (used in GCC), from their (ui.GM), is elegant, but we believe it can be improved. As currently used in GCC it starts with s = w + ceiling(log2(d)), which guarantees that the inequality is true, and iterates downwards until it finds the least s for which the inequality is true. Instead we suggest starting with s = w + ceiling(log2(d)) – 1 and iterating downwards from there; with this it is possible that the inequality is false at the starting s, but in that case using (starting s) + 1 makes the inequality true, and it is easy to get the m for s+1 from the m for s. This suggested change means that in the GCC compiler we can avoid using wide_int integers, which seem quite slow compared with 32 or 64 (etc) bits fixed width integers.
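A sketch of the downward search (my reconstruction of the idea for illustration, not GCC’s actual code): start at s = w + ceiling(log2(d)), where (ui.GM) always holds, and step s down while it still holds.

```python
def magic_gm(d, w):
    # Granlund-Montgomery-style search: m = ceiling(2**s/d) gives 2**s <= m*d,
    # so checking (ui.GM) reduces to checking m*d <= 2**s + 2**(s-w).
    def ok(s):
        m = -(-(2**s) // d)
        return m * d <= 2**s + 2**(s - w)
    s = w + (d - 1).bit_length()     # w + ceiling(log2(d)): (ui.GM) holds here
    while s > w and ok(s - 1):       # step down while the condition survives
        s -= 1
    return -(-(2**s) // d), s

m, s = magic_gm(10, 16)              # the classic w = 16 multiplier for d = 10
assert all((n * m) >> s == n // 10 for n in range(2**16))
```

Here `(d - 1).bit_length()` computes ceiling(log2(d)) for integers d >= 1.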

And if we use the (ui.qc) improvement of the HD10 code used in LLVM, then it’s possible that iterating upwards to find m/2**s may be faster than iterating downwards as is done in GCC, albeit (to repeat myself) the GCC method is an elegant algorithm. This may be especially true for relatively small divisors.


TODO give my lower bound for divisors for which (vii) & (viii) always give the minimal m/2**s


(ui.Alv) s = w + ceiling(log2(d)) and m = ceiling(2**s/d)
Proof: delta < d, so delta <= 2**ceiling(log2(d)) = 2**(s-w), which is (ui.HD.30);
and TODO prove always 2**w <= m < 2**(w+1)
HD10 says this is due to Alverson, and that it is simpler than the HD10 methods because it doesn’t need trial and error to find s, so is more suitable for building in hardware, but that because always 2**w <= m < 2**(w+1), the code using m/2**s for division is often more complicated than that using (ui.qc) or (ui.HD.27) or (ui.GM) or (ui.HD.30).
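A sketch of (ui.Alv) (the function name is my choice; the example checks that m is a (w+1)-bit multiplier, as HD10 notes):

```python
def magic_alverson(d, w):
    # (ui.Alv): s = w + ceiling(log2(d)) and m = ceiling(2**s/d);
    # no trial and error needed to find s.
    s = w + (d - 1).bit_length()
    m = -(-(2**s) // d)
    return m, s

m, s = magic_alverson(7, 16)
assert 2**16 <= m < 2**17            # m always needs w+1 bits
assert all((n * m) >> s == n // 7 for n in range(2**16))
```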


(ui.RF) TODO apply idea of (ui.i) & (ui.qc) to ridiculousfish’s proof for his libdivide n+1 algorithm, and: does rf prefer to be cited as rf or r_f? TODO ??? Maybe mention this may be somewhere in HD (but I now can’t find it) but not in HD10 and so not in context of faster int div, so …..)

TODO  give an account of finding qc & nc for finding m/2**s as approximations to c/d so that floor((n*c + h) / d) = floor((n*m + H) / 2**s)   TODO  thought: might we want the remainder for this without the quotient? If so could the ideas in LKK 2018 paper be used?

TODO  other similar ideas, eg 64 bits nanoseconds converted to complete seconds or minutes or hours or days, and how for one of those by doing a pre-shift for dividing by a power of 2, and then doing a multiply and add, then another shift, we can get a quotient which is either exact or is only -1 out.

TODO Algorithm 1 Granlund-Montgomery-Warren” in the LKK paper?  maybe do a bit more on the conditions in the different steps of this, for each first as d-E &/or b/m, then as m/2**s and 0 <= n < 2**W

TODO maybe add more on maximum & minimum & miscellaneous stuff, etc, in HD10


Proto Mega Dance Party XXX!

Chok Notes to Self

These are feedback forms from an amazing, life-changing workshop on making solo work run by performance artist, Bryony Kimmings.

At the end of the week, we all showed a few minutes’ worth of material. This is some gorgeously generous feedback from the group. Am indebted to Bryony and the team. This was the start of what I thought would be my first full length solo show.

I’ve come a long way!


This is my brain on dogs

Chok Notes to Self

So I just had a look at a pile of feedback from peers I got via a Bryony Kimmings workshop a few years ago. As I work towards my new show, kind of my first full length theatre show, this feedback from amazing artists is reminding me of what I have to offer and how I can be of service. Am posting it here as a reminder to self.
Bryony Kimmings Chisenhale Workshop
What did you feel?
Happy, excited, admiration
You’d be great on a night out
Empathy re. Letter
Confusion, nostalgia, sadness
Excited, intrigued, wanted more of your craziness
Laughter, intrigued – what was really going on. This woman! Does she KNOW?
Happy, uncertainty
Happy but needed to know context – wanted to know more about cultures and home
Excitement, amusement, some grossed-outness
Amazement, kind-of-fear, softness, humour, pathos
Three things you liked?
Smile then sudden turn…



Edinburgh Fringe Festival 2019

Initially a quick post so I can link to three shows which I recommend, one of which I haven’t seen yet.

* In the 1999 London Mime Festival I saw a wonderfully inventive & funny show “The Letter” by Paolo Nani. I went because an inspiration for it was the French writer Raymond Queneau’s “Exercises in Style”.
Here’s a review of that 1999 performance:
I just found out, completely by chance, that it’s on in Edinburgh until Sunday 25 August 2019, so I for one will be going to see it:
That link has links to other 2019 reviews, at least one of which (I think) gets the basic plot a bit wrong, but they are all very positive.
It’s at 17:30 and lasts about one hour.

The rest of my suggestions are contemporary dance shows.

* “From the Top” by Victor Fung (Hong Kong) at DanceBase. I saw this yesterday: a good dance piece and a funny take on a somewhat dysfunctional relationship between a choreographer and two dancers rehearsing a new piece. It’s about 30 minutes long, but to me it felt like only 15 to 20 minutes. (That *is* a compliment.)
At 18:30 until Sunday 25 August 2019.
A review:

* “Floating Flowers” (Taiwan) at DanceBase. I saw this yesterday – it’s excellent, but is unsurprisingly sold out for the rest of the run. I suggest getting to the venue at least an hour – better, at least 90 minutes – before the 15:30 show start to queue for returns (that’s what I did). It’s about an hour long, and is on until Sunday 25 August 2019.


* Between 1996 and 1998 I saw three excellent dance pieces (two solos for the choreographer herself, and one for four female dancers) by Mui Cheuk Yin (also from Hong Kong). So I was delighted to find that she’s in this year’s Fringe. I’d be going anyway, but I’ve also been told by another choreographer that it’s good. I haven’t seen this yet, and judging by the two reviews I’ve skimmed it might not appeal to people who don’t see a lot of contemporary dance.
“It Will Come Later” at 13:00 for about one hour until Sunday 25 August 2019.

* I’ll be going to several Korean events, all of which are described here:


Han-Na Chang conducting Tchaikovsky’s Symphony No 5 in E minor

From the Cambridge University Press book “Essays in Music” by Hans Keller – an extract from a review of Wilhelm Furtwängler conducting in Mysore in 1950:
As was to be expected, the most overpowering interpretations in the Philharmonia Orchestra’s first two concerts under the greatest developer and deepest experiencer among living conductors were those of the symphonic and evolving works: above all the 5th Beethoven, but also the 5th Tchaikovsky, Brahms’s Haydn Variations, and The Hebrides, with whose development Mendelssohn seemed, for objectively incomprehensible reasons, dissatisfied. The Tchaikovsky became not only a great, but even an immaculate work: does then its weakness simply consist in its occasionally hiding its strengths, which it needs a Furtwängler to uncover?

I considered Tchaikovsky’s Symphony No 5 in E minor to be a somewhat lesser work than his 4th and 6th symphonies (partly because that’s what the conventional thinking seemed to be), until the BBC Proms 2014 when a performance by the Korean cellist and conductor Han-Na Chang with the Qatar Philharmonic Orchestra changed my mind. I was promming in the Royal Albert Hall arena, and apart from it being a wonderful performance it was also a delight to watch her conducting.

There is a video of the performance on YouTube, which I highly recommend:

@0m00s: 1 Andante – Scherzo. Allegro con anima
@15m00s: 2 Andante cantabile con alcuna licenza
@28m10s: 3 Valse. Allegro moderato
@34m05s: 4 Finale. Andante maestoso

I’ve watched this video several times, and the Finale more than ten times, yet it wasn’t until watching the Finale again on 2019-08-16 that I noticed she was conducting this symphony from memory and not using a score: in the hall and on previous occasions of watching this video I must have been so preoccupied with watching her conducting that I couldn’t see that!

Two of the comments are from people who saw Han-Na Chang conduct this symphony in St Louis and Oslo, and from those and my experience I think if you have an opportunity to see her conduct this symphony (or Tchaikovsky’s 4th and 6th symphonies, or anything else for that matter) then don’t miss it.
* I just watched her repeat this amazing performance last night with the St. Louis Symphony here. Truly an electrifying experience that brought the audience to its feet as soon as the last chord died away. The rest of the all-Russian program here was equally impressive. She’s going places, mark my words.
* Fantastic conductor and person .. We enjoyed the moment, admired her gestures and a seldom sensible conduction of Tsjaikowsky symphony 5th this week. And the excellent Oslo Philarmonic Orchestra

Some reviews which explain why this was a special performance:

Have you ever felt a sense of elation at the end of Tchaikovsky’s Fifth Symphony? In many ways, I find the Fifth as intensely painful and autobiographical as the “Pathétique”, with the finale seeing the composer nobly ‘putting on a brave face’, accepting fate’s hand stoically. The last pages often play through a mist of tears, the triumphant march ringing helplessly hollow. Not so here. Han-Na Chang, conducting the Qatar Philharmonic in its BBC Proms debut, lit the biggest firecracker under the Allegro vivace of the fourth movement I’ve ever heard. It took me completely unawares – and judging by the scramble in the strings, possibly the orchestra too – as the movement raced away at lightning speed. Any sense of Tchaikovsky’s fragility and self-doubt were swept away in a joyous romp to the finishing line … her conducting this afternoon was quite outstanding. Her style is often understated: small, precise beats and cues, much of it led from the wrist rather than the shoulder, with glowing beams of encouragement to her players. Her gestures grew more expansive in the Tchaikovsky, where tempo choices, shaping of phrases and application of rubato showed imagination. After a darkly lugubrious opening on clarinets, the Allegro con anima found Chang whipping up a maelstrom in the brass and timpani. The Andante cantabile was characterized by a gorgeously played horn solo. Brass often tended towards the excitable and raucous – there was no lack of passion in this exhilarating interpretation. … [I recommend reading the whole review.]

Tchaikovsky’s Fifth Symphony was also interpreted brilliantly by Chang and her performers. … There was a real feeling of symbiosis here; Chang directed, but only enough so that the musicians gave their best. It was definitely a team effort, not the ‘conductor knows all’ approach which can very often be stifling.


… With her intensely dynamic conducting style, Chang was faithful to the emotional panache of the score, summed up in the thrilling conclusion …



A comment I made in 2014 or 2015 on a YouTube upload of a performance of Sibelius’s astounding “Luonnotar”, a concentrated (only ten minutes long) tone poem for soprano and orchestra. (Wagner lovers temporarily avert your eyes now: I have been known to say that Wagner takes over ten hours for what are basically squabbles between some gods and giants, but Sibelius takes only ten minutes for the creation of the world!)

I was searching YouTube for a performance of Luonnotar to send a link to some people, and quite by chance decided to try this upload in the search listing. That was a very good random choice because for many years I have thought this recording of Luonnotar (Taru Valjakka [Finnish] soprano, conductor Paavo Berglund [Finnish], Bournemouth Symphony Orchestra [English]) is probably the best I have heard.

For a long time I lived in Bournemouth [England] and went to almost all the Bournemouth Symphony Orchestra (BSO) concerts. Living in London in the late 1990s, I found a CD of music by Sibelius with the BSO conducted by Berglund. One piece was Luonnotar, so I bought the CD because from when I first knew about Sibelius’s Luonnotar I thought it had to be an amazing piece, but I couldn’t recall hearing a performance on radio that had lived up to my expectations.

I can’t remember how many times I immediately replayed this performance when I first listened to the CD, but it was a lot, and it is still for me the performance that probably gets closest to the heart of Luonnotar.

One reason I like it is that I think it has a deliberate roughness, which seems highly appropriate for Luonnotar – see just below. (I should add that in my experience the Bournemouth Symphony Orchestra is very good: in fact I have aphoristically described them as “the best Russian orchestra west of Oslo”, a reference to their strength in playing 19th and 20th century Russian music, and to recordings of Tchaikovsky symphonies by the Oslo Philharmonic conducted by Mariss Jansons.)

Luonnotar on Wikipedia
Part of this Wikipedia article on Luonnotar used to be like this:
“… composed for soprano and orchestra in 1913. … the mythical [Kalevala] birth of the earth and sky … becomes an intense Sibelian metaphor for the inexorable force – even the terror – of all creation, including that of the artist. One of the composer’s most compelling works, it alternates between two musical ideas. … In its relentless physicality and eruptive violence, Luonnotar is unlike anything else in the entire repertory.”

Unfortunately that fell prey to Wikipedia’s understandable policy that Wikipedia articles must not contain original research, or indeed comment.

This is a pity, because those words seem to me to epitomise Luonnotar, so what I would like to happen is that someone who can write well about music would quote those words as having at one time been on Wikipedia and explain why they are accurate, and then we can hopefully get the Wikipedia entry changed to quote an outside source quoting and explaining a deleted Wikipedia entry! Maybe Anne Ozorio? (See just below.)

Discussion on the Wikipedia article on “Luonnotar”
A quote from a Wikipedia contributor: “this doesn’t capture the elemental flavour of the work at all, but it’s a start”

Musicweb-International article on “Luonnotar”
Appreciating Sibelius’s Luonnotar Op. 70 by Anne Ozorio
Luonnotar (Op. 70, 1913) must be one of the most distinctive pieces in the repertoire. It transcends both song and symphonic form. … The orchestra may play modern instruments and the soprano may wear an evening gown, but ideally they should convey the power of ancient, shamanistic incantation, as if by recreating by sound they are performing a ritual to release some kind of creative force. … But it is the voice part which is astounding. Technically this piece is a killer – there are leaps and drops of almost an octave within a single word. When Luonnotar calls out for help, her words are scored like strange, sudden swoops of unworldly sounds supposed to resound across the eternal emptiness. These hint of the wailing, keening style that Karelian singers used. This cannot be sung with any trace of conservatoire trained artifice: the sounds are supposed to spring from primeval forces. … Singing over the cataclysmic orchestral climax that builds up from “Tuuli kaatavi, tuuli kaatavi!” must be quite some challenge. The sonorous wall of sound Sibelius creates is like the tsunami described in the text, and the soprano is riding on its crest. … As Erik Tawaststjerna said, “the soprano line is built on the contrast between … the epic and narrative and the atmospheric and magical”. …


How to Warm Up for a Future of Disruption

via How to Warm Up for a Future of Disruption
