Here is a C++ program that does the same as my Haskell program, only hundreds of times faster:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

template<typename N, typename R>
R dim(N qmax)
{
  // memoize Euler's totient function
  // https://stackoverflow.com/a/1019389
  uint64_t bytes = sizeof(N) * qmax;
  N *totient = (N *) malloc(bytes);
  if (! totient)
  {
    fprintf(stderr, "Out of memory allocating %llu bytes.\n", (unsigned long long) bytes);
    abort();
  }
  for (N q = 0; q < qmax; ++q)
  {
    totient[q] = 1;
  }
  for (N i = 2; i < qmax; ++i)
  {
    if (totient[i] == 1) // i is prime: untouched by any smaller prime
    {
      const N i1 = i - 1;
      #pragma omp parallel for if(qmax / i > 1000000)
      for (N j = i; j < qmax; j += i)
      {
        totient[j] *= i1; // factor (i - 1) for the first power of i
        N k = j / i;
        while (k % i == 0) // factor i for each further power of i
        {
          totient[j] *= i;
          k /= i;
        }
      }
    }
  }
  // bisection: find s with sum_{q >= 2} phi(q) * q^(-2s) == 1
  // (the sum f is decreasing in s)
  const N bits = 64;
  R lo = 0;
  R hi = 2;
  R s = -1;
  for (N bit = 0; bit < bits; ++bit)
  {
    const R s_old = s;
    s = (lo + hi) * 0.5;
    if (s == s_old) break;
    const R minus2s = -2 * s;
    R f = 0;
    #pragma omp parallel for if(qmax > 1000000) reduction(+:f)
    for (N q = 2; q < qmax; ++q)
    {
      f += totient[q] * std::pow((R) q, minus2s);
    }
    if (1 > f) hi = s;
    else if (f > 1) lo = s;
    else break;
  }
  free(totient);
  return s;
}

int main(int argc, char **argv)
{
  if (! (argc > 1)) return 1;
  const uint64_t terms = std::atoll(argv[1]);
  const uint64_t qmax = terms + 2;
  double s = 0;
  // dispatch to the narrowest index type that can hold qmax
#define CASE(N) \
  if (uint64_t(N(qmax)) == qmax) s = dim<N, double>(N(qmax)); else
  CASE(uint8_t)
  CASE(uint16_t)
  CASE(uint32_t)
  CASE(uint64_t)
  {
    std::fprintf(stderr, "ERROR: too big\n");
    return 1;
  }
  std::printf("%.18f\n", s);
  return 0;
}
```

Output data:

```
#terms      dimension
1           0.000000000000000028
10          1.060777931192381729
100         1.201406411704652566
1000        1.229033087191561346
10000       1.236215602473136332
100000      1.238361746259634799
1000000     1.239043488255890502
10000000    1.239265789608039459
100000000   1.239339074831453225
1000000000  1.239363342275131341
2000000000  1.239366746491320281
3000000000  1.239368275599665337
4000000000  1.239369175035730519
```

I think I could go to 6000000000 terms with 32 GB RAM if I adjusted the totient array to use 32-bit storage for the first 4G entries and 64-bit for the rest.

**However**, I think I'm computing something different from "the dimension of the boundary of all the child bulbs". I think it's more like "the dimension of the union of all the parabolic bond points between the child bulbs, plus the Feigenbaum points at the ends of the infinite bifurcation cascades", which could be less. The parabolic bond points are countable, so their contribution to the dimension is 0. There might be other points that I'm not counting. If I'm not undercounting after all, I think it would equal the dimension of the boundaries of all the hyperbolic components in all the minis, because the minis are countable and there is no overlap (or is there?), so dim(boundary × mini centers) = dim(boundary) + dim(mini centers) = dim(boundary) + 0.
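To state explicitly what the program is solving for (my reading of the code): using the heuristic that the child bulb at internal angle p/q has diameter on the order of q^{-2}, and that there are φ(q) such bulbs for each q, the bisection finds the s satisfying the Moran-type equation

$$ f(s) \;=\; \sum_{q=2}^{q_{\max}-1} \varphi(q)\, q^{-2s} \;=\; 1, $$

and that s is the value printed in the table above.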

The dimension of the boundary of the Mandelbrot set minus the interiors of all the hyperbolic components (discs and cardioids) is still 2, because the interior is disjoint from the boundary. Subtract all the strictly preperiodic Misiurewicz points as well and you still get 2, because there are only countably many of them (so their dimension is 0). I'm not sure whether also subtracting the boundaries of the hyperbolic components still gives 2: if the dimension of those boundaries is less than 2 you get 2, otherwise you could get anything.

**EDIT**: it seems there is an altogether better way:

https://math.stackexchange.com/a/3594890