Python – Arbitrary-precision algorithms in JIT compilation functions


Arbitrary-precision algorithms in JIT compilation functions

When I use Numba in Python, I know that if I try to JIT-compile a function that uses arbitrary-precision floats (mpmath) in a loop, it won't compile in nopython mode and ends up no faster than the plain Python version. My question is about the Julia package DifferentialEquations.jl. Its homepage says it supports BigFloats and ArbFloats, and I know this package also runs its loops in Julia-compiled code by default. So my question is: when I pass a differential equation problem using BigFloat numbers, is the DifferentialEquations.jl solver still JIT-compiled?
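For context, here is a minimal sketch of the kind of arbitrary-precision loop in question, using Python's standard-library `decimal` module as a stand-in for mpmath (the `euler` helper is invented for illustration). The integrator's source code is generic over the number type, which is exactly the pattern Julia specializes on:

```python
from decimal import Decimal, getcontext

# Work at 50 significant digits (a stand-in for an mpmath working precision).
getcontext().prec = 50

def euler(f, y0, t0, t1, n):
    """Fixed-step Euler integration; generic over the numeric type of y0."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# dy/dt = y with y(0) = 1: the exact solution at t = 1 is e ~ 2.71828...
approx_e = euler(lambda t, y: y, Decimal(1), Decimal(0), Decimal(1), 1000)
```

Numba's nopython mode cannot type `Decimal` or mpmath objects, so a loop like this stays interpreted in Python; the question is whether Julia avoids that penalty for BigFloat.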

Solution

Yes. This works through automatic function specialization: at JIT compile time, Julia compiles a specialized version of each function for the concrete argument types it is called with. This is true for all numbers; even Float64 is just a type defined in Julia itself and goes through the same mechanism, so BigFloat and ArbFloat arguments also produce compiled solver loops. This blog post describes the pattern in more detail.
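The idea can be illustrated in Python terms (a sketch; `sum_of_squares` is a hypothetical name): one generic function handles different concrete number types, each call producing type-appropriate results. The difference in Julia is that it additionally compiles a separate native-code specialization for each concrete element type:

```python
from fractions import Fraction

def sum_of_squares(xs):
    """Generic accumulation: works for any type supporting + and *."""
    total = xs[0] * 0  # a zero of the same numeric type as the elements
    for x in xs:
        total = total + x * x
    return total

# The same source code, applied to two concrete element types.
as_floats = sum_of_squares([0.1, 0.2, 0.3])       # float arithmetic, rounded
as_exact = sum_of_squares([Fraction(1, 10),
                           Fraction(2, 10),
                           Fraction(3, 10)])      # exact rational arithmetic
```

In Python both calls run through the same interpreted bytecode. Julia would instead compile something like `sum_of_squares(::Vector{Float64})` and `sum_of_squares(::Vector{BigFloat})` as two separate optimized methods, which is why DifferentialEquations.jl keeps its loops compiled even with arbitrary-precision types (BigFloat arithmetic itself is still slower than Float64, but there is no interpreter overhead).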
