Compile Code

Turn your Ivy code into an efficient, fully-functional graph, removing wrappers and unused parts of the code.

⚠️ If you are running this notebook in Colab, you will have to install Ivy and some dependencies manually. You can do so by running the cell below ⬇️ If you want to run the notebook locally but don't have Ivy installed just yet, you can check out the Get Started section of the docs.

!pip install ivy
First, let's pick up where we left off in the last notebook, with our unified normalize function:
import ivy
import torch
def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)

normalize = ivy.unify(normalize, source="torch")
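As a quick sanity check on what normalize computes, the same standardization can be reproduced with plain NumPy (a minimal reference sketch, independent of Ivy; note that torch.std defaults to the unbiased estimator, hence ddof=1):

```python
import numpy as np

def normalize_ref(x):
    # standardize: subtract the mean, divide by the standard deviation
    mean = np.mean(x)
    std = np.std(x, ddof=1)  # match torch.std's unbiased default
    return (x - mean) / std

out = normalize_ref(np.array([1.0, 2.0, 3.0, 4.0]))
# the result has (approximately) zero mean and unit standard deviation
```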
For the purpose of illustration, we will use jax
as our backend framework:
# set ivy's backend to jax
ivy.set_backend("jax")

# import jax
import jax

# create random jax arrays for testing
key = jax.random.PRNGKey(42)
x = jax.random.uniform(key, shape=(10,))
As in the previous example, the Ivy function can be executed like so (in this case it will trigger lazy unification, see the Lazy vs Eager section for more details):
normalize(x)
ivy.array([ 0.55563945, -0.65538704, -1.14150524, 1.46951997, 1.30220294,
-1.14739668, -0.57017946, -0.91962677, 0.51029003, 0.59644395])
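The lazy behaviour mentioned above can be pictured with a small framework-independent sketch (hypothetical names; this is not Ivy's implementation): the expensive transformation is deferred until the function is first called with concrete arguments, and from then on the cached result is reused.

```python
calls = []

def expensive_transform(fn):
    # stand-in for the one-off unification/compilation work
    calls.append("transformed")
    return fn

class LazyFn:
    """Defer a transformation until the wrapped function is first called."""
    def __init__(self, fn, transform):
        self._fn, self._transform = fn, transform
        self._transformed = None

    def __call__(self, *args):
        if self._transformed is None:  # first call pays the one-off cost
            self._transformed = self._transform(self._fn)
        return self._transformed(*args)

lazy = LazyFn(lambda x: x + 1, expensive_transform)
# nothing has happened yet: calls == []
result = lazy(1)  # the first call triggers the transform, then runs
```

Subsequent calls skip the transform entirely, which is why the very first call of a lazily unified or compiled function is slower than the rest.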
When calling this function, all of ivy
’s function wrapping is included in the call stack of normalize
, which adds runtime overhead. In general, ivy.compile
strips any arbitrary function down to its constituent functions in the functional API of the target framework. The code can be compiled like so:
ivy.set_backend("jax")
comp = ivy.compile(normalize)  # compiles to jax, due to ivy.set_backend
The compiled function can be executed in exactly the same manner as the non-compiled function (in this case it will also trigger lazy compilation, see the Lazy vs Eager section for more details):
comp(x)
Array([ 0.5556394 , -0.655387 , -1.1415051 , 1.4695197 , 1.3022028 ,
-1.1473966 , -0.5701794 , -0.91962665, 0.51028997, 0.5964439 ], dtype=float32)
With all lazy compilation calls now performed (which all increase runtime during the very first call of the function), we can now assess the runtime efficiencies of each function:
%%timeit
normalize(x)
985 µs ± 6.76 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%%timeit
comp(x)
69.5 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
As expected, normalize is slower, as it includes all of ivy's wrapping overhead. On the other hand, comp carries no wrapping overhead and is therefore more efficient!
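The cost of per-call wrapping can also be reproduced outside any ML framework. The sketch below (illustrative only, not Ivy's wrapping mechanism) uses the standard-library timeit module to time a bare function against the same function buried under layers of Python wrappers:

```python
import functools
import timeit

def core(x):
    return (x - 1.0) / 2.0

def add_wrapper(fn):
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        # stand-in for the bookkeeping done by each wrapping layer
        return fn(*args, **kwargs)
    return wrapped

wrapped_core = core
for _ in range(10):  # bury the function under 10 wrapper layers
    wrapped_core = add_wrapper(wrapped_core)

t_bare = timeit.timeit(lambda: core(3.0), number=100_000)
t_wrapped = timeit.timeit(lambda: wrapped_core(3.0), number=100_000)
# t_wrapped is typically noticeably larger than t_bare, even though
# both compute exactly the same result
```

Stripping such layers away, which is what ivy.compile does for Ivy's wrapping, removes this per-call cost without changing the computation.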
Round Up
That’s it, you can now compile ivy
code for more efficient inference! However, there are several other important topics to master before you’re ready to unify ML code like a pro 🥷. Next, we’ll be learning how to transpile code from one framework to another in a single line of code 🔄