As we announced recently, my team at Google has started a new effort to build production-worthy engineering tools for Fully Homomorphic Encryption (FHE). One focal point of this, and one which I’ll be focusing on as long as Google is willing to pay me to do so, is building out a compiler toolchain for FHE in the MLIR framework (Multi-Level Intermediate Representation). The project is called Homomorphic Encryption Intermediate Representation, or HEIR.

The MLIR community is vibrant. But because it’s both a new and a fast-moving project, there isn’t a lot in the way of tutorials and documentation available for it. There is no authoritative MLIR book. Most of the reasoning around things lives in folklore and heavily technical RFCs. And because MLIR is built on top of LLVM (the acronym formerly meaning “Low Level Virtual Machine”), much of the documentation that exists explains concepts by analogy to LLVM, which is unhelpful for someone like me who isn’t familiar with the internals of how LLVM works. Finally, the “proper” tutorials that do exist are, in my opinion, too high level to allow one to really get a sense for how to write programs in the framework.

I want people interested in FHE to contribute to HEIR. To that end, I want to lower the barrier to entry to working with MLIR. And so this series of blog posts will be a detailed introduction to MLIR in general, with some bias toward the topics that show up in HEIR and that I have spent time studying and internalizing.

This first article describes a typical MLIR project’s structure, and the build system that we use in HEIR. But the series as a whole will be built up along with a GitHub repository that breaks down each step into clean, communicative commits, similar to my series about the Riemann Hypothesis. To avoid being broken by upstream changes to MLIR (our project will be “out of tree”, so to speak), we will pin the dependency on MLIR to a specific commit hash. While this implies that the content in these articles will eventually become stale, I will focus on parts of MLIR that are relatively stable.

A brief history of MLIR and LLVM

The first thing you’ll notice about MLIR is that it lives within the LLVM project’s monorepo under a folder called mlir/. LLVM is a sort of abstracted assembly language that compiler developers can target as a backend, and then LLVM itself comes packaged with a host of optimizations and “real” backend targets that can be compiled to. If you’re, say, the Rust programming language and you want to compile to x86, ARM, and WebAssembly without having to do all that work, you can just output LLVM code and then run LLVM’s compilation suite.

I don’t want to get too much into the history of LLVM (see this interview for more details), and I don’t have any first hand knowledge of it, but from what I can gather LLVM (formerly standing for “Low Level Virtual Machine”) was the PhD project of Chris Lattner in the early 2000’s, aiming to be a next-generation C compiler. Chris moved to Apple, where he worked on LLVM and languages like Swift which build on LLVM. In 2017 he moved to Google Brain as a director of the TensorFlow infrastructure team, and he and his team built MLIR to unify the siloed tooling in their ecosystem.

We’ll talk more about what exactly MLIR is and what it provides in a future article. For a high level overview, see the MLIR paper. In short, it’s a framework for building compilers, with the underlying philosophy that a big compiler should be broken up into lots of small compilers between sub-languages (which compiler folks call “intermediate representations” or “IR”s), where each sub-language is designed to make a particular kind of optimization more natural to express. Hence the MLIR acronym standing for Multi-Level Intermediate Representation.

MLIR is relevant for TensorFlow because training and inference can both be thought of as programs whose instructions are things like “2d convolution” and “softmax.” And the process for optimizing those instructions, while converting them to lower level hardware instructions (especially on TPU accelerators) is very much a compilers problem. MLIR breaks the process up into IRs at various levels of abstraction, like Tensor operations, linear algebra, and lower-level control flow.

But LLVM just couldn’t be directly reused as a TensorFlow compiler. It was too legacy and too specialized to CPU, operated at a much lower abstraction layer, and had incidental tech debt. But LLVM did have lots of reusable pieces, like data structures, error handling, and testing infrastructure. And combined with Lattner’s intimate familiarity with a project he’d worked on for almost 20 years, it was probably just easier to jumpstart MLIR by putting it in the monorepo.

Build systems

The rest of this article is going to focus on setting up the build system for our tutorial project. It will describe each commit in this pull request.

Now, the official build system of LLVM and MLIR is CMake. But I’ll be using Bazel for a few reasons. First, I want to induct interested readers into HEIR, and Bazel is what HEIR uses because it’s a Google-owned project. Second, one might worry that the Bazel configuration is complicated or unsupported, but because MLIR and LLVM have become critical to Google’s production infrastructure, Google helps to maintain a Bazel “overlay” in parallel with the CMake configuration, and has on-call engineers responsible for ensuring both that Google’s internal copy of MLIR stays up to date with the LLVM monorepo, and that any build issues are promptly fixed. The rough edges that remain are simple enough for an impatient dummy like me to handle.

So here’s an overview of Bazel (with parts repeated from my prior article). Bazel is the open source analogue of Google’s internal build system, “Blaze”, and Starlark is its Python-inspired scripting language. There are lots of opinions about Bazel that I won’t repeat here. You can install it using the bazelisk program.

First some terminology. To work with Bazel you do the following.

- Define a WORKSPACE file in the project root, which declares the project’s name and its external dependencies (and how to fetch them).
- Write BUILD files in the project’s subdirectories, each declaring the build targets in that directory and the dependencies among them.
- Optionally write .bzl files, which contain Starlark macros and rules that WORKSPACE and BUILD files can load().
- Run commands like bazel build //some/path:some_target (or bazel run, or bazel test) to execute the build.

Generally, bazel builds targets in two phases. First—the analysis phase—it loads all the BUILD files and imported .bzl files, and scans for all the rules that were called. In particular, it runs the macros, because it needs to know what rules are called by the macros (and rules can be guarded by control flow, or their arguments can be generated dynamically, etc.). But it doesn’t run the build rules themselves. In doing this, it can build a complete graph of dependencies, and report errors about typos, missing dependencies, cycles, etc. Once the analysis phase is complete, it runs the underlying rules in dependency order, and caches the results. Bazel will only run a rule again if something changes with the files it depends on or its underlying dependencies.
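To make the terminology concrete, here is a minimal sketch of a BUILD file. The target and file names are hypothetical (nothing like this exists in the tutorial repository yet); it just shows the shape of a target declaration.

# A hypothetical BUILD file in some subdirectory, say mydir/.
# cc_library is a built-in bazel rule for C++ libraries. The `name` attribute
# defines a target, so this one is built with `bazel build //mydir:my_lib`.
cc_library(
    name = "my_lib",
    srcs = ["my_lib.cpp"],
    hdrs = ["my_lib.h"],
    # Dependencies may be targets in this repository or in external
    # workspaces, like this MLIR target from the llvm-project workspace.
    deps = ["@llvm-project//mlir:IR"],
)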

The WORKSPACE and llvm-project dependency

The commits in this section will come from https://github.com/j2kun/mlir-tutorial/pull/1.

After adding a .gitignore to filter out Bazel’s build directories, this commit sets up an initial WORKSPACE file and two bazel files that perform an unusual two-step dance for configuring the LLVM codebase. The workspace file looks like this:

workspace(name = "mlir_tutorial")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:utils.bzl", "maybe")

load("//bazel:import_llvm.bzl", "import_llvm")

import_llvm("llvm-raw")

load("//bazel:setup_llvm.bzl", "setup_llvm")

setup_llvm("llvm-project")

This is not a normal sort of dependency. A normal dependency might look like this:

http_archive(
    name = "abc",
    build_file = "//bazel:abc.BUILD",
    sha256 = "7fa5a448a4309fb4d6cf856c3fe4cc4be46b09dd552a05d5cfacd75f8d9504ad",
    urls = [
        "https://github.com/berkeley-abc/abc/archive/eb44a80bf2eb8723231e72bb095c97d1e4834d56.zip",
    ],
)

The above tells bazel: go pull the zip file from the given URL, double-check its checksum, and then (because the dependent project is not built with bazel) I’ll tell you where in my repository to find the BUILD file that you should use to build it. If the project had a BUILD file, we could omit build_file and it would just work.

Now, LLVM has bazel build files, but they are hidden in the utils/bazel subdirectory of the project. Bazel requires its special files to be in the right places, and the bazel configuration is designed to stay in sync with the CMake configuration. So the utils/bazel directory has an llvm_configure bazel macro which executes a python script that symlinks everything properly. More info about the upstream system can be found here.

So to run this macro we have to download the LLVM code as a repository, which I put into the import_llvm.bzl file, as well as call the macro, which I put into setup_llvm.bzl. Why two files? An apparent quirk of bazel is that you can’t load() a macro from a dependency’s bazel file in the same WORKSPACE file in which you download the dependency.

It’s also worth mentioning that import_llvm.bzl is where I put the hard-coded commit hash that pins this project to a specific LLVM version.
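To give a flavor of the two-step dance, here is a sketch of what the two files look like. The pinned commit hash and sha256 are elided (take them from the actual repository), and the details may drift as upstream changes, but the structure is the point.

# bazel/import_llvm.bzl (sketch)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def import_llvm(name):
    # Downloads the raw, unconfigured LLVM monorepo at a pinned commit.
    LLVM_COMMIT = "<pinned commit hash elided>"

    http_archive(
        name = name,
        # The raw repo gets an empty root BUILD file; the configured
        # repository is generated from it by llvm_configure below.
        build_file_content = "# empty",
        sha256 = "<sha256 elided>",
        strip_prefix = "llvm-project-" + LLVM_COMMIT,
        urls = ["https://github.com/llvm/llvm-project/archive/{commit}.tar.gz".format(commit = LLVM_COMMIT)],
    )

# bazel/setup_llvm.bzl (sketch)
load("@llvm-raw//utils/bazel:configure.bzl", "llvm_configure")

def setup_llvm(name):
    llvm_configure(
        name = name,
        # The LLVM backend targets to enable; more on this below.
        targets = ["X86"],
    )

Note that setup_llvm.bzl loads llvm_configure from @llvm-raw, the repository downloaded by import_llvm; this is exactly why the WORKSPACE file needs two separate load/call steps in that order.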

Getting past some build errors

In an ideal world this would be enough, but trying to build MLIR now gives errors. In the following examples I will try to build the @llvm-project//mlir:IR build target (arbitrarily chosen).

Side note: some readers of early drafts have had trouble getting these steps to work exactly. Despite bazel aiming to be a perfectly hermetic build system, it has to store temporary files somewhere, and that can lead to inconsistencies and permission errors. If you’re not able to get these steps to work, check out these links:

For starters, the build fails with

$ bazel build @llvm-project//mlir:IR
ERROR: Skipping '@llvm-project//mlir:IR': error loading package '@llvm-project//mlir':
Unable to find package for @bazel_skylib//rules:write_file.bzl:
The repository '@bazel_skylib' could not be resolved:
Repository '@bazel_skylib' is not defined.

Bazel complains that it can’t find @bazel_skylib, which is a sort of extended standard library for Bazel. The MLIR Bazel overlay uses it for macros like “run shell command.” And so we learn another small quirk about Bazel, that each project must declare all transitive workspace dependencies (for now).

So in this commit we add bazel_skylib as a dependency.
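The addition is an ordinary http_archive in the WORKSPACE file, roughly like the sketch below; the version and sha256 are elided here and should be taken from the bazel-skylib releases page.

# in WORKSPACE, before importing llvm-raw
http_archive(
    name = "bazel_skylib",
    sha256 = "<sha256 from the releases page>",
    urls = [
        "https://github.com/bazelbuild/bazel-skylib/releases/download/<version>/bazel-skylib-<version>.tar.gz",
    ],
)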

Now it fails because of two other dependencies, llvm_zlib and llvm_zstd. This commit adds them.

$ bazel build @llvm-project//mlir:IR
ERROR: /home/j2kun/.cache/bazel/_bazel_j2kun/fc8ffaa09c93321753c7c87483153cea/external/llvm-project/llvm/BUILD.bazel:184:11:
no such package '@llvm_zlib//':
The repository '@llvm_zlib' could not be resolved:
Repository '@llvm_zlib' is not defined and referenced by '@llvm-project//llvm:Support'
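Both are compression libraries that LLVM’s Support library can depend on, and the LLVM monorepo ships bazel BUILD files for them under utils/bazel/third_party_build. The WORKSPACE additions look roughly like this sketch (versions and hashes elided; the exact labels may drift with the pinned LLVM commit):

# in WORKSPACE, after import_llvm("llvm-raw")
maybe(
    http_archive,
    name = "llvm_zlib",
    build_file = "@llvm-raw//utils/bazel/third_party_build:zlib-ng.BUILD",
    sha256 = "<elided>",
    strip_prefix = "zlib-ng-<version>",
    urls = ["https://github.com/zlib-ng/zlib-ng/archive/refs/tags/<version>.zip"],
)

maybe(
    http_archive,
    name = "llvm_zstd",
    build_file = "@llvm-raw//utils/bazel/third_party_build:zstd.BUILD",
    sha256 = "<elided>",
    strip_prefix = "zstd-<version>",
    urls = ["https://github.com/facebook/zstd/releases/download/v<version>/zstd-<version>.tar.gz"],
)

The maybe wrapper (loaded at the top of the WORKSPACE) makes the declaration a no-op if a repository with that name was already defined.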

Now when you try to build, you get a bona fide compiler error.

$ bazel build @llvm-project//mlir:IR
INFO: Analyzed target @llvm-project//mlir:IR (41 packages loaded, 1495 targets configured).
INFO: Found 1 target...
ERROR: <... snip ...>
In file included from external/llvm-project/llvm/lib/Demangle/Demangle.cpp:13:
external/llvm-project/llvm/include/llvm/Demangle/Demangle.h:35:28: error:
'string_view' is not a member of 'std'
   35 | char *itaniumDemangle(std::string_view mangled_name);
      |                            ^~~~~~~~~~~
external/llvm-project/llvm/include/llvm/Demangle/Demangle.h:35:28: note: 'std::string_view' is only available from C++17 onwards

The compiler’s note, “‘std::string_view’ is only available from C++17 onwards,” suggests something is still wrong with our setup, and indeed, we need to tell bazel to compile with C++17 support. This can be done in a variety of ways, but the way that has been the most reliable for me is to add a .bazelrc file that enables it by default in every bazel build command run while the working directory is underneath the project root. This is done in this commit. (Also see this extra step that may be needed for macOS users.)

# in .bazelrc
build --action_env=BAZEL_CXXOPTS=-std=c++17
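For reference, the flag that many bazel projects reach for instead is the one below. In my experience it has been less reliable for LLVM builds than the action_env approach, but it is the more common idiom (and is not what the tutorial repository uses):

# alternative .bazelrc line
build --cxxopt=-std=c++17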

Then, finally, it builds.

At this point you could build ALL of the LLVM/MLIR project by running bazel build @llvm-project//mlir/...:all. However, while you will need to do something similar to this eventually, and doing it now (while you read) is a good way to eagerly populate the build cache, it will take 30 minutes to an hour, make your computer go brrr, and use a few gigabytes of disk space for the cached build artifacts. (After working on three projects that each depend on LLVM and/or MLIR, my bazel cache is currently sitting at 23 GiB.)

But! If you try, there’s still one more error:

$ bazel build @llvm-project//mlir/...:all
ERROR: /home/j2kun/.cache/bazel/_bazel_j2kun/fc8ffaa09c93321753c7c87483153cea/external/llvm-project/mlir/test/BUILD.bazel:591:11:
no such target '@llvm-project//llvm:NVPTXCodeGen':
target 'NVPTXCodeGen' not declared in package 'llvm' defined by
/home/j2kun/.cache/bazel/_bazel_j2kun/fc8ffaa09c93321753c7c87483153cea/external/llvm-project/llvm/BUILD.bazel
(Tip: use `query "@llvm-project//llvm:*"` to see all the targets in that package) and referenced by '@llvm-project//mlir/test:TestGPU'
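The tip in the error message is worth taking. To see which code-gen targets the configured llvm-project repository actually defines, you can run something like

$ bazel query "@llvm-project//llvm:*" | grep CodeGen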

This is another little bug in the Bazel overlays that I hope will go away soon. It took me a while to figure this one out when I first encountered it, but here’s what’s happening. In the bazel/setup_llvm.bzl file that chooses which backend targets to compile, we chose only X86. The bazel overlay files are supposed to treat all backends as optional, and only define targets when the chosen backend dependencies are present. This is how you can avoid compiling a bunch of code for doing GPU optimization when you don’t want to target GPUs.

But in this case, the NVPTX backend (a GPU backend) is defined whether or not you include it as a target. So the simple option is to just include it as a target and take the hit on the cold-start build time. This commit fixes it, as sketched below.
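Concretely, the fix is a one-line addition to the target list passed to llvm_configure in bazel/setup_llvm.bzl (continuing the sketch from earlier):

# in bazel/setup_llvm.bzl (sketch)
def setup_llvm(name):
    llvm_configure(
        name = name,
        targets = [
            "X86",
            # Not needed for this tutorial per se, but the overlay's
            # mlir/test package references the NVPTX backend unconditionally.
            "NVPTX",
        ],
    )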

Now you can build all of LLVM, and in particular you can build the main MLIR binary mlir-opt.

$ bazel run @llvm-project//mlir:mlir-opt -- --help
OVERVIEW: MLIR modular optimizer driver

Available Dialects: acc, affine, amdgpu, amx, arith, arm_neon, arm_sve, async, bufferization, builtin, cf,
complex, dlti, emitc, func, gpu, index, irdl, linalg, llvm, math, memref, ml_program, nvgpu, nvvm, omp, pdl,
pdl_interp, quant, rocdl, scf, shape, sparse_tensor, spirv, tensor, test, test_dyn, tosa, transform, vector,
x86vector
USAGE: mlir-opt [options] <input file>

OPTIONS:
...

mlir-opt is the main entry point for running optimization passes and lowering code from one MLIR dialect to another. Next time, we’ll explore what some of the simpler dialects look like, run some pre-defined lowerings, and learn about how the end-to-end testing framework works.
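As a small preview, here is roughly what such an invocation looks like. The pass flag --canonicalize is a real mlir-opt option (it runs the canonicalizer pass), but the input file here is hypothetical. Note that bazel run executes the binary from a different working directory, so relative paths to input files won’t resolve; hence the $(pwd).

$ bazel run @llvm-project//mlir:mlir-opt -- --canonicalize $(pwd)/input.mlir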

Thanks to Patrick Schmidt for feedback on a draft of this article.

