Currently rules_ml_toolchain defines and registers a custom CC toolchain, but it doesn't easily support using a custom, user-registered CC toolchain.
From what I experienced, this comes from internal assumptions about which clang will be available, and from CUDA wrapper headers being forced from a provided sysroot.
`rules_ml_toolchain/gpu/cuda/cuda_configure.bzl`, lines 312 to 319 in 6d19ed7:

```starlark
if _use_hermetic_toolchains(repository_ctx):
    if _is_linux_x86_64(repository_ctx):
        clang_major_version = _llvm_x86_64_hermetic_version
    elif _is_linux_aarch64(repository_ctx):
        clang_major_version = _llvm_aarch64_hermetic_version
    else:
        print("This OS or architecture isn't supported by Hermetic C++.")
```

AND
`rules_ml_toolchain/gpu/cuda/cuda_configure.bzl`, lines 626 to 634 in 6d19ed7:

```starlark
if _is_linux_x86_64(repository_ctx):
    hermetic_wrappers_headers = "@llvm_linux_x86_64//:cuda_wrappers_headers"
elif _is_linux_aarch64(repository_ctx):
    hermetic_wrappers_headers = "@llvm_linux_aarch64//:cuda_wrappers_headers"
else:
    print("This OS or architecture isn't supported by Hermetic C++.")

# Set up BUILD file for cuda/
repository_ctx.template(
```

This aligns well with the hermetically provided clang, but it forbids bringing your own hermetic CC toolchain.
With a few patches, I was able to use the fully hermetic @llvm BCR module (I am a maintainer) to compile both CPU and GPU (CUDA) XLA PJRT plugins via rules_ml_toolchain, in remote cross-builds (Bazel on my Mac, Linux remote executors).
Would you be interested in discussing how to make that a first-class option in rules_ml_toolchain?
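
To make the idea concrete, here is a rough sketch of one possible shape for this: exposing the currently hard-coded wrapper-headers labels as an optional attribute on the repository rule, falling back to today's per-platform hermetic defaults. All attribute and helper names below are hypothetical, not the actual rules_ml_toolchain API.

```starlark
# Hypothetical sketch only; names are illustrative, not real rules_ml_toolchain API.
def _default_wrappers_headers(repository_ctx):
    # Current behavior: pick the wrappers shipped with the bundled LLVM.
    if _is_linux_x86_64(repository_ctx):
        return "@llvm_linux_x86_64//:cuda_wrappers_headers"
    elif _is_linux_aarch64(repository_ctx):
        return "@llvm_linux_aarch64//:cuda_wrappers_headers"
    fail("This OS or architecture isn't supported by Hermetic C++.")

def _cuda_autoconf_impl(repository_ctx):
    # Proposed behavior: a user-registered CC toolchain can supply its own
    # wrapper headers; the empty default keeps the hermetic behavior.
    hermetic_wrappers_headers = (
        repository_ctx.attr.cuda_wrappers_headers or
        _default_wrappers_headers(repository_ctx)
    )
    # ... rest of the existing configuration logic ...

cuda_configure = repository_rule(
    implementation = _cuda_autoconf_impl,
    attrs = {
        # Optional override; unset means "use the bundled hermetic clang's headers".
        "cuda_wrappers_headers": attr.string(default = ""),
    },
)
```

The clang-major-version detection could be made overridable in the same way, so that a toolchain like the @llvm BCR module can declare its own clang version instead of relying on the internal `_llvm_*_hermetic_version` constants.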