Distributed Training and Inference Engineer
Sciforium
Full Description
Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.
About the role
Sciforium is seeking a highly skilled Distributed Training and Inference Engineer to build, optimize, and maintain the critical software stack that powers our large-scale AI training and serving workloads. In this role, you will work across the entire machine learning infrastructure, from low-level CUDA/ROCm runtimes to high-level frameworks such as JAX and PyTorch, to ensure our distributed training systems are fast, scalable, stable, and efficient.
This position is ideal for someone who loves deep systems engineering, debugging complex hardware–software interactions, and optimizing performance at every layer of the ML stack. You will play a pivotal role in enabling the training and deployment of next-generation LLMs and generative AI models.
What you'll do
* Software Stack Maintenance: Maintain, update, and optimize critical ML libraries and frameworks, including JAX, PyTorch, CUDA, and ROCm, across multiple environments and hardware configurations.
* End-to-End Stack Ownership: Build, maintain, and continuously improve the entire ML software stack, from ROCm/CUDA drivers to high-level JAX/PyTorch tooling.
* Distributed System Optimization: Ensure all model implementations are efficiently sharded, partitioned, and configured for large-scale distributed training and serving (a brief illustrative sketch follows this list).
* System Integration: Continuously integrate and validate modules for runtime correctness, memory efficiency, and scalability across multi-node GPU/accelerator clusters.
* Profiling & Performance Analysis: Conduct detailed profiling of compilation graphs, training workloads, and runtime execution to optimize performance and eliminate bottlenecks.
* Debugging & Reliability: Troubleshoot complex hardware–software interaction issues, including vLLM compilation failures on ROCm, CUDA memory leaks, distributed runtime failures, and kernel-level inconsistencies.
* Cross-Team Collaboration: Work with research, infrastructure, and kernel engineering teams to improve system throughput, stability, and developer experience.
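To give a concrete flavor of the sharding and partitioning work described above, here is a minimal sketch using JAX's public jax.sharding API. The mesh shape, the axis names ("data", "model"), and the array sizes are illustrative assumptions for this posting, not Sciforium's actual configuration.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange all visible devices into a 1 x N logical mesh. The axis names
# ("data", "model") and the array sizes below are illustrative assumptions.
devices = mesh_utils.create_device_mesh((1, jax.device_count()))
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard the weight column-wise across "model"; shard the batch across "data".
w = jax.device_put(jnp.ones((1024, 1024)), NamedSharding(mesh, P(None, "model")))
x = jax.device_put(jnp.ones((8, 1024)), NamedSharding(mesh, P("data", None)))

@jax.jit
def forward(x, w):
    # XLA's GSPMD partitioner derives the necessary collectives
    # from the shardings of the inputs.
    return x @ w

y = forward(x, w)
print(y.sharding)  # inspect the layout the compiler chose for the output
```

In practice, much of this work is choosing partition specs and mesh topologies so that collectives overlap with compute, rather than writing the matmul itself.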
Ideal candidate profile
* 5+ years of industry experience in ML systems, distributed training, or related fields.
* Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field.
* Strong programming experience in Python and C++, plus familiarity with ML tooling and distributed systems.
* Deep understanding of profiling tools (e.g., Nsight, ROCm Profiler, XLA profiler, TPU tools); a minimal trace-capture sketch follows this list.
* Deep expertise in partitioning configuration for modern ML frameworks such as PyTorch and JAX.
* Experience with multi-node distributed training systems and parallelism frameworks such as DTensor and GSPMD.
* Hands-on experience maintaining or building ML training stacks involving CUDA, ROCm, NCCL, XLA, or similar technologies.
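For the profiling side, the minimal sketch below captures a runtime trace with jax.profiler. The workload is a hypothetical stand-in and the output directory is a placeholder; the resulting trace can be inspected in TensorBoard or Perfetto.

```python
import jax
import jax.numpy as jnp

@jax.jit
def step(x):
    # A stand-in workload; real targets would be training or serving steps.
    return jnp.tanh(x @ x.T).sum()

x = jnp.ones((1024, 1024))
step(x).block_until_ready()  # warm up so compile time stays out of the trace

# Capture a runtime trace; "/tmp/jax-trace" is a placeholder directory.
with jax.profiler.trace("/tmp/jax-trace"):
    step(x).block_until_ready()
```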
Nice-to-have
* Extensive experience with the XLA/JAX stack, including compilation internals and custom lowering paths.
* Familiarity with distributed serving or large-scale inference frameworks (e.g., vLLM, TensorRT, FasterTransformer).
* Background in GPU kernel optimization or accelerator-aware model partitioning.
* Strong understanding of low-level C++ building blocks used in ML frameworks (e.g., XLA, CUDA kernels, custom ops).