AI Systems Engineer – AI Model (Training & Inference)
AMD
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
The Role/Person
The AMD AI Group is looking for a Senior Software Development Engineer to own the end-to-end model execution stack on AMD Instinct GPUs, spanning training infrastructure at scale and high-performance inference serving. This role demands someone who has shipped LLMs on real hardware, written GPU kernels that moved production metrics, and built the systems infrastructure (orchestration, storage, monitoring) that keeps thousands of GPUs productive. You will be instrumental in ensuring AMD GPUs are first-class citizens for frontier model training and inference across current and next-generation Instinct accelerators.
Key Responsibilities
Training Infrastructure & Enablement
* Enable and optimize large-scale model training (LLMs, VLMs, MoE architectures) on AMD Instinct GPU clusters, ensuring correctness, reproducibility, and competitive throughput.
* Build and maintain training infrastructure: job orchestration, distributed checkpointing, data loading pipelines, and storage optimization for multi-thousand GPU clusters on Kubernetes.
* Debug and resolve training-specific issues including gradient norm explosions, non-deterministic behavior across GPU generations, and compute-communication overlap in distributed training (FSDP, DeepSpeed, Megatron-LM).
* Optimize RCCL collective communication patterns for training workloads, including all-reduce, all-gather, and reduce-scatter across multi-node topologies.
* Develop monitoring, alerting, and compliance infrastructure to ensure training cluster health, data security, and SLA adherence at scale.
* Design and build end-to-end validation and testing infrastructure using proxy workloads, synthetic benchmarks, and configurable workload generators to systematically validate platform readiness across AMD Instinct GPU generations.
Inference Optimization & Serving
* Write and optimize high-performance GPU kernels (GEMM, attention, quantized matmul, GPTQ/AWQ) in HIP, Triton, and MLIR targeting AMD Instinct architectures, with demonstrated ability to outperform open-source baselines.
* Drive end-to-end inference enablement on new AMD GPU silicon: be among the first to get frontier models running on each new Instinct generation, creating reproducible guides and reference implementations.
* Optimize inference serving frameworks (vLLM, SGLang, TorchServe) for AMD GPUs: batching strategies, KV-cache management, speculative decoding, and continuous batching for production throughput/latency targets.
* Develop novel approaches to inference acceleration, including bio-inspired algorithms, SLM-assisted batching, and custom scheduling strategies that exploit AMD hardware characteristics.
* Build quantization pipelines (FP8, FP6, FP4, GPTQ, AWQ) for production model deployment, ensuring quality-performance tradeoffs are well-characterized across AMD GPU generations.
Cross-Cutting
* Collaborate with AMD silicon architecture and pre-silicon teams to provide software feedback and validate software stack integration on next-generation Instinct GPU designs for both training and inference workloads.
* Build observability and automated analysis tooling: log analysis pipelines, anomaly detection, performance baselining, regression detection, and diagnostic workflows for large-scale GPU clusters.
* Contribute to the open ROCm ecosystem and AMD's developer experience: SDKs, CI dashboards, documentation, and developer cloud enablement.
Required Experience
* Industry experience shipping production AI/ML infrastructure, with hands-on work spanning both training and inference.
Preferred Experience
* Direct experience enabling frontier models (GPT-4 class) on AMD Instinct hardware end-to-end.
* Background in building anomaly detection, log analysis, or observability systems for large-scale distributed GPU infrastructure.
* Familiarity with AMD Instinct MI-series architectures (MI300X, MI350X, MI355X) and RCCL communication library.
* Contributions to open-source AI frameworks (PyTorch, vLLM, SGLang, DeepSpeed, Megatron-LM).
* Experience designing validation frameworks, proxy benchmarks, or synthetic workload suites for GPU infrastructure at scale.
* Experience with pre-silicon software validation or hardware-software co-verification workflows.
* Publications or patents in HPC, ML systems, or GPU kernel optimization.
Preferred Academic Credentials
* Bachelor’s, Master’s, or Ph.D. degree in Computer Engineering, Software Engineering, Computer Science, or a related technical discipline.
This role is not eligible for visa sponsorship.
Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.