AI Software Intern

Oxmiq Labs
Campbell, CA (On-site)
Internship
AI tools: PyTorch
Applications go directly to the hiring team

Full Description

We are seeking highly motivated AI Software Interns to join our engineering team. In this role, you will contribute to the development, optimization, and validation of the software stack that maps AI workloads onto custom silicon. You will gain hands-on experience building compiler backends, operator libraries, and inference runtimes, and will collaborate closely with our hardware and ML teams.

Responsibilities

* Implement and optimize operator kernels and dispatch logic for AI accelerator backends

* Maintain compatibility of the software stack with evolving hardware SDK and runtime releases

* Extend compiler and runtime infrastructure for tensor type handling, shape analysis, and memory management

* Integrate and validate ML framework frontends (e.g., PyTorch) for model inference on target hardware

* Develop and maintain automated test suites covering operator correctness, model accuracy, and performance regression

* Contribute to CI/CD infrastructure, build systems, and developer tooling

* Triage and debug cross-layer issues spanning the compiler, runtime, and device stack

Minimum Qualifications

* Pursuing a Bachelor's or Master's degree in Computer Science (CS), Computer Engineering (CE), Electrical Engineering (EE), or a related field

* Completed coursework in:

- Data Structures & Algorithms

- Computer Architecture or Operating Systems

- Programming (Python, C/C++)

* Hands-on experience through relevant class projects involving systems programming, compilers, or ML frameworks

Preferred Qualifications

* Familiarity with PyTorch internals (ATen ops, dispatch, custom backends)

* Experience with AI inference frameworks (vLLM, TensorRT, ONNX Runtime)

* Exposure to compiler or code-generation toolchains (MLIR, LLVM, TOSA, or similar IRs)

* Understanding of tensor data types and low-precision formats (BFloat16, FP8, MXFP4)

* Experience with CI/CD systems (GitHub Actions, Docker, make-based build systems)

* Knowledge of AI accelerator concepts (tiling, memory hierarchy, device mesh sharding)

What You’ll Gain

* Real-world experience building the software stack for AI accelerators

* Mentorship from experienced compiler and runtime engineers

* Opportunity to work on cutting-edge LLM inference optimization

* Exposure to the full vertical: from PyTorch operator dispatch to on-device execution