
Senior Data Engineer

Jobot
Charlotte, NC
Full-time
AI tools: Python, TensorFlow, PyTorch

Want to learn more about this role and Jobot? Click our Jobot logo and follow our LinkedIn page!

Job details:

Join a World-Class Team Operating at the Intersection of Technology, Data, and Digital Products

This Jobot Job is hosted by: Amanda Preston

Are you a fit? Easy Apply now by clicking the "Easy Apply" button and sending us your resume.

Salary: $155,000 - $175,000 per year

A bit about us:

Competitive salary and comprehensive benefits

Long-term stability with continued investment in technology and engineering

High-visibility work with real-world impact

A collaborative, engineering-driven culture focused on quality and continuous improvement

Why join us?

Work on data and machine learning platforms operating at significant scale

Own and influence core systems that power critical business capabilities

Collaborate with experienced engineers and data scientists in a highly technical environment

Tackle complex engineering challenges with modern cloud and MLOps tooling

Enjoy the stability of a mature organization combined with the opportunity to modernize and innovate

Job Details

Senior Data Engineer

Role Summary

Join a high-performing engineering team at a large, global organization operating at the intersection of technology, data, and digital products. We are seeking a highly motivated Senior Data Engineer to lead the architecture, deployment, and operation of next-generation, data-driven platforms.

In this role, you will bridge the gap between Data Science and Production Engineering, ensuring that datasets, machine learning models, and core services are deployed reliably, scalably, and securely in the cloud. This is a high-impact position requiring deep expertise in data architecture, backend engineering, and the full machine learning lifecycle in production environments.

Key Responsibilities

Data Pipeline Design & Orchestration

Design, build, and maintain robust data ingestion and transformation pipelines

Leverage modern orchestration tools to ensure reliable, observable data flows supporting machine learning workloads

Core Development

Write clean, efficient, and well-tested Python code for automation, infrastructure tooling, and service integration

Develop shared libraries and glue services connecting cloud-native components

API & Service Deployment

Design, develop, and deploy high-performance Python APIs (FastAPI / Flask) to serve machine learning predictions and core application logic

MLOps Pipeline Ownership

Own end-to-end pipelines for continuous training, deployment, versioning, and monitoring of production ML models (e.g., recommendation or personalization systems)

Infrastructure Management

Architect and maintain scalable, fault-tolerant infrastructure using Kubernetes (GKE) within Google Cloud Platform

Ensure reliability, performance, and cost efficiency across environments

Collaboration & Mentorship

Partner closely with data scientists, software engineers, and platform teams

Provide technical leadership and mentorship to junior engineers

Qualifications

Must-Have (Engineering Excellence)

5+ years of professional experience in Data Engineering, Software Engineering, or Cloud Engineering

Deep expertise in Python for application development, data processing, and automation

Proven experience building and deploying production-grade backend services and APIs (FastAPI, Flask, or Django)

Strong SQL skills with experience designing and optimizing schemas for relational and analytical data stores (e.g., BigQuery, Cloud SQL)

Hands-on experience with data orchestration tools such as Dagster or Airflow

Extensive experience designing and operating services within Google Cloud Platform (BigQuery, Pub/Sub, Vertex AI, Compute Engine)

Expert-level knowledge of Docker and Kubernetes, including Helm-based deployments

Nice-to-Have (DevOps & MLOps)

Experience with Infrastructure as Code tools such as Terraform or Crossplane

CI/CD experience using GitHub Actions or similar tooling

Familiarity with observability stacks (Prometheus, Grafana, Cloud Logging)

Understanding of cloud security principles and enterprise compliance requirements

Direct experience supporting production MLOps workflows (model monitoring, drift detection, automated retraining)

Interested in hearing more? Easy Apply now by clicking the "Easy Apply" button.


Applications go to the hiring team directly