Senior Data Engineer

Objectways
Phoenix, AZ
Full-time
Applications go directly to the hiring team

Full Description

About the Role

We are seeking an experienced Data Engineer to join our Enterprise Data & AI Platform team. You will be responsible for building and maintaining the data pipelines, storage layers, and processing frameworks that form the backbone of a large-scale cloud data lakehouse. Your work will directly enable analytics, AI/ML, and self-service data capabilities across a complex, multi-domain enterprise organization.

What You'll Do

* Design, build, and maintain scalable data pipelines supporting batch, near-real-time, API, and streaming ingestion patterns from enterprise and external data sources

* Develop and optimize ETL/ELT workflows across raw, enriched, and curated data layers within a cloud-based lakehouse environment

* Implement and manage data storage solutions for structured, unstructured, and semi-structured data

* Collaborate with data architects to translate platform designs into reliable, production-grade implementations

* Apply and enforce data governance standards including data quality checks, lineage tracking, classification tagging, and access controls

* Build and maintain reusable data products that serve AI/ML and analytics consumers across multiple business domains

* Monitor pipeline health and implement observability, logging, and alerting practices to ensure data reliability

* Work with orchestration and scheduling tools to automate and manage complex data workflows

* Partner with security teams to ensure pipelines adhere to data classification and compliance requirements

* Contribute to platform documentation, code reviews, and engineering best practices

What You Bring

* 5+ years of experience in data engineering, ETL development, or a closely related role

* Strong hands-on experience building data pipelines on cloud platforms (AWS preferred)

* Proficiency with ETL/ELT frameworks and workflow orchestration tools

* Experience working with lakehouse or data lake architectures and multi-zone storage patterns (landing, raw, enriched/curated)

* Solid programming skills in Python and/or SQL; familiarity with CLI and infrastructure-as-code tools is a plus

* Experience with structured, unstructured, and semi-structured data processing

* Familiarity with data governance, data quality, and metadata management concepts

* Understanding of streaming and real-time data processing patterns

* Ability to work effectively in a large, cross-functional, and matrixed enterprise environment

* Strong problem-solving skills and attention to detail