
Senior Data Engineer – Databricks (GCP)

KData AI
Montreal, Quebec, Canada
Full-time
AI tools:
Databricks
Apache Spark
GitHub Copilot
Applications go directly to the hiring team

Full Description

Role Overview

As a Senior Data Engineer – Databricks (GCP), you will design, build, and operate scalable and reliable data pipelines and data products on Google Cloud Platform. You will work closely with data architects, analytics teams, and business stakeholders to deliver high-quality datasets that support analytics, BI, and advanced data use cases.

This is a remote-first role; occasional visits to client offices in Toronto or Montreal may be required.

Key Responsibilities

* Design, develop, and maintain scalable data pipelines and data products using Databricks on GCP.

* Implement data ingestion, transformation, and curation pipelines using Apache Spark (PySpark preferred).

* Build and manage Delta Lake architectures (Bronze, Silver, Gold layers).

* Ensure data quality through automated controls, validation checks, and robust error handling.

* Optimize workloads for performance, reliability, and cost efficiency (Spark tuning, cluster configuration).

* Implement monitoring, logging, and alerting to ensure stable and predictable operations.

* Apply software engineering best practices: version control, CI/CD pipelines, automated testing, and documentation.

* Collaborate with data architects, analytics, BI, and business teams to translate requirements into technical solutions.

* Participate in production support, incident analysis, and continuous improvement initiatives.

* Contribute to data platform evolution, standards, and best practices within KData.

Requirements

Technical Skills

* 8+ years of experience in data engineering roles delivering production-grade data solutions.

* Strong hands-on experience with Databricks in real-world projects.

* Solid expertise in Apache Spark, with PySpark as a strong preference.

* Strong SQL skills with a focus on performance and data correctness.

* Hands-on experience with a major cloud platform such as Azure or AWS (direct GCP experience is listed under Nice to Have).

* Good understanding of Delta Lake concepts (ACID transactions, schema evolution, time travel).

* Familiarity with CI/CD, automated testing, and version-controlled deployments.

* Experience working with data orchestration and monitoring tools is a plus.

Language & Communication

* English (mandatory): professional working proficiency or higher.

* French is not required.

Nice to Have

* Experience working with Google Cloud Platform (GCP), including services such as GCS, IAM, and related data services.

* Experience with data architecture patterns (Lakehouse, Medallion architecture).

* Exposure to data governance, data quality frameworks, and security best practices.

* Experience using AI-assisted development tools (e.g., GitHub Copilot, Databricks Assistant).

* Consulting or client-facing experience.
