Data Engineer
Persistent Systems
ABOUT PERSISTENT:
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 20 Fortune 50 companies and 4 of the 5 top banks in both the US and India, and numerous innovators across the healthcare ecosystem.
Our disruptor’s mindset, commitment to client success, and agility to thrive in a dynamic environment have enabled us to sustain our growth momentum. Persistent has been recognized across top industry platforms for innovation, leadership, and inclusion. We have delivered 23 sequential quarters of growth, with $422.5M in Q3 FY26 revenue, up 4.0% Q-o-Q and 17.3% Y-o-Y. Our 26,500+ global team members, located in 18 countries, have been instrumental in helping market leaders transform their industries. We won the 2025 ISG Star of Excellence™ Award for AI and Data Excellence and were named a Leader in the Everest Group Talent Readiness for Next-generation Data, Analytics and AI Services PEAK Matrix® Assessment 2025.
Job description:
Data Engineer (Azure + Python + Snowflake)
We are looking for an experienced Data Engineer with strong hands-on expertise in building scalable data pipelines on Microsoft Azure, particularly using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
The ideal candidate should possess solid Python development skills and extensive experience working with Snowflake, including data modeling, ELT/ETL processes, and performance optimization.
Key Responsibilities
* Design, build, and maintain end-to-end data pipelines using Azure ADF, including pipelines, dataflows, triggers, and orchestrations.
* Develop ingestion and transformation processes using ADF, ADLS, and Python, ensuring robust, reusable, and scalable solutions.
* Work extensively with Snowflake: database design, schema creation, ELT/ETL workflows, performance tuning, and data sharing.
* Build Python-based components such as data processing scripts, automation utilities, and API integrations.
* Optimize data pipelines for cost, performance, and scalability across Azure and Snowflake.
* Implement and follow best practices for data quality, metadata management, logging, and error handling.
* Collaborate with data architects, BI teams, and business stakeholders to translate business needs into technical designs.
* Support pipeline monitoring, troubleshooting, and production deployments.
* Ensure compliance with security standards, access management (IAM/role-based), and cloud governance guidelines.

Required Skills & Experience
* Strong experience with Azure ADF (pipelines, dataflows, triggers) and ADLS.
* Proficiency in Python for data processing, automation, and orchestration.
* Hands-on experience with Snowflake (SnowSQL, Streams, Tasks, Stages, Warehouses).
* Strong SQL skills for complex queries, transformations, and performance tuning.
* Experience building ETL/ELT pipelines using cloud-native tools.
* Understanding of data modeling and data warehousing concepts (star/snowflake schema).
* Familiarity with CI/CD pipelines, Git-based workflows, and DevOps practices.

Good to Have
* Knowledge of Databricks or PySpark
* Experience with monitoring tools (Azure Monitor / Log Analytics)
* Exposure to API development or integration
* Experience with Airflow or similar orchestration tools
* Knowledge of cloud cost optimization
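To illustrate the kind of data-quality and error-handling discipline the responsibilities above call for, here is a minimal, self-contained Python sketch of a reusable transformation step. The function and record fields are hypothetical examples, not part of the role's actual codebase:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")

def transform_records(rows):
    """Validate and transform raw records, logging and quarantining bad rows
    instead of failing the whole batch (a basic data-quality gate)."""
    good, bad = [], []
    for i, row in enumerate(rows):
        try:
            # Required-field and type checks act as the quality gate.
            amount = float(row["amount"])
            if amount < 0:
                raise ValueError("negative amount")
            good.append({
                "id": row["id"],
                "amount": round(amount, 2),
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            })
        except (KeyError, ValueError) as exc:
            # Bad rows are logged and collected for later inspection.
            logger.warning("row %d rejected: %s", i, exc)
            bad.append({"row_index": i, "error": str(exc)})
    return good, bad

good, bad = transform_records([
    {"id": 1, "amount": "19.99"},
    {"id": 2, "amount": "-5"},   # rejected: negative amount
    {"id": 3},                   # rejected: missing amount field
])
```

In practice a step like this would sit inside an ADF-orchestrated Python activity, with the quarantined rows written to a reject table or ADLS path for monitoring.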
Our company fosters a values-driven and people-centric work environment that enables our employees to:
* Accelerate growth, both professionally and personally
* Impact the world in powerful, positive ways, using the latest technologies
* Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
* Unlock global opportunities to work and learn with the industry’s best
Let’s unleash your full potential at Persistent!