Data Engineer
Vericence
About Vericence
Vericence is a digital engineering and technology consulting firm helping enterprises build AI-driven platforms, modernize legacy systems, and scale innovation through cloud, data, and intelligent engineering. We partner with global organizations to deliver high-impact technology solutions and build world-class engineering teams.
Core Skills & Experience
* Strong hands‑on experience with Python and PySpark, including deep understanding of Spark concepts, performance tuning, and scalable data processing
* Experience with modern data platforms such as Snowflake, Databricks, or similar cloud‑based solutions
* Solid understanding of data pipeline design, development, orchestration, and monitoring for batch and streaming workloads
* Familiarity with Medallion Architecture (bronze, silver, gold layers) and best practices for data transformation
* Strong knowledge of autoscaling, cluster optimization, and cost‑efficient data processing in distributed/cloud environments
* Experience with CI/CD, DevOps practices, and version control; exposure to Terraform or Infrastructure‑as‑Code tools is preferred
* Ability to collaborate effectively with business, analytics, and engineering teams to deliver scalable and reusable data solutions
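The Medallion Architecture mentioned above can be sketched in a few lines. This is a conceptual illustration only, using plain Python and hypothetical records and column names; a real pipeline would operate on PySpark DataFrames on a platform like Databricks or Snowflake.

```python
# Conceptual sketch of Medallion layers (hypothetical data; a real
# pipeline would use PySpark DataFrames, not dicts).

# Bronze: raw ingested records kept as-is, including malformed rows.
bronze = [
    {"order_id": "1", "amount": "10.50", "region": "EU"},
    {"order_id": "2", "amount": "bad", "region": "US"},
    {"order_id": "3", "amount": "4.25", "region": "EU"},
]

def to_silver(rows):
    """Silver: validate and type-cast; drop rows that fail parsing."""
    silver = []
    for row in rows:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # reject rows with unparseable amounts
    return silver

def to_gold(rows):
    """Gold: business-level aggregate, e.g. revenue per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # {'EU': 14.75} — the invalid US row was dropped at silver
```

The same bronze/silver/gold separation applies whether the layers are Delta tables, Snowflake schemas, or cloud-storage zones: raw data is preserved untouched, validation happens once, and consumers read only the curated gold layer.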
Preferred Skills: Oracle APEX & OCI
* Hands‑on experience with Oracle APEX and PL/SQL for building and maintaining database‑driven applications
* Experience creating APEX forms, reports, dashboards, and workflows
* Strong understanding of Oracle Database objects, including tables, views, procedures, packages, and scheduled jobs
* Experience supporting workflow or orchestration applications, including job tracking, retries, error handling, and status monitoring
* Ability to integrate Oracle APEX / Oracle Database with REST APIs
* Familiarity with Oracle Autonomous Database (preferred)
* Working knowledge of OCI services such as Object Storage, Data Flow, Functions, Logging, and Monitoring
* Experience troubleshooting cloud‑based data processing jobs, including configuration issues, logs, and data I/O problems
* Basic understanding of data pipelines, file ingestion, batch processing, data validation, and operational monitoring
* Exposure to Python, PySpark, Spark SQL, Git, CI/CD, or Terraform is a plus
* Strong collaboration skills across business, operations, and engineering teams to deliver reliable data orchestration solutions
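The job tracking, retries, and status monitoring described above typically follow a common pattern: retry transient failures with backoff and record a final status. The sketch below is a minimal, hypothetical illustration in Python; function and parameter names are invented for this example and do not reflect any specific Oracle APEX or OCI API.

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run a job callable, retrying on failure with exponential backoff.

    Returns (status, attempts). Hypothetical interface for illustration;
    an APEX/PL/SQL orchestrator would persist status to a tracking table.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            job()
            return "SUCCEEDED", attempt
        except Exception:
            if attempt == max_attempts:
                return "FAILED", attempt
            sleep(base_delay * 2 ** (attempt - 1))  # backoff: 1s, 2s, 4s...

# Example: a job that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")

status, attempts = run_with_retries(flaky_job, sleep=lambda s: None)
print(status, attempts)  # SUCCEEDED 3
```

In a database-driven orchestrator the same loop would update a job-status row on each attempt, which is what makes dashboards and operational monitoring possible.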