Data Engineers x 5
About cloudandthings.io
At cloudandthings.io, we are an engineering-led consultancy focused on building modern Data and AI Platforms for enterprise clients.
We don’t just design architectures; we build real systems that power analytics, decision-making, and AI at scale. Our team acts as a force multiplier: accelerating delivery, improving quality, and unlocking measurable business value. We operate with high standards, strong ownership, and a bias for execution.
If you care about solving meaningful problems and building production-grade systems, you’ll fit right in.
We’re looking for Data Engineers across all levels (Junior to Principal) to join our growing Data, Analytics and AI capability.
Overview
As a Data Engineer, you will design, build, and operate modern, cloud-native data platforms across AWS and Azure, leveraging Databricks and Microsoft Fabric. You will work across the full data lifecycle (ingestion, transformation, modelling, and serving), enabling real-time analytics, reporting, and AI use cases.
Key Responsibilities
While the list below is long, an ideal candidate will have working knowledge of, and experience with, many of these tools and services. The requirements of each project differ over time; these skills provide an overview of what may typically be required of a Data Engineer.
Software Engineering Foundations
* Strong grounding in software engineering fundamentals (data structures, algorithms, design patterns).
* Proficiency in Python and SQL (additional languages advantageous).
* Experience with Git, CI/CD pipelines, and modern development practices.
* Familiarity with Terraform or Bicep for Infrastructure as Code.
* Comfortable working in Linux-based environments.
Data Platform Engineering (Multi-Cloud)
Data Ingestion and Streaming
* Build scalable ingestion pipelines across hybrid and cloud environments.
* Real-time streaming: AWS Kinesis / MSK (Kafka), Azure Event Hubs / Kafka
* Batch ingestion: AWS DataSync, DMS, Azure Data Factory / Synapse Pipelines / Fabric Pipelines
* Integration via APIs, JDBC/ODBC, and CDC pipelines.
Storage, Lakehouse and Fabric
* Design and manage data lakes using: Amazon S3, Azure Data Lake Storage Gen2 (ADLS)
* Implement lakehouse architectures using: Databricks (Delta Lake, Unity Catalog), Microsoft Fabric (OneLake, Lakehouse, Warehouse)
* Work with modern data formats: Parquet, Avro, JSON
* Experience with: Relational databases (Postgres, SQL Server, Aurora), NoSQL (DynamoDB, Cosmos DB), Caching (Redis)
Data Processing and Transformation
* Build scalable ETL/ELT pipelines using: Databricks (PySpark, Delta Live Tables, Workflows), AWS Glue / Lambda, Azure Databricks / Synapse Spark / Fabric Data Engineering
* Implement medallion architecture (Bronze/Silver/Gold).
* Develop reusable, testable, and production-grade data pipelines.
Analytics and AI Enablement
* Design platforms that support: Business Intelligence and advanced analytics, Machine learning and AI use cases
* Work with: Amazon Redshift / Athena, Microsoft Fabric (Semantic Models, Direct Lake), Power BI, Databricks SQL & ML capabilities
* Support: Feature engineering, Data science workflows, Real-time decisioning systems
* Implement data quality, observability, and lineage frameworks.
Security, Governance and Compliance
* Implement secure, enterprise-grade data platforms: AWS IAM / Azure Entra ID (AAD), RBAC, Managed Identities
* Governance: Databricks Unity Catalog, Microsoft Purview, AWS Lake Formation
* Networking: VPC / VNets, Private Endpoints, Direct Connect / ExpressRoute
* Encryption: KMS / Key Vault / TLS
Orchestration and Operations
* Build orchestrated pipelines using: Databricks Workflows, AWS Step Functions / MWAA (Airflow), ADF / Synapse / Fabric Pipelines
* Monitoring & observability: Cloud-native monitoring tools (CloudWatch, Azure Monitor)
* Apply best practices across: Reliability, Performance optimisation, Cost optimisation (FinOps)
Requirements
* Bachelor’s degree in Engineering, Computer Science, or a related field.
* Proven track record of designing and implementing data solutions.
* Knowledge of and experience with AWS and/or Azure cloud infrastructure and services.
* Certifications, such as AWS, Microsoft, or Databricks certifications.
* Any other data-related experience, e.g. working with Hadoop, databases, or analytics software.
* Experience with Docker, containers, Kubernetes, and CI/CD pipelines for data.
* Knowledge of data security and compliance standards.
* Willingness to learn and expand knowledge related to Cloud and Data Technologies.
* Strong problem-solving and analytical skills.
* Self-organising with the ability to prioritise and manage multiple tasks simultaneously.
* Excellent verbal and written communication skills.
* Ability to work collaboratively with clients and team members.
* Willingness to travel to clients as and when required.
What We Offer
* An engineering culture and an environment where ideas are heard and builders can build.
* Competitive compensation and bonus structure.
* A flexible and supportive work environment that values diversity, work-life balance, and personal growth.
* Opportunities for career advancement, and ongoing learning and professional development to enhance your skills.
* Engaging with cutting-edge technologies and awesome client projects.
* Access to a talented team of professionals and mentors.
If you have not heard back from us within 30 days, please consider your application unsuccessful. However, we'd love for you to keep an eye out for future opportunities and continue to apply.