
Data Engineer
On site
Windsor, United Kingdom
£500 / day
Freelance
20-02-2025
Job Specifications
We are partnered with a leading global consultancy that is searching for contractors with the following skill sets to work on a LONG-TERM contract within the ENERGY sector:
ROLE 1:
Role: Data Engineer (Spark, Kafka)
Location: Windsor
Style: Hybrid
Rate: up to £500 per day (inside IR35)
Duration: 6 months initially, with a view to extend
Key responsibilities:
Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth.
Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency.
Collaborate with development teams to integrate Kafka into applications and services.
Develop and maintain Kafka connectors, such as the JDBC, MongoDB, and S3 connectors, along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights (see the sketch after this list).
Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance.
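For illustration only, here is a minimal sketch of the connector side of this work: registering a JDBC source connector with a Kafka Connect cluster over its REST API from Python. The Connect endpoint, connector name, database connection details, table, and topic prefix are placeholders, not details of this engagement.

```python
import requests

# Placeholder endpoint for a Kafka Connect worker (an assumption, not from this role).
CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "billing-db-source",  # hypothetical connector name
    "config": {
        # Confluent's JDBC source connector class.
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db:5432/billing",  # placeholder database
        "connection.user": "etl_user",
        "connection.password": "********",
        # Poll for new rows using an auto-incrementing key column.
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "invoices",
        "topic.prefix": "billing.",  # rows land on the topic "billing.invoices"
        "tasks.max": "1",
    },
}

# POST the connector definition; Connect validates the config and starts the task.
resp = requests.post(CONNECT_URL, json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```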
Key Skills:
Design, build, and maintain reliable, scalable data pipelines, covering data integration, data security, and compliance.
Monitor and manage the performance of data systems and troubleshoot issues.
Strong knowledge of data engineering tools and technologies (e.g. SQL, ETL, data warehousing)
Experience with tools such as Azure ADF, Apache Kafka, and Apache Spark SQL
Proficiency in programming languages such as Python and PySpark (a streaming sketch follows this list)
Good written and verbal communication skills
Experience managing business stakeholders to clarify requirements
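As a hedged illustration of the Spark-plus-Kafka skill set above, the sketch below reads a Kafka topic with PySpark Structured Streaming, parses JSON payloads, and appends the result to Parquet. The broker address, topic name, schema, and paths are invented placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Hypothetical message schema, for illustration only.
schema = StructType([
    StructField("meter_id", StringType()),
    StructField("reading_kwh", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Subscribe to a placeholder topic on a placeholder broker.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "meter-readings")
       .load())

# Kafka values arrive as bytes; cast to string and parse the JSON payload.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("r"))
          .select("r.*"))

# Append to Parquet; the checkpoint directory makes the file output restartable.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/meter_readings")
         .option("checkpointLocation", "/data/checkpoints/meter_readings")
         .outputMode("append")
         .start())
query.awaitTermination()
```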
ROLE 2:
Role: Hadoop Big Data Developer
Location: Windsor
Style: Hybrid
Rate: up to £400 per day (inside IR35)
Duration: 6 months initially, with a view to extend
Key responsibilities:
Work closely with the development team to assess existing Big Data infrastructure
Design and code Hadoop applications to analyze data compilations (a sketch follows this list)
Create data processing frameworks
Extract and isolate data clusters
Test scripts to analyze results and troubleshoot bugs
Create data tracking programs and documentation
Maintain security and data privacy
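For illustration, here is a minimal PySpark batch job of the kind a Hadoop Big Data Developer might write: it reads a Hive table, rolls readings up to daily totals, and writes the aggregate back to Hive. The database, table, and column names are placeholders, and the session assumes a configured Hive metastore.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# enableHiveSupport assumes a configured Hive metastore (an assumption here).
spark = (SparkSession.builder
         .appName("daily-usage-rollup")
         .enableHiveSupport()
         .getOrCreate())

# Placeholder Hive table of raw meter readings.
readings = spark.table("energy.meter_readings")

# Roll up to one row per meter per day.
daily = (readings
         .groupBy("meter_id", F.to_date("event_time").alias("reading_date"))
         .agg(F.sum("reading_kwh").alias("total_kwh")))

# Persist the aggregate back to Hive, replacing any previous run.
daily.write.mode("overwrite").saveAsTable("energy.daily_usage")
```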
Key Skills:
Build, schedule, and maintain data pipelines. Strong expertise in PySpark, Spark SQL, Hive, Python, and Kafka.
Strong experience in data collection and integration, scheduling, data storage and management, and ETL (Extract, Transform, Load) processes
Knowledge of relational and non-relational databases (e.g. MySQL, PostgreSQL, MongoDB); see the sketch after this list.
Good written and verbal communication skills
Experience managing business stakeholders to clarify requirements
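As a sketch of the relational-database side of this skill list, the snippet below pulls a table from PostgreSQL into a Spark DataFrame over JDBC, a typical extract step in an ETL pipeline. The connection details are placeholders, and the PostgreSQL JDBC driver jar must be on the Spark classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-extract").getOrCreate()

# Placeholder connection details for an illustrative CRM database.
customers = (spark.read
             .format("jdbc")
             .option("url", "jdbc:postgresql://db:5432/crm")
             .option("dbtable", "public.customers")
             .option("user", "etl_user")
             .option("password", "********")
             .option("driver", "org.postgresql.Driver")
             .load())

# Quick sanity checks on the extracted data.
customers.printSchema()
print(customers.count())
```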
If you are interested and have the relevant experience, please apply promptly and we will contact you to discuss it further.
Yilmaz Moore
Senior Delivery Consultant
London | Bristol | Amsterdam
About the Company
We are an independent recruitment consultancy driven to provide an excellent service to our clients and candidates. We take pride in our service and delivery and have built strong, transparent relationships with all our customers.
How we can help you: if you're looking to resource an entire project or to fill one strategic position, please get in touch on 020 3887 8700 / 0117 403 8888 or email your requirements to hello@macleanmoore.com. We pride ourselves on our speed of response and are looking forward to hearing from...
Related Jobs


Company Name: Revolution Technology Ltd
Job Title: Technical Data Engineer (London Market Insurance)
Job Description: Our client is a global consultancy working on a project in the London Market Insurance space. They are on the lookout for a Data Engineer to come in on a contract basis.
Key Skills/Requirements:
R, Python & SQL experience
Excel, Tableau & Power BI experience for data visualisation
Experience designing and implementing API integrations
Ideally, experience in the London Market Insurance space
The contract runs for 3 months initially with likely extensions, paying up to £425 p/day (inside IR35 via umbrella), and will be 2 days per week in Central London.


Company Name: Robert Walters
Job Title: AWS Data Engineer
Job Description: Location: Merseyside (hybrid, in office 2-3 times a week). Outside IR35. Duration: 6 months. Rate: £400-500 per day.
An AWS Data Engineer is required for 6 months to drive the strategic data landscape, design complex data pipelines, and collaborate with business functions to ensure value-driven data solutions.
Responsibilities:
Design and maintain scalable data pipelines for processing and transforming large data sets.
Oversee data integration and ETL processes, and ensure data consistency across multiple sources.
Implement data quality standards and governance policies, addressing any data issues.
Collaborate with various stakeholders and third-party providers, and mentor junior engineers, providing technical expertise.
Lead data projects, ensuring efficient execution and alignment with business goals.
The ideal candidate will have:
Extensive experience in data engineering, with a strong track record of designing and building complex data pipelines.
Strong expertise in databases (SQL and NoSQL), data processing (dbt), and data warehousing solutions (e.g. Amazon Redshift, Google BigQuery, Snowflake).
Familiarity with the AWS cloud platform and its data services.
Strong problem-solving skills and attention to detail.
Excellent communication and leadership skills to collaborate effectively with team members and stakeholders.
If interested in this AWS Data Engineer role, please apply with your latest CV for immediate consideration; applicants who live a commutable distance from Merseyside are preferred. Robert Walters Operations Limited is an employment business and employment agency and welcomes applications from all candidates.


Company Name: Access Computer Consulting Plc
Job Title: Data Architect Lead
Job Description: I am recruiting for a Data Architect Lead to be based in London 3 days per week, with 2 days remote. The role falls inside IR35, so you will be required to work through an umbrella company.
You will be responsible for creating a data strategy across Front and Back Office. The expectation will be to document and understand the current data architecture, then create a future-state design and a road map to get there, considering data governance, categorisation, tooling, and data platforms.
You must have substantial experience of working as a Data Architect. Experience of migrating and building enterprise data platforms in AWS is essential. You must have a background in data development or engineering. SQL expertise is required for this role. You will have knowledge of the full trade lifecycle. Any experience of the following is advantageous: Spring, Spring Boot, JPA, Hibernate, Maven, and Jenkins. Please apply ASAP to find out more!
Desired Skills and Experience: Data Architect, SQL, AWS


Company Name: eTeam
Job Title: Data Engineer
Job Description: Location: UK (remote). Duration: full-time contract, 6 months.
Summary:
We are looking for a Data Engineer with expertise in Python development who is passionate about cloud-based data engineering using AWS services and loves to build data solutions as part of a multi-disciplinary team. You would be working closely with digital product professionals, data scientists, cloud engineers, and others. You'll be a member of a global team working on a GenAI initiative, based in one of our European offices. Our client's Tech Ecosystem function is responsible for developing and delivering all technology solutions for the firm's internal use.
Responsibilities:
Work in a team of data engineers to develop data ingestion pipelines and to create and mature data processing capabilities that ingest data into a data system used by GenAI applications. Work includes, but is not limited to, creating Python code and tests, creating and modifying GitHub Actions CI/CD pipelines, and working with AWS-based infrastructure and Docker containers.
Skills:
3+ years of professional experience as a data engineer, with a strong focus on cloud-based data engineering using AWS services
Expertise in Python development
High coding standards: clean code, modularity, error handling, test automation, and more
Strong experience with relational databases
Very driven and very strong on execution and output, with a get-the-job-done attitude and the ability to figure things out independently; able to work in a complex and very fast-paced environment
Hands-on experience with Docker
Solid, demonstrable background in data pipeline performance and diagnostics
Interest in Generative AI and other ML topics
Kedro framework experience a plus
Holds their ground, opinionated, not afraid to speak up at any level
Familiarity with agile principles and product development
Excellent problem-solving skills and the ability to analyze and resolve complex data engineering challenges
Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment