Optima Dev

Senior Data Engineer (95% Remote)

Hybrid

Oxford, United Kingdom

£ 82,000 / year

Full Time

16-01-2025


Job Specifications

You’re probably the kind of engineer who loves rolling up their sleeves and still getting stuck into coding. You’ll probably like the sound of having no legacy tech to worry about – and you’re probably looking for the chance to actually influence decisions.

Well, this is the role for you.

You’d be joining a team who’ve built their data infrastructure from scratch over the last few years. Even better, it’s all running smoothly – but they don’t want to stand still.

As they continue to grow, they’re looking to take things to the next level and mature their infrastructure – so you’ll come in to help with reliability and stability.

They already have plans to form a Central Data Hub (which you’ll play a big part in) and to establish a larger data mesh.

Your focus will be on all things data processing within Databricks, ingestion pipelines, and DataOps/DevOps.

Tech-wise, you’d be working with PySpark/Python, Azure, Kubernetes and Terraform (IaC). Ideally, you’ll have exposure to most of it.

As it stands, their data team is small – just one other Data Engineer at the moment. So you’ll get the chance to put your own stamp on things and take ownership of your work.

Salary-wise, they’ll pay anywhere from £70,000–£82,000 DOE. It’s majority remote – heading into Oxford once every couple of months or so.

They can interview this side of Christmas too.

Get in touch with Jack Leeming @ Optima Dev for a chat.

You need to be UK-based, and they can't offer sponsorship.

About the Company

We set up Optima Dev because, quite simply, we were tired of agencies overcomplicating things. Recruitment should be a pain-free, transparent process for everybody, and that's what we're striving towards. So, what's the best way to achieve this? Keep things simple. When it comes to advertising our roles, instead of copying and pasting a shopping list of requirements (which tells you nothing about the role), we'll do our best to put together an engaging advert which actually tells you why you should apply - and in what ma...

Related Jobs

Company Name
Saragossa
Job Title
Senior Data Engineer – Commodities Trading – £130,000 Salary + Bonus
Job Description
Get stuck in immediately to a database migration using Snowflake. This company works across various energy and commodities markets worldwide. We appreciate that not everyone wants to work within oil and gas trading; however, part of the data team's role is to look at more sustainable options for trading all kinds of commodities products.

You're going to be getting involved with a number of newly launched data projects, with your initial project being this migration. You'll face off with the business (Heads of Desk, Traders, Analysts), understand what they need, discuss solutions with the Data Science team, then build out the best solution possible – whether with an off-the-shelf product, or by building it completely from scratch using primarily Python and SQL.

The data team has grown over the past 12-18 months, with data engineering still being built out in London. There's a strong opportunity to take on leadership responsibilities, so if management is in your sights and ambitions, you'll be able to achieve that here.

The team are using more advanced technology as time goes on, and you'll be able to suggest potential tools to use. Snowflake is one example of this: it was recently brought in at the suggestion of a team member and is now widely used. Alternatively, if there's an off-the-shelf tool within AWS that you feel is a better option, you can use that. There really is plenty of technical freedom here.

In terms of your technical experience, you'll need to have worked in a commercial data engineering role for a few years – this is a mid-level position. Strong Python, Snowflake and SQL experience will be required, and any experience working with tools like Docker/Kubernetes and AWS would be a big plus. Commodities experience/knowledge is not required but would be a plus. This is a global commodities firm with a strong history of performance and revenue.
Your starting salary will be up to £130,000 plus a performance-related bonus. Benefits include medical, dental and life insurance, wellness programmes, pension, generous parental leave and various other perks. Want to make sure data has an impact on the future of commodities trading? Get in touch – no up-to-date CV required.
London, United Kingdom
On site
Full Time
16-01-2025
Company Name
nisien.ai
Job Title
Data Scientist
Job Description
Make the Internet Safer: Join Nisien as a Data Scientist

Help build cutting-edge AI solutions to combat online harms and shape the future of digital safety. We are looking for a proactive, passionate, and highly collaborative Data Scientist to help us build greenfield products aimed at safeguarding the internet.

About Us
Nisien is an AI startup with a mission to harness artificial intelligence as a force for good, specifically to detect and counter online harms. We’re a newly established company spun out of Cardiff University’s HateLab, led by founders with expertise in combating hate, AI, and cybersecurity. With a leadership team boasting a proven track record in securing funding, scaling operations, and successfully exiting startups, we are well-positioned to drive meaningful growth and innovation in this critical space. The advent of the UK Online Safety Act presents a unique opportunity for us to make a tangible impact, addressing urgent challenges in online safety and fostering a healthier digital ecosystem.

About the Role
As a Data Scientist at Nisien.ai, you'll develop AI models to detect and prevent online harms, working with engineers and policy experts to build scalable tools for content analysis and insights.

Key Responsibilities:
Design and optimize ML models to detect online harms (misinformation, hate speech, harmful behavior)
Analyze data and create visualizations to uncover online harm patterns
Develop NLP solutions for text classification and content moderation
Handle large datasets from social media and digital platforms
Work with teams to integrate AI solutions and define requirements
Track AI research advances in online safety and digital ethics
Help build scalable data infrastructure for rapid tool development

This is an exciting opportunity to make a significant impact on a project from the ground up while working in a supportive and innovative environment.
Requirements:
Bachelor's/Master's in Computer Science, Data Science, Statistics, or a related field
Strong Python proficiency with deep learning frameworks (PyTorch, TensorFlow)
Experience with large-scale data processing and SQL
Proven track record of deploying ML models to production with cloud platforms (AWS, Azure)

Nice to Have:
PhD in a relevant field
Experience working with datasets related to social media, content moderation or cyber risk

Ready to Make an Impact? Apply now to join our mission of making the internet safer through AI innovation.
Cardiff, United Kingdom
On site
Full Time
17-01-2025
Company Name
Future Talent Group
Job Title
Senior Ruby Engineer - (SaaS / Tech4Good / GIS Data) - Remote-First Team
Job Description
Salary: £80,000 - £85,000
Location: Remote, with team meet-ups in the UK several times per year
Industry: GIS Data and PropTech

Would you like to join a growing scale-up that is not only disrupting an entire industry in a positive way but also creating real benefits for its platform users? You’ll work alongside leading specialists in GIS data, Data Engineering, and SaaS to build cutting-edge solutions.

Responsibilities:
Develop and maintain robust, scalable, and secure Ruby-based applications
Build APIs and integrations to power our platform and deliver seamless user experiences
Collaborate with cross-functional teams, including Product, Design, and DevOps, to ship features quickly and efficiently
Write clean, maintainable, and testable code following best practices and coding standards

Technical Skills:
Proven expertise as a Ruby/Ruby on Rails developer in a fast-paced environment
Strong understanding of RESTful APIs, database design, and system architecture
Experience working as part of an agile team, utilising TDD and DevOps practices
Proficiency in modern development tools and practices (e.g., Git, CI/CD pipelines)
Knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus
Familiarity with cloud platforms, ideally AWS

If this opportunity excites you, please apply or feel free to reach out directly at scott@futuretalent.io
Bristol, United Kingdom
Hybrid
Full Time
17-01-2025
Company Name
Infinity Quest
Job Title
Data Engineer
Job Description
Job Description: As a Data Engineer with Iceberg experience, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives.

Key Responsibilities:
Data Integration: Develop and maintain data pipelines to extract, transform, and load (ETL) data from various sources into AWS data stores, for both batch and streaming ingestion
AWS Expertise: Use your expertise in AWS services such as Amazon EMR, S3, AWS Glue, Amazon Redshift, AWS Lambda, and more to build and optimize data solutions
Data Modeling: Design and implement data models to support analytical and reporting needs, ensuring data accuracy and performance
Data Quality: Implement data quality and data governance best practices to maintain data integrity
Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and storage solutions to ensure optimal performance
Documentation: Create and maintain comprehensive documentation for data pipelines, architecture, and best practices
Collaboration: Work with cross-functional teams, including data scientists and analysts, to understand data requirements and deliver high-quality data solutions
Automation: Implement automation processes and best practices to streamline data workflows and reduce manual intervention
Iceberg: Experience working with big data ACID file formats to build a delta lake, particularly Iceberg file formats and Iceberg loading methods; good knowledge of Iceberg functionality, using its delta features to identify changed records and carrying out optimization and housekeeping on Iceberg tables in the data lake
Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python
Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
5 to 8 years of experience in data engineering, including working with AWS services
Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR
Knowledge of Cloudera-based Hadoop is a plus
Strong ETL development skills and experience with data integration tools
Knowledge of data modeling, data warehousing, and data transformation techniques
Familiarity with data quality and data governance principles
Strong problem-solving and troubleshooting skills
Excellent communication and teamwork skills, with the ability to collaborate with technical and non-technical stakeholders
Knowledge of best practices in data engineering, scalability, and performance optimization
Experience with version control systems and DevOps practices is a plus
Milton Keynes, United Kingdom
Hybrid
Full Time
16-01-2025