Robert Walters

Backend Engineer - TypeScript

On site

London, United Kingdom

£85,000 / year

Full Time

24-04-2025

Job Specifications

A leading asset management firm is seeking a Senior Backend Engineer with expertise in TypeScript to join their dynamic cybersecurity team. This role offers the opportunity to work on cutting-edge cloud-based cybersecurity services, protecting hundreds of thousands of clients across the UK and Europe.
What you'll do:
Specialise in TypeScript at an expert level
Develop and enhance modern cloud-based cybersecurity services across all layers
Build solutions to detect and prevent account takeover attempts by bad actors
Improve the efficiency and resilience of cloud services
Drive continuous improvement of technical standards, tools, and processes
Collaborate with the Product Owner to transform business needs into technical requirements
Manage deployment and operations across development, testing, and production environments
What you bring:
Expertise in TypeScript and JavaScript, building and consuming web services with Node.js; familiarity with NestJS and microservices-based architectures is a plus
Experience with cloud computing and cloud services (AWS preferred) and with Terraform
Solid understanding of REST APIs and good understanding of cloud security principles
Excellent communication skills, with the ability to bridge the gap between technical and non-technical stakeholders
Knowledge of agile development methodologies; experience with the scrum framework is preferred
A passion for continuous learning and development, both technically and non-technically
Robert Walters Operations Limited is an employment business and employment agency and welcomes applications from all candidates.

About the Company

Robert Walters is the world’s most trusted talent solutions business. Across the globe, we deliver recruitment, recruitment process outsourcing and advisory services to organisations of all shapes and sizes, opening doors for people with diverse skills, ambitions, and backgrounds. The businesses we partner with want to make things happen. And they need people to do it. They have goals. They have challenges. They want answers. We deliver the talent solutions they need to reach their goals. That might mean recruiting ...

Related Jobs

Roku
Senior Software Engineer, Python Automation
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the role
With so many people using our products globally, we’ve become well-known for products that “just work” right out of the box and integrate almost by magic. That doesn’t happen by accident, which is why we are committed to making sure our products aren’t just intuitive, they’re obvious. To support that commitment, the Roku Ads Test Automation Team focuses on testing ad products and features for the industry’s most reliable streaming media platform. Our goal is to help people find what they want and make it easier for people to stream. We accomplish this using state-of-the-art technology and engineering, with the customer at the centre of all that we do. We are seeking an experienced and versatile Software Development Engineer in Test to own the quality of ad features on the Roku platform. You will be responsible for the end-to-end execution of ad products, which includes cross-team collaboration for feature testing, developing test plans, coordinating testing with manual QA, creating an automation strategy, deploying libraries and features, and more.
You should be able to represent automation and QA concerns in meetings with cross-functional project team members and provide valuable end-user feedback to improve the customer experience. This position requires a solid understanding of the software development life cycle, experience with a variety of testing techniques, strong debugging, written and organizational skills, and automation experience.
About the team
Our team works on qualifying all ads products and features on the Roku platform. You will be joining a talented, high-performance team of SDETs with a history of delivery. We are looking for someone who can help us keep up this pace and continue delivering high quality as we grow.
What you'll be doing
Own and execute feature testing; create test plan documentation; collaborate with developers, the product lead, and manual QA
Develop automated tests that run on Roku players and TVs
Convert manual test cases into reliable, repeatable automated tests
Contribute to the Continuous Integration pipeline by running component builds, creating and running deployment jobs on individual stages in Jenkins, and running automated functional tests
Debug failing tests to improve product and automated test quality
Promote coding conventions and standards for code reusability and cleanliness
Conduct code reviews for improved code quality and optimization
We are excited if you have
5+ years of software engineering experience
3+ years of hands-on experience with automation systems and unit testing (Python)
Strong problem-solving, analytical and technical troubleshooting skills
Solid knowledge and experience developing test plans and test cases
Strong debugging skills
Excellent verbal and written communication skills
Research and documentation skills
Ability to learn new technologies quickly
Ability to work independently and be self-directed
Bachelor's degree in Computer Science or a related field
Experience with big data analytics: Splunk, ELK, Hive, Redshift, etc. (nice to have)
In-depth knowledge of streaming back-ends and formats (nice to have)
Experience working with Smart/Digital TV (HDMI), set-top boxes, Wi-Fi (2.4GHz & 5GHz), and TV remote controllers (nice to have)
International product experience (nice to have)
Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.
The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer.
That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
Cambridge, United Kingdom
Hybrid
Full Time
01-05-2025
Octopus Energy
Data Engineer (Mid-Level)
London, UK | Octopus Energy UK – Procurement / Full-time / Hybrid
The Energy Markets team at Octopus Energy is responsible for making sure that we always have the electricity and gas we need to support our customer demand whilst also supporting the grid to enable the Net Zero transition. To achieve this mission across all Octopus international regions, we have sub-teams focused on forecasting energy demand and generation, hedging and shaping our trade position, tracking and reporting the ongoing risk to Octopus, and driving the proportion of our supply sourced directly from generators via PPA agreements. The Engineering sub-team owns our global technical platform that supports these different processes and drives forward long-term solutions to enhance Group capabilities. We are looking for a Data Engineer to help achieve this goal - ideally someone who is comfortable diving into different tasks to support each team using a variety of coding languages across our platform setup, who enjoys developing relationships across the company while explaining technical processes in the most appropriate way, and who keeps an eye on scalable solutions to support data growth. This is therefore an exciting opportunity to take on a role that combines complex data engineering, visual analytics and business-critical need. What You'll Do...
Supporting different Energy Markets teams to design and build key operational and reporting pipelines across all Octopus Energy regions
Taking responsibility for the maintenance of these critical data pipelines supporting core trading, forecasting, risk and PPA processes
Developing automations and alerts to quickly debug where these pipelines are failing or showing unprecedented trends
Setting up and maintaining processes for capturing, preparing and loading valuable new data into the data lake
Designing and building dashboards that cover operational processes and reporting requirements
Working with international teams across the Octopus Energy Group to ensure everyone shares the best possible practices and code is standardised where possible
Taking ownership of data platform improvements that enhance the capabilities for all Energy Markets teams and drive trust in the stability of the setup
Sharing, enhancing and upskilling team members on available tools and best practices.
What You'll Need...
Strong aptitude with SQL, Python and Airflow
Experience with Kubernetes, Docker, Django, Spark and related DevOps monitoring tools (e.g. Grafana, Prometheus) a big plus
Experience with dbt for pipeline modelling also beneficial
Skilled at shaping needs into a solid set of requirements and designing scalable solutions to meet them
Able to quickly understand new domain areas and visualise data effectively
Team player excited at the idea of ownership across lots of different projects and tools
Passion for driving towards Net Zero
Drives knowledge sharing and documentation for a more effective platform
Open to travelling to Octopus offices across Europe and the US.
Our Data Stack:
SQL-based pipelines built with dbt on Databricks
Analysis via Python Jupyter notebooks
PySpark in Databricks workflows for heavy lifting
Streamlit and Python for dashboarding
Airflow DAGs with Python for ETL, running on Kubernetes and Docker
Django for custom app/database development
Kubernetes for container management, with Grafana/Prometheus for monitoring
Hugo/Markdown for data documentation
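The pipeline monitoring and alerting responsibility described above can be sketched in plain Python: flag a run whose row count deviates sharply from recent history. This is an illustrative sketch only, not Octopus Energy's actual tooling; the function name, metric, and threshold are all hypothetical.

```python
from statistics import mean, stdev

def flag_anomalous_run(history: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Return True when the latest pipeline row count deviates sharply
    from recent runs (a simple z-score check; real alerting would also
    cover failures, lateness, and schema drift)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# A run that loads far fewer rows than usual gets flagged.
recent = [10_200, 9_800, 10_050, 10_400, 9_950]
print(flag_anomalous_run(recent, 1_200))   # → True
print(flag_anomalous_run(recent, 10_100))  # → False
```

In a production setup a check like this would typically run as an Airflow task after each load and page the on-call engineer rather than print.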
London, United Kingdom
Hybrid
Full Time
01-05-2025
Abound
Data Scientist
About The Role
We’re on a mission to make affordable loans available to more people. Using the power of Open Banking, we have built state-of-the-art technology that allows us to look beyond traditional credit scores and offer fairer credit to people ignored by traditional lenders.
We have two parts of our business. On the consumer side, we have Abound. Abound has proven that our approach works at scale, with over £650 million lent to date. While other lenders only look at your credit score, we use Open Banking to look at the full picture – what you earn, how you spend, and what’s left at the end.
On the B2B side, we have Render. Render is our award-winning software-as-a-service platform that allows Abound to make better, less risky lending decisions. And less risky decisions mean we can offer customers better rates than they can usually find elsewhere. We’re taking Render global so that more companies, from high-street banks to other fintechs, can offer affordable credit to their customers.
The data science team, currently 8 members, focuses on pricing, classification of Open Banking data and credit decisioning. All data scientists actively contribute to building Render by being embedded in the tech team.
What You'll Be Doing
Develop, implement and maintain advanced AI and machine learning models to improve credit decisioning, risk and affordability assessments
Analyse large datasets of Open Banking data to extract insights on customer financial behaviour and affordability
Collaborate with cross-functional teams to transform data insights into pioneering solutions, addressing complex technical challenges that set new industry standards and drive product strategy and growth
Design and implement scalable data analytics infrastructure to support Abound's rapid growth
Contribute to the development and refinement of the Render technology platform
Stay abreast of industry trends in AI, machine learning, and fintech to drive innovation
Who You Are
You have an advanced degree (Master's or Ph.D.) in Data Science, Machine Learning, Statistics, or a related field
You possess 1-2 years of experience in a data science role, preferably related to credit risk or finance
You're proficient in SQL and Python
You have a strong background in statistical modelling, machine learning algorithms, and data mining techniques (NLP is a plus)
You're passionate about leveraging AI and data to improve financial inclusion and access to fair credit
You have excellent communication skills and can translate complex data insights for both technical and non-technical stakeholders
You're adaptable, innovative, and thrive in a fast-paced, high-growth environment
Experience with AWS is a plus
What We Offer
Everyone owns a piece of the company - equity
Hybrid with 3 days a week in the office
25 days’ holiday a year, plus 8 bank holidays
2 paid volunteering days per year
One month paid sabbatical after 4 years
Employee loan
Free gym membership
Save up to 60% on an electric vehicle through our salary sacrifice scheme with Loveelectric
Team wellness budget to be active together - set up a yoga class, a tennis lesson or go bouldering
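The "what you earn, how you spend, and what's left at the end" idea described in the role can be illustrated with a toy affordability calculation. This is a hypothetical rule-of-thumb sketch, not Abound's actual credit-decisioning model; the function name, fields, and buffer ratio are all invented for illustration.

```python
def affordable_monthly_payment(monthly_income: float,
                               monthly_spend: float,
                               buffer_ratio: float = 0.5) -> float:
    """Toy affordability estimate from Open Banking-style aggregates:
    take what's left after spending, then keep a safety buffer.
    (Illustrative only; a real model would use full transaction data.)"""
    surplus = monthly_income - monthly_spend
    if surplus <= 0:
        return 0.0  # no surplus means nothing is affordable
    return round(surplus * buffer_ratio, 2)

# Someone earning £2,400 and spending £1,900 has £500 left;
# with a 50% buffer, £250/month is treated as affordable.
print(affordable_monthly_payment(2400, 1900))  # → 250.0
```

A production system would replace the two aggregates with categorised transaction streams and feed the result into a statistical model rather than a fixed ratio.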
London, United Kingdom
Hybrid
Full Time
01-05-2025
hackajob
Data Scientist
hackajob is collaborating with Zaizi to connect them with exceptional tech professionals for this role.
Job Title: Data Scientist
Location: London - hybrid, with some flexibility (1 day)
Employment: [Perm / Contract]
Salary bracket: £45k-£65k
Introduction
Zaizi is a software consultancy specialising in building bespoke digital solutions using open-source software and cloud platforms. We primarily work with central government agencies and adhere to the Government Digital Service standard. We are looking for a product-focused Data Scientist to join our engineering team, passionate about delivering high-quality data-driven solutions.
Role Overview
As a Data Scientist at Zaizi, you will be responsible for analysing data to find and bring value to our clients. This involves a mixture of business acumen and technical expertise in data science disciplines. You will be digging through data to deliver data-driven, machine learning solutions.
Responsibilities
Take ownership of projects and initiatives
Deliver at pace and have a "get things done" attitude
Develop both scrappy exploratory code and code that adheres to best practices and production-ready standards
Design and implement workflows
Optimise data pipelines to improve clients' performance and scalability whilst maintaining their data integrity
Monitor and ensure observability of deployed solutions
Collaborate with technical and non-technical team members
Apply various machine learning techniques, understanding their benefits and drawbacks
Support Zaizi in responding to ML bid requirements
Choose the appropriate machine learning technique for the task at hand
Understand when simpler solutions are more effective than complex ones
Work within a multidisciplinary team and collaborate with other teams
Implement and follow MLOps best practices with deep learning frameworks
Deploy models to production, ideally as serverless applications
Understand and implement data protection and security practices
Familiarity with cloud platforms
Familiarity with CI/CD pipelines and cloud-based deployment
Requirements
Action-oriented and eager to take ownership of projects and initiatives
Start-up mindset
A problem solver with the ability to be client-facing
Ability to switch from exploratory code to production-ready code
Passion for monitoring and observability of deployed solutions
Proactive and collaborative approach to problem-solving
Ability to effectively communicate technical concepts
Broad knowledge of various machine learning techniques
Understanding of when simpler solutions are more effective
Team player who loves working within a multidisciplinary team
Experience
Strong proficiency in Python and data science libraries like scikit-learn or equivalent
Experience with SQL/Databricks for data exploration
Ability to explain data science concepts and solutions to broad audiences
Proven record of deploying models to production, ideally as serverless applications
Desirable
Experience with containerization and orchestration technologies like Docker and Kubernetes (K8s)
Experience with LLMs
Nobody checks every box, and we don’t expect you to! If this role excites you, we encourage you to apply, even if your experience doesn’t perfectly align with every requirement.
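The "simpler solutions are more effective" point above is commonly checked with a baseline: before reaching for a complex model, compare it against a majority-class predictor. A minimal sketch (the function name and data are illustrative, not from Zaizi):

```python
from collections import Counter

def majority_baseline_accuracy(labels: list[int]) -> float:
    """Accuracy of always predicting the most common class --
    any model worth deploying should beat this number."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# On a skewed test set the do-nothing baseline already scores 0.9,
# so a model reporting 88% accuracy is worse than no model at all.
labels = [0] * 90 + [1] * 10
print(majority_baseline_accuracy(labels))  # → 0.9
```

Reporting a model's lift over this baseline, rather than raw accuracy, is one way to communicate results clearly to non-technical stakeholders.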
London, United Kingdom
Hybrid
Full Time
01-05-2025