Ascentia Partners

Data Analyst (Quality and Governance) - £60-£70k - London

Hybrid

London, United Kingdom

£70,000 / year

Full Time

07-03-2025

Job Specifications

Data Analyst (Quality & Governance) – Insurance

Are you a detail-driven Data Analyst with experience in the Insurance industry and a passion for turning data into insights that improve processes?

London - Hybrid

What You’ll Do:
Analyze and interpret complex data to create insights that help improve internal processes and stakeholder engagement
Develop Power BI dashboards (full stack: data modeling, DAX, visualization)
Write efficient SQL queries for data extraction and analysis
Work with data quality tools like Informatica to support the implementation of new processes (see the illustrative sketch below)
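
For illustration only, and not part of the original advert: a minimal sketch of the kind of data-quality check this role describes, written in Python with pandas. The table and column names (policy_id, premium, inception_date) are hypothetical and assume a simple policy extract.

import pandas as pd

def profile_quality(df: pd.DataFrame) -> pd.DataFrame:
    # Per-column completeness (% non-null) and cardinality (distinct values)
    return pd.DataFrame({
        "non_null_pct": (df.notna().mean() * 100).round(2),
        "distinct_values": df.nunique(),
    })

if __name__ == "__main__":
    # Hypothetical extract of an insurance policy table (illustrative data only)
    policies = pd.DataFrame({
        "policy_id": [1001, 1002, 1002, 1004],
        "premium": [250.0, None, 310.5, 180.0],
        "inception_date": ["2024-01-01", "2024-02-15", "2024-02-15", None],
    })
    print(profile_quality(policies))
    # Duplicate policy_id values would typically be flagged for review
    print(policies[policies.duplicated("policy_id", keep=False)])

In practice, checks like these would more likely run as SQL queries or within a tool such as Informatica, as the advert suggests; the snippet only illustrates the shape of the task.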

What We’re Looking For:
Proven experience in data analytics within UK Insurance (General Insurance, Reinsurance, MGAs, and Consultancy will all be considered)
Strong skills in SQL, Power BI (full stack), and Excel
Familiarity with data quality tools (e.g., Informatica)
Analytical mindset with great attention to detail

Ready to take the next step in your career? Apply today below.

#Hiring #DataAnalyst #InsuranceJobs #SQL #PowerBI

About the Company

Welcome to Ascentia Partners, preferred recruitment partner to FCA- and PRA-regulated industries. We foster a collaborative, consultative approach and believe in building strong relationships with our clients, understanding their unique needs, and tailoring solutions that align with their long-term goals. We're not just a service provider; we are your strategic partner, invested in your success as much as you are. We believe in delivering future-proof solutions, and our offerings are underpinned by significant reach into th...

Related Jobs

Company Name
scrumconnect ltd
Job Title
Software Engineer (SC Cleared)- Azure Data & DevOps
Job Description
About the Role
Scrumconnect Consulting is looking for a Software Engineer to work on a Strategic Data Platform, with expertise in Azure Data Factory (ADF), Python, PySpark, Java, Terraform, and Azure DevOps. The ideal candidate will have experience in cloud-based data engineering, automation, and infrastructure provisioning while working in an Azure environment. You will be responsible for developing scalable data pipelines, integrating cloud services, automating deployments, and supporting DevOps workflows.

Key Responsibilities:
1. Software Engineering
Develop, test, and deploy scalable data pipelines using Azure Data Factory (ADF), Python, PySpark, and Java.
Implement data transformation, ETL/ELT workflows, and data integration solutions.
Optimise data flow and performance for cloud-based data processing.
2. Cloud & Infrastructure Automation
Use Terraform to provision and manage Azure resources.
Implement infrastructure as code (IaC) best practices for automated cloud deployments.
Ensure efficient resource scaling and cost optimisation.
3. DevOps & CI/CD Automation
Collaborate with DevOps teams to build automated CI/CD pipelines in Azure DevOps.
Deploy and manage containerised applications using Docker and Kubernetes.
Monitor and troubleshoot build, deployment, and infrastructure issues.
4. Performance Optimisation & Security
Optimise data pipeline performance in Azure.
Implement cloud security best practices and ensure compliance with data governance policies.
Troubleshoot data- and infrastructure-related performance issues.
5. Collaboration & Documentation
Work closely with data engineers, cloud architects, and DevOps teams to design solutions.
Participate in agile ceremonies, sprint planning, and technical discussions.
Maintain technical documentation and best practices.

Required Skills & Experience:
Software Development & Data Engineering
Strong experience in Python, PySpark, Java, and Azure Data Factory (ADF).
Data Processing & Pipelines: ETL/ELT development, big data frameworks.
Cloud Services: Hands-on experience with Azure Data Lake, Synapse, and Azure Functions.
Infrastructure as Code (IaC) & Automation
Terraform expertise for Azure resource provisioning.
Experience in cloud infrastructure automation and DevOps workflows.
DevOps & CI/CD
Experience with Azure DevOps, Git, and YAML pipelines.
Ability to work with Docker, Kubernetes, and containerised applications.
Other Skills
Strong problem-solving and debugging skills.
Experience working in Agile/Scrum environments.
Excellent communication and collaboration skills.

Nice to Have:
Experience with Databricks, Apache Spark, or ML workloads.
Knowledge of security best practices in cloud environments.
Azure, Terraform, or DevOps-related certifications.
London, United Kingdom
Hybrid
Full Time
12-03-2025
Company Name
Gain Theory
Job Title
Principal Data Analyst
Job Description
Principal Data Analyst required to drive our analytics strategy and ensure our data insights align with business objectives. In this role, you will lead complex analytical projects, mentor a team of analysts, and collaborate across departments to deliver actionable recommendations. You will work as part of a client team supporting a Data Communications Lead on the collection, ingestion, and processing of marketing data, delivering cleaned data to the modeling team for analysis. The Principal Data Analyst is expected to be proficient in data processing techniques including SQL, ETL, and Python. You will leverage these skills for data interrogation, manipulation, and cleaning, as well as building and maintaining data pipelines. You will utilize our internal data automation tools to ensure efficient execution of these pipelines. The role also includes updating and creating new data processes and pipelines as required. Additionally, you will work with the Data Communications Lead to coordinate with client and agency contacts regarding the continued flow of data from relevant sources. You will interact with the wider data community within Gain Theory, especially with members of the Data Centre of Excellence (DCOE), to share best practices and provide and receive support. The Principal Data Analyst will also mentor and support junior analysts on their projects, helping them learn processes, best practices, and specific tools used by Gain Theory.

Responsibilities:
Data Management & Analysis: Manage data extraction, manipulation, validation, and interrogation using SQL, Python, and other relevant tools. Build data insights relevant to the project. Ensure all data is systematically checked and passes all QA steps.
Data Architecture: Execute and update data pipelines. Build data ingestion and transformation pipelines using available tools, including Python scripting for data processing and automation. Work with fellow data analysts to build scalable solutions using ETL/ELT pipelines.
Python Development: Develop and maintain Python scripts for data interrogation, cleaning, processing, and automation. Contribute to the development and improvement of our internal data processing tools and libraries.
Research & Development: Propose better approaches to improve internal procedures, including new methodologies. Share techniques and ideas with the wider data community.
Meetings: Organize and participate in internal project meetings, ensuring agendas are set and action points are shared. Lead internal meetings as required.
Mentorship: Guide and support junior analysts, providing training on tools and best practices.

Experience:
Comfortable working with large amounts of data in a cloud ecosystem.
Proficient with SQL and ETL processes, with experience driving robust QA processes.
Experience with data interrogation, cleaning, and processing using Python. Snowflake experience a plus.
Experience with data manipulation/visualization tools (e.g., Excel, Tableau). R experience a plus.
Extreme attention to detail a must.
Understanding or experience of business marketing and media a plus.
Strong interpersonal and communication (written and oral) skills.
Team-oriented attitude.
Capacity to learn new skills and master new tools.
Ability and desire to lead junior team members through mentorship and example setting.

Qualifications:
Background (3-4+ years) in Computer Science, Data Science, Data Engineering, Information Science, or a related quantitative field.
In-depth experience with all things data, including the ability to work with a variety of datasets from multiple sources, familiarity with standard data processing tools and concepts (e.g. SQL, NoSQL, ETL), and experience driving robust QA processes.
In-depth experience of the advertising ecosystem (e.g. ad trafficking, ad servers, DSPs, media strategy and activation, etc.) and a working knowledge of appropriate metrics, measurement, and reporting.
Required skillsets: Snowflake, Python, Git, AWS/Azure.
Can lead requirements gathering, project planning, and implementation of projects developed with DCOE leads.
Project management skills, including planning tasks and deliverables, managing timelines and risks, managing team resource allocation, and overseeing multiple simultaneous projects.
Ability to manage and motivate Gain Theory team members and to teach concepts or technologies that are developed.
Organized, detail-oriented, QA-focused.
Experience with DBT is highly valuable.
Excellent written, verbal, and presentation skills.

Values and Behaviors I Demonstrate
Joining Gain Theory means joining a group of people who live, breathe, and behave by our values:
Be Curious: continuously asking, understanding, learning, and developing.
Be Positive: approaching everything we do with a positive mindset and making a positive impact on each other.
Act with Consideration: seeing things from someone else’s perspective; respecting and embracing diverse thinking.
Make it Better: continuous improvement and stretching our abilities, being honest with ourselves and each other.

Gain Theory is committed to actively building a diverse, equitable and inclusive workplace where everyone feels welcomed, valued and heard, and is treated with dignity and respect. As leaders and creative partners across industries, it is our responsibility to cultivate an environment reflective of our greatest asset: our people. We believe that this commitment inspires growth and delivers equitable outcomes for everyone as well as the clients and communities we serve. Gain Theory is a WPP-owned consultancy. For more information, please visit our website and follow Gain Theory on our social channels via LinkedIn and Twitter.
London, United Kingdom
Hybrid
Full Time
12-03-2025
Company Name
Kantar
Job Title
Data Analyst
Job Description
We’re the world’s leading data, insights, and consulting company; we shape the brands of tomorrow by better understanding people everywhere. Kantar’s Profiles division is home to the world’s largest audience network. With access to 170m+ people in over 100 global markets, we offer unrivalled global reach with local relevancy. Validated by industry-leading anti-fraud technology, Kantar’s Profiles Audience Network delivers the most meaningful data with consistency, accuracy, and accountability, all at speed and scale.

Job Details
Join our Data Science team at Kantar Profiles as a Mid-Level Data Analyst! If you are enthusiastic about transforming data into actionable insights and have a strong programming background (Python and SQL), we encourage you to apply. This role offers a distinctive chance to collaborate with our Senior Data Scientists and contribute significantly to our innovative projects.

What You’ll Do...
Lead all aspects of alerting to ensure the ecosystem's functionality, working with existing models for day-to-day operations and performance.
Analyze user acquisition and retention data, identifying weaknesses and bugs in existing models for resolution.
Run analytics to extract statistics and patterns, and design machine learning models to improve existing technologies.
Contribute to data/statistics tasks for improving user engagement, working closely with the development team.
Work together with different team members to offer valuable insights and analyses that contribute to decision-making processes.
Communicate technical information with both technical and non-technical team members and collaborators.
Transform data using ETL tools like DBT to make it more accessible to the broader business.
Build visualizations, monitor trends, and identify patterns using time series graphing services like Grafana.
Define critical metrics to drive improvements in user acquisition and engagement performance.

What You’ll Bring...
Experience with database queries, programming, data mining/wrangling, analysis, and reporting.
Strong proficiency in SQL, with the ability to read, write, and query optimally.
A keen curiosity about data, statistics, machine learning, and data science.
Strong problem-solving skills with an emphasis on product development, logical thinking, and critical analysis.
Experience with statistical computing languages such as Python, Scala, R, or MATLAB.
Knowledge of statistical techniques and concepts, including regression, properties of distributions, statistical tests, and their accurate usage.
Experience using web services and languages, including AWS, EC2, S3, Redshift, DigitalOcean, etc.
Meticulous and committed, with a good work ethic and the capacity to collaborate effectively with diverse teams.
Experience in Excel and Power BI is a plus.

Benefits include 25 days annual leave (increasing with tenure), private medical health cash plan, income protection, life assurance, and enhanced employer pension contribution, plus award-winning voluntary flexible benefits (lifestyle, health, wealth, wellbeing). We offer a hybrid working arrangement with an office presence of at least 2 days a week in Reading.

Why join Kantar?
We shape the brands of tomorrow by better understanding people everywhere. By understanding people, we can understand what drives their decisions, actions, and aspirations on a global scale. And by amplifying our in-depth expertise of human understanding alongside ground-breaking technology, we can help brands find concrete insights that will help them succeed in our fast-paced, ever-shifting world. And because we know people, we like to make sure our own people are being looked after as well. Equality of opportunity for everyone is our highest priority, and we support our colleagues to work in a way that supports their health and wellbeing. While we encourage teams to spend part of their working week in the office, we understand no one size fits all; our approach is flexible to ensure everybody feels included, accepted, and that we can win together. We’re dedicated to creating an inclusive culture and value the diversity of our people, clients, suppliers and communities, and we encourage applications from all backgrounds and sections of society. Even if you feel like you’re not an exact match, we’d love to receive your application and talk to you about this job or others at Kantar.
Reading, United Kingdom
Hybrid
Full Time
12-03-2025
Company Name
Tenth Revolution Group
Job Title
Databricks Data Engineer
Job Description
Databricks Data Engineer: £60,000
I am looking for a data engineer with experience in Databricks, Azure, SQL, Python, and Spark to join a well-established organisation that is currently expanding its data team. Our client is partnered with both Databricks and Microsoft, and they deliver data solutions for a diverse range of clients. They operate a hybrid working model, where employees are expected to go to the client site when required, typically 2-3 times a month. Our client has been growing massively, meaning this is a great opportunity to develop professionally and work with top-level data engineers. You will work directly with clients on a variety of projects across an array of industries.

Requirements:
Strong Databricks experience as well as Python and SQL
Azure or AWS experience

Benefits:
Bonus
Flexible working
Annual salary review
25 days annual leave and bank holidays
And more!

Contact
Stirling, United Kingdom
Hybrid
Full Time
12-03-2025