
Senior Data Scientist (Dynamic Pricing)
Hybrid
Manchester, United Kingdom
£675 / day
Freelance
19-03-2025
Job Specifications
Position: Senior Data Scientist (Dynamic Pricing)
Length: 4 months
Rate: £675 per day
Start date: Monday 24th March or before
We’re partnering with one of Europe’s leading eCommerce companies to find an experienced Data Scientist (Contract) who will have a direct impact on revenue optimization, customer satisfaction, and cutting-edge pricing strategies.
Reporting to the Director of Data, you’ll be embedded in the company’s Dynamic Pricing product team, developing advanced pricing algorithms to optimize pricing across its entire lifecycle.
What You’ll Do:
Develop machine learning models and statistical algorithms to enhance pricing strategies
Optimize revenue management through data-driven insights
Enhance customer value propositions with smarter pricing solutions
Work closely with cross-functional teams to drive real business impact
What We’re Looking For:
We’re seeking a highly analytical and commercially aware Data Scientist who understands how pricing affects both business performance and customer behaviour.
You should have:
Proven experience in pricing optimization – a strong background in pricing models, demand forecasting, and revenue optimization (see the short sketch after this list).
Familiarity with dynamic pricing models, reinforcement learning, predictive modelling, time-series forecasting, and optimization techniques.
Proven experience in developing machine learning models that drive business impact and positively influence user behaviour.
Strong Python and SQL skills, with experience using AWS to run models in production.
Strong mathematical foundation – expertise in econometrics, game theory, and behavioural economics to ensure pricing algorithms align with the wider commercial strategy.
Experience working in the e-commerce industry with a focus on pricing is highly beneficial.
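To make the core skills concrete, here is a minimal, hypothetical sketch of the kind of pricing-optimization loop the role describes: fit a constant-elasticity demand curve to historical price/volume data, then grid-search the revenue-maximising price within commercial guardrails. The data, bounds, and function names are illustrative assumptions, not the client’s actual models.

```python
import numpy as np

def fit_elasticity(prices, units):
    """OLS fit of log(units) = a + b*log(price); b estimates price elasticity."""
    X = np.column_stack([np.ones_like(prices), np.log(prices)])
    a, b = np.linalg.lstsq(X, np.log(units), rcond=None)[0]
    return a, b

def optimal_price(a, b, p_min, p_max, n=500):
    """Grid-search the revenue-maximising price p * exp(a) * p**b within guardrails."""
    grid = np.linspace(p_min, p_max, n)
    revenue = grid * np.exp(a) * grid ** b
    return float(grid[np.argmax(revenue)])

# Hypothetical historical observations: (price, units sold) per week.
prices = np.array([19.0, 21.0, 24.0, 27.0, 30.0])
units = np.array([520.0, 470.0, 390.0, 330.0, 280.0])

a, b = fit_elasticity(prices, units)
# With elastic demand (b < -1), unconstrained revenue maximisation pushes the
# price to the floor, which is exactly why commercial guardrails matter here.
print(f"elasticity ≈ {b:.2f}, suggested price ≈ £{optimal_price(a, b, 15.0, 35.0):.2f}")
```

A production system would of course layer on time-series effects, competitor signals, and experimentation, but the fit-then-optimize shape above is the basic pattern the requirements list points at.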
Interviews will take place ASAP. Please send over your CV as soon as possible to be considered!
About the Company
Here at MRJ, we power startups, scaleups and global giants across the UK & Europe with great talent, enabling them to scale their technology & product teams, quickly. Why work with us? Three products. One result. We're more than just your average agency. What matters to you matters to us, and we tailor our solutions to fit your hiring needs. Our Talent Partners live and breathe your brand, creating a genuine partnership with you. We replace the "us" and "them" divisions of traditional recruitment with a single, un…
Related Jobs


- Company Name
- Vitalograph
- Job Title
- Data Quality Assistant
- Job Description
- Vitalograph is a leading manufacturer of medical respiratory diagnostic devices and software, used in the diagnosis of respiratory disorders, and of advanced high-reliability equipment and software systems used in respiratory end-point clinical drug trials. We have designed, developed, and manufactured respiratory diagnostic devices, software, and consumables for healthcare professionals for 60 years. Headquartered in the UK, Vitalograph has operations in Ireland, Germany, and the USA. We are now recruiting for a Data Quality Assistant to join our growing team in Buckingham on a 6-month fixed-term contract. In this role, you will work with project teams and the data management team to improve the efficiency of the receipt, processing, and turnaround times of data corrections. With over 60 years in business, expansion plans, and growth in the market, now is a fantastic time to join Vitalograph!
  Responsibilities as a Data Quality Assistant:
  - Complete data entry for manual import of data across projects
  - QC checks of relevant documentation
  - Monitoring and management of the Data Quality mailbox
  - Assist in the creation and processing of documentation
  - Gathering data across report monitoring, DCFs, and reconciliation for generating metrics
  - Checking DCFs for completeness
  - Support the Clinical Trials Admin team with relevant tasks
  - Perform all other duties as assigned by manager
  Educational requirements: 5 GCSEs grade 4-9 (or equivalent), including Maths & English


- Company Name
- Mphasis
- Job Title
- SAP Analytics Consultant
- Job Description
- Job title: SAP Analytics Cloud. Location: Warwick, UK (hybrid, 3 days onsite per week). Day rate: £450/day, inside IR35, 6-month contract.
  Role purpose: We are seeking an experienced SAP Analytics Cloud (SAC) Consultant with experience of SAP Datasphere and S/4HANA Embedded Analytics to assess our current SAC utilization, provide training to key business users, and contribute to the strategic roadmap for future enhancements and new cloud data warehouses. The ideal candidate will have expertise in cloud-based data warehousing and reporting, ensuring seamless integration and optimized performance of SAC analytics in a cloud environment.
  Qualifications & experience:
  - Proven experience as an SAC Consultant, SAC Developer, or similar role, plus experience of S/4HANA applications (on-premise, RISE, or GROW)
  - Strong expertise in SAP Analytics Cloud, including data modelling, story building, and planning functionalities
  - Hands-on experience with cloud data warehouses (SAP Datasphere, Snowflake, BigQuery, Redshift, Azure Synapse, etc.)
  - Ability to assess business needs and translate them into SAC or other new-age cloud data warehouse solutions
  - Excellent training and communication skills to engage business stakeholders effectively
  - Knowledge of data governance, security, and best practices in cloud analytics


- Company Name
- N Consulting Global
- Job Title
- GCP Data Modeller
- Job Description
- GCP Data Modeller. Location: London. Contract role.
  Job summary: We are seeking a highly skilled GCP Data Modeller to join our cloud data engineering team. You will be responsible for designing and implementing logical and physical data models on Google Cloud Platform (GCP) to support scalable, secure, and high-performance analytics solutions. This role requires deep expertise in data modelling, cloud-native data warehousing, and the big data ecosystem on GCP.
  Key responsibilities:
  - Design and maintain conceptual, logical, and physical data models for cloud-based data platforms on GCP
  - Translate business requirements into optimal data models to support data warehousing, lakehouse, and real-time analytics use cases
  - Develop data models for tools such as BigQuery, Dataflow, Pub/Sub, Looker, Dataplex, and others in the GCP stack
  - Collaborate with data engineers, architects, and business stakeholders to define data architecture, data flows, and integration patterns
  - Support data governance, metadata management, and lineage tracking using tools like Dataplex, Collibra, or Informatica
  - Establish and enforce modelling standards and naming conventions across domains
  - Optimize data models for performance, cost-efficiency, and scalability
  - Work with Data Mesh or Domain-Driven Design principles where applicable
  Required skills & qualifications:
  - 5+ years of experience in data modelling, with at least 2 years on GCP
  - Deep understanding of dimensional modelling, 3NF, data vault, and data lakehouse architectures
  - Strong SQL skills and experience with BigQuery, Cloud Storage, and Data Catalog
  - Familiarity with ELT/ETL frameworks using Dataflow, Apache Beam, or dbt
  - Knowledge of data governance, security, and compliance best practices on GCP
  - Proficiency with metadata management and lineage tracking tools
  - Excellent communication and stakeholder management skills
  Preferred qualifications:
  - Experience with Looker, Power BI, or Tableau for downstream data consumption
  - Exposure to Machine Learning or AI-driven data products is a plus
  - GCP certifications (e.g., Professional Data Engineer, Cloud Architect) are desirable
  - Familiarity with Terraform, CI/CD pipelines, and DevOps for Data is a bonus
  (A brief illustrative sketch of this kind of physical modelling on BigQuery follows.)
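As a purely illustrative aside (not part of the advert itself): physical modelling decisions such as partitioning and clustering can be expressed with the google-cloud-bigquery Python client. The project, dataset, table, and column names below are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are already configured

# Hypothetical fact table: daily partitioning on order_date keeps scans
# (and cost) bounded; clustering on customer_id speeds common lookups.
schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("order_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("revenue", "NUMERIC"),
]

table = bigquery.Table("my-project.sales.fct_orders", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="order_date")
table.clustering_fields = ["customer_id"]

client.create_table(table)  # one-off DDL; raises if the table already exists
```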


- Company Name
- TSG - The Sculptors Group
- Job Title
- Data Engineer
- Job Description
- Job summary: We are seeking a highly skilled Data Engineer with expertise in Databricks, Apache Spark, and Scala/Python to join our dynamic team in London on a contingent-worker basis. The ideal candidate will have hands-on experience in data engineering, along with experience of, or a strong understanding of, data governance. You will play a key role in designing and implementing robust data pipelines and ensuring compliance with data governance best practices.
  Key responsibilities:
  - Develop, optimize, and maintain big data pipelines using Databricks and Apache Spark
  - Write efficient, scalable, and maintainable Scala/Python code for data processing and transformation
  - Collaborate with data architects, analysts, and business teams to understand data requirements
  - Ensure data quality, lineage, and security within data platforms
  - Work with structured and unstructured data from multiple sources, integrating them into a unified data lake
  - Optimize performance of big data workflows and troubleshoot data processing issues
  - Document technical processes, best practices, and governance policies
  Key requirements:
  - 5+ years of experience in data engineering with a focus on Databricks and Apache Spark
  - Strong programming skills in Scala and/or Python
  - Experience with, or a strong understanding of, data governance, metadata management, and regulatory compliance
  - Knowledge of cloud platforms (Azure, AWS, or GCP) for big data processing
  - Experience with Delta Lake, Lakehouse architecture, and data cataloging
  - Strong understanding of ETL/ELT pipelines, SQL, and NoSQL databases
  - Familiarity with data security, privacy standards (GDPR, ISO, etc.), and access controls
  - Excellent problem-solving and communication skills
  Note: banking experience is a must.
  (A brief, purely illustrative PySpark sketch follows.)
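As flagged above, here is a minimal, hypothetical PySpark sketch of the kind of pipeline step this role describes; paths and column names are illustrative, and the Delta write assumes a Databricks cluster (or the delta-spark package elsewhere).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Hypothetical raw landing zone of JSON transaction events.
raw = spark.read.json("/mnt/landing/transactions/")

clean = (
    raw
    .filter(F.col("transaction_id").isNotNull())        # basic data-quality gate
    .dropDuplicates(["transaction_id"])                  # keeps re-runs idempotent
    .withColumn("ingested_at", F.current_timestamp())    # audit column for lineage
)

# Append into a Delta table; on Databricks the delta format is built in.
clean.write.format("delta").mode("append").save("/mnt/lake/transactions/")
```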