GSK

Business Data Analyst

On site

London, United Kingdom

Full Time

13-03-2025

Job Specifications

Site Name: GSK HQ, Stevenage

Posted Date: Mar 13 2025

Come join us as we supercharge GSK’s data capability!

At GSK we are building a best-in-class, data- and prediction-powered team that is ambitious for patients. As R&D enters a new era of data-driven science, we are building a data engineering capability to ensure we capture high-quality data with context and aligned data models, so that the data is usable and reusable for a variety of use cases.

GSK R&D and Digital & Tech's collective goal is to deliver business impact, including accelerating the discovery and development of medicines and vaccines for patients. GSK Development and Development Digital & Tech are in the final stages of delivering on a multiyear investment aimed at modernizing the clinical development landscape. To ensure we maximize the value of the data we generate with these new systems, in combination with the data assets we acquire externally, new teams are being established in both the R&D Development and Development Digital & Tech organizations to focus on data governance and design.

Role Overview:

The Business Data Analyst contributes significantly to the mission to supercharge our data and is responsible for ensuring data domains and products are defined and delivered with findability, accessibility, interoperability, supportability, usability and quality in mind. As a Business Data Analyst, you will provide guidance on information architecture, data standards, and the quality of the data products on the Development Data Fabric (DDF), in alignment with the data mesh architectural framework.

Key responsibilities:


Strategy: Work with data product owners, peers and colleagues in Development Tech to define a framework for consistently and efficiently capturing data models, data dictionaries, business and technical metadata, and requirements for moving, processing, protecting and using data within DDF. Actively seek out opportunities to refine this framework and automate the creation and maintenance of information and artefacts. Keep abreast of emerging trends in data management technologies and integrate them into the target designs where appropriate.
Analysis: Understand the systems and processes in Pharmaceutical R&D where data is generated, used and reused to enable operations, inform decisions and drive scientific innovations. Reverse engineer analytical use cases into data flows, target data models and data processing requirements. Partner with data product owners in the business and tech teams to document data quality checks and access management requirements. Partner with data quality and governance product owners to surface opportunities to leverage reference data, master data and/or ontology management platforms to drive standardization and interoperability.
Modelling: Build data models and associated artefacts to inform requirements and development activities. Work with data product owners and subject matter experts to capture and maintain the metadata required to ensure data released to the Development Data Fabric is Findable, Accessible, Interoperable and Reusable (FAIR); an illustrative sketch follows this list.
Lifecycle Management: Support the product and engineering teams during design and build by liaising with the business and technical teams to answer key questions and deliver pertinent information. Partner with data stewards and technical support teams during use to investigate data quality and lineage issues. Enable the drive towards adaptive and automated approaches to data governance and data management.
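The sketch below is a minimal, hypothetical illustration of the kind of metadata a data product descriptor might capture to support FAIR principles. The field names, product name, identifiers and URLs are assumptions for illustration only, not GSK's actual schema or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductDescriptor:
    """Hypothetical metadata record for a data product on a data fabric (illustrative only)."""
    name: str                                             # Findable: human-readable product name
    identifier: str                                       # Findable: stable, unique ID for catalogue search
    owner: str                                            # Accessible: accountable data product owner
    access_policy: str                                    # Accessible: how consumers request and obtain access
    schema_ref: str                                       # Interoperable: link to the published data model
    vocabularies: list = field(default_factory=list)      # Interoperable: ontologies / reference data used
    lineage: str = ""                                     # Reusable: provenance of the underlying data
    quality_checks: list = field(default_factory=list)    # Reusable: documented quality rules

# Example catalogue entry; every value below is made up for illustration.
enrolment_product = DataProductDescriptor(
    name="Clinical Trial Enrolment",
    identifier="ddf://development/enrolment/v1",
    owner="data-product-owner@example.com",
    access_policy="Role-based access, approved via the data governance workflow",
    schema_ref="https://example.com/models/enrolment.json",
    vocabularies=["CDISC SDTM"],
    lineage="Derived from EDC system extracts",
    quality_checks=["subject_id is unique", "enrolment_date is not null"],
)
```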


Basic Qualifications:

We are looking for professionals with these required skills to achieve our goals:


Bachelor’s degree in computer science, engineering, or similar discipline
5+ years' Pharmaceutical R&D experience and/or exposure to enterprise architecture in an IT organization
Experience documenting user requirements for data and analytics products
Proficient with data modelling and data quality/profiling tools (a minimal profiling sketch follows this list)
Understanding of the data mesh framework and its application in Pharmaceutical R&D analytics, including but not limited to data integration, governance, quality, security, lineage, cataloguing, discovery, access, sharing and collaboration
Experience with Pharmaceutical R&D industry data standards
Track record in delivering business impact through data and analytics enabled solutions
Excellent relationship management, strong influencing and communication skills
Experience with Agile and DevOps frameworks
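As a minimal sketch of the kind of data quality/profiling check referenced above (the column name, dataset and metrics are illustrative assumptions, not a prescribed GSK toolset):

```python
import pandas as pd

def profile_column(df: pd.DataFrame, column: str) -> dict:
    """Compute a few basic profiling metrics for one column (illustrative only)."""
    series = df[column]
    return {
        "rows": len(series),
        "null_fraction": float(series.isna().mean()),
        "distinct_values": int(series.nunique(dropna=True)),
        "is_unique": bool(series.dropna().is_unique),
    }

# Tiny made-up example: a duplicate and a missing subject_id would both surface here.
df = pd.DataFrame({"subject_id": ["S1", "S2", "S2", None]})
print(profile_column(df, "subject_id"))
```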


Preferred Qualifications:

If you have the following characteristics, it would be a plus:


Life Sciences or Tech/Engineering related Master’s or Doctorate
Experience of building Data Mesh (or similar architecture) in Pharmaceutical R&D
Understanding of where emerging data technologies can drive increased automation in the creation and management of data products
Familiarity with the design and architecture of BI and analytics environments
Enablement of AI/ML users and applications


Closing Date for Applications – Thursday 27th March 2025 (COB)

Please take a copy of the Job Description, as it will not be available once the advert closes. When applying for this role, please use the ‘cover letter’ section of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information you provide in your cover letter and CV will be used to assess your application.

During your application, you will be asked to provide voluntary information, which will be used to monitor the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application/selection process to enable you to demonstrate your ability to perform the job requirements, please contact 0808 234 4391. This will help us to understand any modifications we may need to make to support you throughout the selection process.

Why GSK?

Uniting science, technology and talent to get ahead of disease together.

GSK is a global biopharma company with a special purpose – to unite science, technology and talent to get ahead of disease together – so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns – as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology).

Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it’s also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can...

About the Company

We are uniting science, technology and talent to get ahead of disease together. Our community guidelines: https://gsk.to/socialmedia

Related Jobs

Company Name: European Tech Recruit
Job Title: Senior Research Engineer - Databases/Distributed Systems
Job Description:
Senior Research Engineer - Databases / Distributed Systems

We're partnering with a global tech leader at the forefront of database innovation. This elite team is building a ground-breaking, next-generation transactional database from the ground up. We're seeking brilliant minds with expertise in systems, distributed systems, operating systems, and compilers to contribute to core research and development.

Your Impact:

Conduct cutting-edge systems research and rigorous empirical science to shape the future of data management and processing.
Deeply analyze and understand the evolving demands of next-generation database storage and query processing.
Design, implement, and deploy critical technical components for revolutionary data management and processing systems.
Explore and advance the latest data management and processing frameworks for both cloud and edge environments.

Ideal Candidate:

MSc or PhD in Computer Science or a closely related field
Proficiency in systems-level programming using C/C++ and/or Rust
Proven experience in one or more of the following areas: data management systems (e.g., transactional, graph, NoSQL), query processing, storage engines, indexing engines, distributed computing, programming languages, hardware-software co-design, compilers, fault-tolerant computing
Demonstrated experience in developing and implementing low-level systems software (e.g., operating systems, distributed workflow systems, compilers, databases)
Contributions to foundational or peer-reviewed research are a significant plus

By applying to this role you understand that we may collect your personal data and store and process it on our systems. For more information please see our Privacy Notice (https://eu-recruit.com/about-us/privacy-notice/).
Edinburgh, United Kingdom
On site
Full Time
17-03-2025
Company Name: Pro5.ai
Job Title: Senior Data Engineer
Job Description:
*Do take note that this is an on-site role based in Kuala Lumpur, Malaysia.
*Candidates can ideally be from anywhere in Europe, or any part of the world, as long as they are willing to relocate to KL, Malaysia.

Are you passionate about using data to drive innovative solutions in a fast-paced environment? We're looking for a Senior Data Engineer to join a cutting-edge technology company based in Kuala Lumpur! As a Senior Data Engineer, your mission will be to support data scientists, analysts, and software engineers by providing maintainable infrastructure and tooling for end-to-end solutions. You'll work with terabytes to petabyte-scale data, supporting multiple products and data stakeholders across global offices.

Key Responsibilities

Design, implement, operate and improve the analytics platform
Design data solutions using various big data technologies and low-latency architectures
Collaborate with data scientists, business analysts, product managers, software engineers and other data engineers to develop, implement and validate deployed data solutions
Maintain the data warehouse with timely and quality data
Build and maintain data pipelines from internal databases and SaaS applications
Understand and implement data engineering best practices
Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
Mentor and provide guidance to junior engineers on the job

Qualifications

Expert at writing and optimising SQL queries
Proficiency in Python, Java or similar languages
Familiarity with data warehousing concepts
Experience in Airflow or other workflow orchestrators
Familiarity with basic principles of distributed computing
Experience with big data technologies like Spark, Delta Lake or others
Proven ability to innovate and lead delivery of a complex solution
Excellent verbal and written communication, with a proven ability to communicate with technical teams and summarise complex analyses in business terms
Ability to work with shifting deadlines in a fast-paced environment

Desirable Qualifications

Authoritative in ETL optimisation, designing, coding, and tuning big data processes using Spark
Knowledge of big data architecture concepts like Lambda or Kappa
Experience with streaming workflows to process datasets at low latencies
Experience in managing data: ensuring data quality, tracking lineage, improving data discovery and consumption
Sound knowledge of distributed systems, able to optimise partitioning, distribution and MPP of high-level data structures
Experience in working with large databases, efficiently moving billions of rows, and complex data modelling
Familiarity with AWS is a big plus
Experience in planning day-to-day tasks, knowing how and what to prioritise and overseeing their execution

Competitive salary and benefits. We'll cover visas, tickets, and 1-2 months of accommodation to help you settle in.

What's Next:

Interview with our Talent Acquisition team (virtual or face-to-face)
Technical sample test (discussed in the technical round)
Final interview with the Hiring Manager (virtual or face-to-face)
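As a rough sketch of the pipeline and orchestration work described above (the DAG name, schedule and task bodies are hypothetical placeholders, not this company's actual pipelines; assumes Airflow 2.4+):

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull rows from an internal database or SaaS API (placeholder).
    pass

def load():
    # Write the transformed rows into the warehouse (placeholder).
    pass

# Hypothetical daily extract-and-load DAG.
with DAG(
    dag_id="saas_to_warehouse_example",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # "schedule" argument requires Airflow 2.4 or later
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```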
England, United Kingdom
On site
Full Time
17-03-2025
Company Name: Known Talent
Job Title: Data Engineering Team Lead (Python) - £100-110k
Job Description:
The Highlights!

We're thrilled to be working with this impressive fintech business, focused on delivering insight products and services to the finance sector. After receiving significant investment, they are growing their product and engineering functions as they look to realise their ambition of building a market-leading platform. This Data Engineering Team Lead position, a newly created role within the team, will be a senior figure within a highly skilled group of Data Engineers and Scientists, responsible for building solutions that meet the demands of a dynamic, data-driven organisation. Our client's working arrangements require three days per week in their office in central London. This is a deliberate and thoughtful choice, aligned with an emphasis on collaboration between the product, analysis and engineering functions.

About the Company

This fintech business has demonstrated an admirable track record of adapting to meet the needs of its customers, evolving its product portfolio and establishing itself as one of the leaders in the provision of insight products and services to the financial sector. Through adopting a customer-centric approach they have been able to demonstrate impressive growth domestically and internationally. It is the breadth of services, combined with investment in technology, and crucially the people that deliver it, that has seen the business go from strength to strength.

About the Opportunity

Working directly for the Head of Data Science and Engineering, you'll play a lead role in a growing team that is central to the success of the business. Working as either an individual contributor in a functional lead capacity, or as a hands-on technical lead, you will actively contribute to the design and implementation of scalable data architectures. In this hands-on position you will develop, maintain and optimise ETL processes for data extraction, transformation and loading. Leading from the front and showcasing your commitment to effective data engineering techniques, you will create and manage data models and data warehousing solutions. Working side by side with the Data Scientists, you'll develop and deploy AI & ML pipelines, optimising them for performance and efficiency. Supporting the team by developing better ways of working and introducing tools and techniques to support their output is a key outcome of this position. You'll be their champion for best practice, bringing your ideas and experience to the table in a way that creates a fantastic engineering environment and provides them with a platform to deliver their best work.

About You

A skilled Data Engineer, you'll have substantial experience in designing and building scalable data architectures. Bringing experience of working as a senior or lead figure in Data Engineering teams, you'll be comfortable technically supporting Data Engineers and Scientists to achieve their goals. In addition, you'll demonstrate the following experience:

In-depth knowledge of, and experience of working with, ETL processes, data modelling and data warehousing. This business uses Airflow, dbt and Redshift, so any experience in these technologies would be super helpful.
Hands-on Python skills are a must for this position, as is an understanding of Infrastructure-as-Code technologies (they use Terraform) and DevOps techniques.
Exceptional communication skills that support your ability to work collaboratively with stakeholders across the business.
Technical or people leadership skills and the ability to coordinate and manage technical projects.
Deep expertise in database technologies, both NoSQL and relational.
Strong understanding of Cloud platforms; their platform of choice is AWS, so experience in this would be useful.
Solid experience of data security measures and compliance standards.
Background in optimising data pipelines for performance and efficiency.
Any previous experience of working with Machine Learning technologies will be a significant advantage given the expected project workload of the team.
This is an Agile environment, so you'll likely bring experience of working in a SCRUM/Kanban environment.

Inclusion Statement

We are dedicated to providing reasonable adjustments to applicants throughout the recruitment process and we work closely with our clients to encourage them to do the same. If you require any accommodations to participate in the job application or interview process, please let us know. Your needs will be treated confidentially and with respect. We strive to eliminate biases and create an environment where all candidates are evaluated based on their skills, qualifications, and potential contributions. If you have any feedback on how we can improve our process in any way, please don't hesitate to reach out.
London, United Kingdom
On site
Full Time
16-03-2025
Company Name: FactSet
Job Title: Principal Software Engineer - Hybrid - Datafeed Server
Job Description:
Datafeed Server Software Engineer

FactSet is seeking a talented software engineer with strong C/C++ skills to join the Datafeed Engineering Team. This team is responsible for the development and maintenance of servers that deliver streaming real-time market data, and of multiple SDKs that target several different programming languages, on both the Windows and Linux platforms. Our software allows internal and external engineers to programmatically access real-time market data content across a wide variety of platforms and languages. With just a few lines of code, users can easily integrate FactSet's wide range of streaming market data into their applications.

Responsibilities:

Develop, test, and deploy software and services to end users.
Design and implement user-friendly APIs that deliver streaming real-time market data.
Participate in and contribute to design reviews, code reviews, and brainstorming sessions.
Communicate and collaborate with product developers, QA, API users, and other stakeholders.
Respond to bug reports and feature requests.
Participate in an on-call rotation.

Minimum Requirements:

8+ years of professional software engineering experience

Critical Skills:

5+ years of experience developing software in C or C++ in a Linux environment
Fundamental understanding of network programming
Ability to communicate effectively with peers within FactSet and with external users
Track record of success developing and shipping software on time

Additional / Desired Skills:

Familiarity with ZeroMQ
Experience writing software for Windows
Experience developing software intended to run in a public cloud environment such as AWS or Azure
Experience developing APIs
Understanding of real-time market data and the requirements for processing large amounts of input with low latency
Ability to work in groups and independently
Experience developing software in any of the following languages: Java, .NET Core / C# / COM, Golang, Python

Education:

Bachelor's Degree or higher in Computer Science or equivalent.
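For illustration only, here is a minimal generic ZeroMQ subscriber in Python (using pyzmq); it is not FactSet's SDK or API, just a sketch of the publish/subscribe pattern behind streaming feeds like the one described above. The endpoint, topic prefix and two-frame message layout are made-up assumptions.

```python
import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.connect("tcp://localhost:5556")           # hypothetical feed endpoint
sock.setsockopt(zmq.SUBSCRIBE, b"QUOTE.AAPL")  # hypothetical topic prefix to filter on

# Print each incoming message; assumes the publisher sends [topic, payload] frames.
# A real consumer would parse the payload and update its own state or downstream systems.
while True:
    topic, payload = sock.recv_multipart()
    print(topic.decode(), payload.decode())
```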
London, United Kingdom
On site
Full Time
16-03-2025