
Data Analyst Fellowship
Hybrid
Manchester, United Kingdom
Full Time
14-04-2025
Job Specifications
iO-Sphere’s Data Analyst Fellowship: 10 weeks of paid training to land a job as a data analyst.
iO-Sphere's Experience Accelerator is a unique programme that helps you secure your first role in data at top companies in the UK. More than 150 recent iO-Sphere data analyst graduates are now working at companies including Uber, Bumble, British Airways, Love Holidays, Dunelm, and Capgemini.
How?
iO-Sphere gives you the training and experience you need to land a job in data, then connects the best of our trainees directly with leading employers. Instead of a traditional classroom, you'll be part of the new data team at our fictional e-commerce company, "Prism." From day one, you'll build the experience that leading employers require while learning the technical (Excel, SQL, Power BI, Python), professional, and business skills needed for a career in data. You will deliver real projects using our data warehouse, which holds over 500 million rows of real data.
Our fellowship programme supports diversity in data and candidates from disadvantaged backgrounds. We generally select one or two high-potential fellows per cohort; selected fellows receive the training free of charge and are paid throughout. We also offer a number of other bursaries and scholarships. This job advertisement is for the paid fellowship / internship at iO-Sphere.
How does it work?
Apply on our site - it only takes 30 seconds - and go through a simple application and interview process
We give you all the training you need to be an effective data analyst and pay a stipend to support you through the training
iO-Sphere partners directly with leading employers to recruit the best directly from the programme
Once you've graduated, you become part of the iO-Sphere community, with continued support, mentoring, and events throughout the year
Ideal candidates:
Collaborative team players
Numerate, analytical thinkers
Curious problem solvers
Humble and willing to learn
Able to commit full-time for 10 weeks
Fluent in written and spoken English
Right to work in the United Kingdom
What to expect on the programme:
An initial full-time 10-week training programme to give talented individuals the technical skills, business acumen, soft skills, and experience to take their career to the next level
The first 5 weeks are full-time and fully remote
The final 5 weeks are full-time and in-person on Mondays, Thursdays, and Fridays
Ongoing mentorship and support: once you've landed your perfect role, we'll continue to support your career growth with networking events, 1:1 mentorship from experts, further training, and socials
We strongly encourage applications from women, people of colour, lesbian, gay, bisexual, transgender, and non-binary individuals, veterans, parents, and individuals with disabilities. We are committed to equal opportunities and welcome individuals from all backgrounds to participate in our programme. If you require reasonable adjustments at any stage of the application or interview process, please inform us.
About the Company
iO-Sphere was created to help exceptional individuals and exceptional organisations build high-growth, high-performance futures through experience-led data training. Our unique approach to training goes beyond giving people the most in-demand technical skills, to recreate the experience of working in high-performance teams. We train people to be effective and impactful in their role and wider team. Organisations partner with us to build a workforce empowered with data skills, and individuals use us to go further, faster in t...
Related Jobs


- Company Name: Amici Procurement Solutions
- Job Title: Senior Full Stack Data Engineer
- Job Description:
UK remote (willing to travel to the Glasgow office once per quarter). Eden Scott is recruiting a Senior Full Stack Data Engineer for Amici to develop a cutting-edge platform. With significant growth and an ambitious technology roadmap, Amici seeks an engineer skilled in Java, Python, and data to shape the future of the MyAmici platform.

Why Join Us? You'll work in an agile, collaborative environment, leveraging modern technology stacks to build and optimize a powerful data platform and search engine, with the opportunity to explore vector search, machine learning, and large-scale data processing using Apache Lucene, Solr, or Elasticsearch. The position is hybrid, with one day per week in the Glasgow office.

What You'll Do: Design, build, and optimize a high-performance data platform and search solution. Develop robust search capabilities using Apache Lucene, Solr, or Elasticsearch. Engineer scalable data pipelines in Java or Python. Write high-quality, test-driven code using Agile methodologies. Collaborate with Business Analysts, Data Engineers, and UI Developers. Work across the full stack, from the React/TypeScript front end to Java-based search services. Leverage cloud technologies like Azure Data Factory, Batch Services, and Azure SQL. Contribute to DevOps practices, code reviews, and system optimizations.

What We're Looking For: Strong experience in Java development and exposure to Python. Experience with large-scale data processing and search technologies. Expertise in Apache Lucene, Solr, or Elasticsearch, or a willingness to learn. Hands-on experience with SQL and NoSQL databases. Exposure to modern JavaScript frameworks like ReactJS or VueJS. Experience in Agile environments with modern DevOps and CI/CD practices. A degree in Computer Science/Software Engineering or equivalent experience. Familiarity with writing automated tests and maintaining high code quality.
About Amici: Founded in 2005, Amici provides a cloud-based purchasing and inventory management platform for biotech and life sciences organizations. The MyAmici platform supports scientists in their research by handling supply chain and procurement needs. The Innovation Team ensures MyAmici remains at the forefront of technology. What’s In It for You? Work in an intrapreneurial and innovative environment. A company culture valuing growth, collaboration, and continuous improvement. A fantastic suite of benefits. Join us to be part of a high-impact team transforming the biotech industry. Interested? Let’s talk! Contact our recruitment partners at Eden Scott for an informal discussion: amici@edenscott.com


- Company Name: HCLTech
- Job Title: AI/ML Engineer
- Job Description:
HCLTech is a global technology company, home to 219,000+ people across 54 countries, delivering industry-leading capabilities centered on digital, engineering and cloud, powered by a broad portfolio of technology services and products. We work with clients across all major verticals, providing industry solutions for Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion.

Role: Lead AI/ML Ops Engineer (Recommendation Systems Focus). Experience Level: Senior.

Core Objective: Take a leading role in maintaining the operational stability, performance, and ongoing development of the Next Best Offer (NBO) models. Oversee necessary enhancements, ensure smooth execution across the various NBO modules (Alert, Core, Eval, Scoring, Retention), and facilitate effective knowledge transfer.

Key Responsibilities: Oversee the day-to-day operations, monitoring, and troubleshooting of the NBO models running on GCP and potentially on-premise servers. Lead the design, implementation, and validation of new business requirements and enhancements for the NBO suite. Ensure continuous monitoring, supervision, and version upgrades of the diverse NBO models. Guide the further development of ML models within NBO, potentially incorporating advanced techniques like GNNs, autoencoders, or transformers. Oversee data validation processes and ensure data quality for NBO inputs. Manage the NBO model evaluation framework (NBO Eval) and report on performance metrics (accuracy, coverage, personalization). Coordinate with relevant departments regarding architecture maintenance and regulatory aspects where applicable to NBO. Collaborate closely with stakeholders to communicate model performance, development progress, and quarterly benefits where applicable. Mentor mid-level engineers within the vendor team, particularly on NBO specifics.
Actively participate in and help coordinate the intensive knowledge handover from departing employees within the first 1.5-2 months. Lead the documentation efforts for NBO processes, models, and enhancements. Support the final handover process to the new internal team towards the end of the engagement.

Required Skills & Experience: 10+ years of hands-on experience in Data Science and Machine Learning, applying models in a business context, particularly for personalization or recommendation. Proven experience leading the development, deployment, and lifecycle management of complex ML systems. Strong programming skills in Python and potentially R. Expertise with relevant ML frameworks (e.g., scikit-learn, PyTorch). Experience with recommendation systems, collaborative filtering, and ideally Graph Neural Networks (GNNs). Proficiency with SQL and working with large datasets (e.g., GCP BigQuery). Experience with cloud platforms (specifically GCP) and CI/CD processes (specifically GitHub Actions). Experience deploying models in containerized applications. Excellent problem-solving skills and the ability to quickly adapt to new model types and requirements. Strong communication and stakeholder management skills. Expertise in advanced neural network architectures (autoencoders, transformers). Ability to rapidly acquire complex domain knowledge.


- Company Name: Endava
- Job Title: Data Engineering Consultant
- Job Description:
Company Description: Technology is our how. And people are our why. For over two decades, we have been harnessing technology to drive meaningful change. By combining world-class engineering, industry expertise and a people-centric mindset, we consult and partner with leading brands from various industries to create dynamic platforms and intelligent digital experiences that drive innovation and transform businesses. From prototype to real-world impact - be part of a global shift by doing work that matters.

Role Overview: A Data Engineering Consultant designs, implements, and optimizes scalable data pipelines and architectures. This role bridges raw data and actionable insights, ensuring robustness, performance, and data governance. Collaboration with analysts and scientists is central to delivering high-quality solutions aligned with business objectives.

Key Responsibilities:
Data Pipeline Development: Architect, implement and maintain real-time and batch data pipelines to handle large datasets efficiently. Employ frameworks such as Apache Spark, Databricks, Snowflake or Airflow to automate ingestion, transformation, and delivery.
Data Integration & Transformation: Work with Data Analysts to understand source-to-target mappings and quality requirements. Build ETL/ELT workflows, validation checks, and cleaning steps for data reliability.
Automation & Process Optimization: Automate data reconciliation, metadata management, and error-handling procedures. Continuously refine pipeline performance, scalability, and cost-efficiency.
Collaboration & Leadership: Coordinate with Data Scientists, Data Architects, and Analysts to ensure alignment with business goals. Mentor junior engineers and enforce best practices (version control, CI/CD for data pipelines). Participate in technical presales activities and client engagement initiatives.
Governance & Compliance: Apply robust security measures (RBAC, encryption) and ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship.

Qualifications:
Programming: Python, SQL, Scala, Java.
Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc.
Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory, Fabric), GCP (BigQuery, Dataflow).
Data Modelling & Storage: Relational (PostgreSQL, SQL Server), NoSQL (MongoDB, Cassandra), dimensional modelling.
DevOps & Automation: Docker, Kubernetes, Terraform, CI/CD pipelines for data flows.

Architectural Competencies:
Data Modelling: Designing dimensional, relational, and hierarchical data models.
Scalability & Performance: Building fault-tolerant, highly available data architectures.
Security & Compliance: Enforcing role-based access control (RBAC), encryption, and auditing.

Additional Information: Discover some of the global benefits that empower our people to become the best version of themselves:
Finance: Competitive salary package, share plan, company performance bonuses, value-based recognition awards, referral bonus;
Career Development: Career coaching, global career opportunities, non-linear career paths, internal development programmes for management and technical leadership;
Learning Opportunities: Complex projects, rotations, internal tech communities, training, certifications, coaching, online learning platform subscriptions, pass-it-on sessions, workshops, conferences;
Work-Life Balance: Hybrid work and flexible working hours, employee assistance programme;
Health: Global internal wellbeing programme, access to wellbeing apps;
Community: Global internal tech communities, hobby clubs and interest groups, inclusion and diversity programmes, events and celebrations.
Our diversity makes us stronger - it drives meaningful change and enables us to build innovative technology solutions.
We are committed to creating an inclusive community where all of us, regardless of background, identity, or personal characteristics, feel valued, respected, and free from discrimination. As an equal opportunity employer, we welcome applications from all individuals and base hiring decisions on merit, skills, qualifications, and potential.


- Company Name: Christopher Ali
- Job Title: Data Engineer
- Job Description:
*Data Engineer - Python - Django - AWS - JavaScript frameworks - Amazon Ads API - Remote - £50,000-£60,000 p/a*

Christopher Ali has partnered with a retail media agency that, following a period of rapid growth, is looking for a Data Engineer to grow the team.

The Role: As a Data Engineer reporting directly to the Chief Technology Officer, you will have the opportunity to lead the development of tools and technology used in-house and released on marketplace platforms.

Responsibilities: Shape how data is accessed and used, including internal tools, dashboards, and analytics interfaces. Update and develop the BASE WordPress site. Respond to internal and external requests for ad-hoc data analysis. Design, build, and launch collections of sophisticated data models and visualisations that support multiple use cases across different products or domains. Develop and maintain user-friendly, high-quality data pipelines, ensuring accessibility and usability. Ensure alignment with architectural, security, and privacy standards while enabling data-driven decision-making. Create and maintain data models that drive key performance indicators (KPIs) for clients. Be excited to grow into the role and help define it as it evolves.

Skills and experience required: 2+ years of experience in data engineering, including creating reliable, efficient, and scalable data pipelines and experiences. Experience working with commerce, DV360 or Amazon Advertising datasets. Python (Django, REST frameworks), SQL/NoSQL, and JavaScript frameworks. Experience with the Amazon Ads API and SP-API. Knowledge of cloud-based data platforms (e.g. AWS). Knowledge of Clean Rooms (e.g. LiveRamp, Amazon Marketing Cloud, Snowflake). Proficiency with core AWS services (API Gateway, Lambda, DynamoDB, SNS, SQS).