cover image
AI Security Institute

AI Security Institute

www.aisi.gov.uk

5 Jobs

126 Employees

About the Company

We’re building a team of world leading talent to advance our understanding of frontier AI and strengthen protections against the risks it poses – come and join us: https://www.aisi.gov.uk/.

The AISI is part of the UK Government's Department for Science, Innovation and Technology.

Listed Jobs

Company background Company brand
Company Name
AI Security Institute
Job Title
Research Engineer - Post-Training
Job Description
About The AI Security Institute

The AI Security Institute is the largest team in a government dedicated to understanding AI capabilities and risks in the world.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks, enable crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi, they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.

About The Team

The Post-Training Team is dedicated to optimising AI systems to achieve state-of-the-art performance across the various risk domains that AISI focuses on. This is accomplished through a combination of scaffolding, prompting, supervised and RL fine-tuning of the AI models which AISI has access to.

One of the main focuses of our evaluation teams is estimating how new models might affect the capabilities of AI systems in specific domains. To improve confidence in our assessments, we make significant effort to enhance the model's performance in the domain s of interest.

For many of our evaluations, this means taking a model we have been given access to and embedding it as part of a wider AI system—for example, in our cybersecurity evaluations, we provide models with access to tools for interacting with the underlying operating system and repeatedly call models to act in such environment. In our evaluations which do not require agentic capabilities, we may use elicitation techniques like fine-tuning and prompt engineering to ensure assessing the model at its full capacity / our assessment does not miss capabilities that might be present in the model.

About The Role

As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split into two sub-teams: Agents and Finetuning. Our Agents sub-team focuses on developing the LLM tools and scaffolding to create highly capable LLM-based agents, while our fine-tuning team builds out fine-tuning pipelines to improve models on our domains of interest.

The Post-Training team is seeking strong Research Engineers to join the team. The priorities of the team include both research-oriented tasks—such as designing new techniques for scaling inference-time computation or developing methodologies for in-depth analysis of agent behaviour—and engineering-oriented tasks—like implementing new tools for our LLM agents or creating pipelines for supporting and fine-tuning large open-source models. We recognise that some technical staff may prefer to span or alternate between engineering and research responsibilities, and this versatility is something we actively look for in our hires.

You’ll receive mentorship and coaching from your manager and the technical leads on your team, and regularly interact with world-class researchers and other exceptional staff, including alumni from Anthropic, DeepMind, OpenAI.

In addition to junior roles, we offer Senior, Staff, and Principal Research Engineer positions for candidates with the requisite seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

Experience conducting empirical m achine l earning research (e.g. PhD in a technical field and/or papers at top ML conferences), particularly on LLMs.
Experience with machine learning engineering, or extensive experience as a software engineer with a strong demonstration of relevant skills/knowledge in the machine learning.
An ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.

Particularly Strong Candidates Also Have The Following Experience

Building LLM agents in industry or open-source collectives, particularly in areas adjacent to the main interests of one of our workstreams e.g. in-IDE coding assistants, research assistants etc. (for our Agents subteam)
Leading research on improving and measuring the capabilities of LLM agents (for our Agents sub - team)
Building pipelines for fine-tuning (or pretraining LLMs). Finetuning with RL techniques is particularly relevant (for our Finetuning subteam) .
Finetuning or pretraining LLMs in a research context, particularly to achieve increased performance in specific domain s (for our Finetuning subteam).

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries are available below, salaries comprise of a base salary, technical allowance plus additional benefits as detailed on this page.

Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

There are a range of pension options available which can be found through the Civil Service website.

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

We select based on skills and experience regarding the following areas:

Research problem selection
Research Engineering
Writing code efficiently
Python
Frontier model architecture knowledge
Frontier model training knowledge
Model evaluations knowledge
AI safety research knowledge
Written communication
Verbal communication
Teamwork
Interpersonal skills
Tackle challenging problems
Learn through coaching

Desired Experience

We ad...
London, United Kingdom
On site
28-03-2025
Company background Company brand
Company Name
AI Security Institute
Job Title
Research Scientist/Research Engineer- Safeguards
Job Description
About The AI Security Institute

The AI Security Institute is the largest team in a government dedicated to understanding AI capabilities and risks in the world.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks, enable crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi, they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.

Role Description

The AI Safety Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team.

Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Safety Institute’s Safeguard Analysis Team researches such interventions, which it refers to as 'safeguards', evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.

The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise developing and analysing attacks and protections for systems based on large language models, but is also keen to hire security researchers who have historically worked outside of AI, such as in - non-exhaustively - computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.

The Team seeks people with skillsets leaning in the direction of either or both of Research Scientist and Research Engineer, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities – like assessing the threats to frontier systems and developing novel attacks – and engineering-oriented ones, such as building infrastructure for running evaluations.

In this role, you’ll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff, including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge.

In addition to Junior roles, Senior, Staff and Principle RE positions are available for candidates with the required seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

Experience working on machine learning, AI, AI security, computer security, information security, or some other security discipline in industry industry, in academia, or independently.
Experience working with a world-class research team comprised of both scientists and engineers (e.g. in a top-3 lab).
Red-teaming experience against any sort of system.
Strong written and verbal communication skills.
Comprehensive understanding of large language models (e.g. GPT-4). This includes both a broad understanding of the literature, as well as hands-on experience with things like pre-training or fine tuning LLMs.
Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things and much of the wider ecosystem/tooling.
Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
Bring your own voice and experience but also an eagerness to support your colleagues together with a willingness to do whatever is necessary for the team’s success and find new ways of getting things done within government.
Have a sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
Writing production quality code.
Improving technical standards across a team through mentoring and feedback.
Designing, shipping, and maintaining complex tech products.

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries are available below, salaries comprise of a base salary, technical allowance plus additional benefits as detailed on this page.

Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

There are a range of pension options available which can be found through the Civil Service website.

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

This job advert encompasses a range of possible research and engineering roles within the Safeguard Analysis Team. The 'required' experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:

Writing production quality code
Writing code efficiently
Python
Frontier model architecture knowledge
Frontier model training knowledge
Model evaluations knowledge
AI safety research knowledge
Security research knowledge
Research problem selection
Research science
Written communication
Verbal communication
Teamwork
Interpersonal skills
Tackle challenging problems
Learn through coaching

Additional Information

Internal Fraud Database

T...
London, United Kingdom
On site
01-04-2025
Company background Company brand
Company Name
AI Security Institute
Job Title
Research Scientist - Post-Training
Job Description
About The AI Security Institute

The AI Security Institute is the largest team in a government dedicated to understanding AI capabilities and risks in the world.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks, enable crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi, they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.

About The Team

The Post-Training Team is dedicated to optimising AI systems to achieve state-of-the-art performance across the various risk domains that AISI focuses on. This is accomplished through a combination of scaffolding, prompting, supervised and RL fine-tuning of the AI models which AISI has access to.

One of the main focuses of our evaluation teams is estimating how new models might affect the capabilities of AI systems in specific domains. To improve confidence in our assessments, we make significant effort to enhance the model's performance in the domains of interest.

For many of our evaluations, this means taking a model we have been given access to and embedding it as part of a wider AI system—for example, in our cybersecurity evaluations, we provide models with access to tools for interacting with the underlying operating system and repeatedly call models to act in such environment. In our evaluations which do not require agentic capabilities, we may use elicitation techniques like fine-tuning and prompt engineering to ensure assessing the model at its full capacity / our assessment does not miss capabilities that might be present in the model.

About The Role

As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split into two sub-teams: Agents and Finetuning. Our Agents sub-team focuses on developing the LLM tools and scaffolding to create highly capable LLM-based agents, while our Finetuning Team builds out finetuning pipelines to improve models on our domains of interest.

The Post-Training Team is seeking strong Research Scientists to join the team. The priorities of the team include both research-oriented tasks—such as designing new techniques for scaling inference-time computation or developing methodologies for in-depth analysis of agent behaviour—and engineering-oriented tasks—like implementing new tools for our LLM agents or creating pipelines for supporting and fine-tuning large open-source models. We recognise that some technical staff may prefer to span or alternate between engineering and research responsibilities, and this versatility is something we actively look for in our hires.

You’ll receive mentorship and coaching from your manager and the technical leads on your team, and regularly interact with world-class researchers and other exceptional staff, including alumni from Anthropic, DeepMind and OpenAI.

In addition to junior roles, we offer Senior, Staff, and Principal Research Engineer positions for candidates with the requisite seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

Experience conducting empirical machine learning research (e.g. PhD in a technical field and/or papers at top ML conferences), particularly on LLMs.
Experience with machine learning engineering, or extensive experience as a software engineer with a strong demonstration of relevant skills/knowledge in the machine learning.
An ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.

Particularly Strong Candidates Also Have The Following Experience

Building LLM agents in industry or open-source collectives, particularly in areas adjacent to the main interests of one of our workstreams e.g. in-IDE coding assistants, research assistants etc. (for our Agents subteam)
Leading research on improving and measuring the capabilities of LLM agents (for our Agents sub-team)
Building pipelines for fine-tuning (or pretraining LLMs). Finetuning with RL techniques is particularly relevant (for our finetuning subteam) .
Finetuning or pretraining LLMs in a research context, particularly to achieve increased performance in specific domains (for our finetuning subteam).

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries are available below, salaries comprise of a base salary, technical allowance plus additional benefits as detailed on this page.

Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

There are a range of pension options available which can be found through the Civil Service website.

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

We select based on skills and experience regarding the following areas:

Research problem selection
Research Engineering
Writing code efficiently
Python
Frontier model architecture knowledge
Frontier model training knowledge
Model evaluations knowledge
AI safety research knowledge
Written communication
Verbal communication
Teamwork
Interpersonal skills
Tackle challenging problems
Learn through coaching

Desired Experience

We additi...
London, United Kingdom
On site
01-04-2025
Company background Company brand
Company Name
AI Security Institute
Job Title
Criminal Misuse Research Scientist
Job Description
About The AI Security Institute

The AI Security Institute is the largest team in a government dedicated to understanding AI capabilities and risks in the world.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks, enable crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi, they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.

Criminal Misuse Research Scientist

The AI Safety Institute research unit is looking for an exceptionally motivated and talented Research Scientist to work on our Criminal Misuse team.

Criminal Misuse

The Criminal Misuse team seeks to understand how highly capable AI systems can be misused by criminals or people wishing to cause disruption to society. The team is responsible for developing threat models that the potentially undesirable outcomes that could arise from criminal misuse of frontier AI. Additionally, the team conducts research that studies vulnerabilities to criminal misuse from AI systems already in deployment as well as those soon to be deployed and seeks to develop potential mitigations.

Criminal Misuse is a strongly collaborative research team, led by the Societal Impacts Research Director, Professor Christopher Summerfield. Within this role, you will also have the opportunity to regularly interact with our highly talented and experienced staff across the Institute (including alumni from Anthropic, DeepMind, OpenAI, and ML professors from the Universities of Oxford and Cambridge), as well as with other partners from across government.

Person Specification

We are seeking a talented and ambitious Research Scientist who excited about developing and implementing our research vision for this team. The successful candidate will have strong technical skills, including experience with Frontier AI systems, a visible research track record, and a demonstrable interest in AI Safety. Ideally, the candidate will also have experience in studying the potential misuse of Frontier AI systems, including by criminals, terrorists or other organised groups, or a track record of research relating to understanding the use of AI for topics like financial fraud, scams, influence operations, or radicalisation.

Required Skills And Experience

In this role, you will need work effectively within a team and will be expected to contribute to broader discussions about goals and strategy. However, you will be expected to be self-driven, to champion your own projects, to define the most important questions to answer, and to design and implement those experiments with the support of software engineers, data scientists and delivery staff. The Research Scientist should be prepared not only to lead research projects, but also to present them in a compelling way to decision-makers so that they create real impact. We would be particularly excited to hear from people who have the following skills and experience:

Demonstrable interest in AI Safety, and especially misuse of AI
Previous research experience with frontier AI systems (such as fine-tuning AI models)
Strong research experience in Computer Science, AI, or a related field (PhD-level or equivalent), including in experimental design and research problem selection
A track record of writing production quality code efficiently, especially using Python
Strong quantitative and statistical skills
Excellent verbal communication skills across technical and non-technical audiences
Published and/or forthcoming work on research topics relevant to the role (e.g., AI’s impact on society, multimodal AI, AI and criminal misuse)
Demonstrable experience of running research experiments involving AI models and/ or human participants
A collaborative approach to work, and experience of multi-disciplinary teamwork

Desired Skills And Experience

Experience of running AI capability evaluations
Research or technical experience with multimodal AI – in particular, audio and video
A background in research relating to the (criminal) misuse of AI systems
Experience of utilising public sector datasets

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries are available below, salaries comprise of a base salary, technical allowance plus additional benefits as detailed on this page.

Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

There are a range of pension options available which can be found through the Civil Service website.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Additional Information

Internal Fraud Database

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for rol...
London, United Kingdom
On site
01-04-2025