This role will give you the opportunity to help research the safe and responsible deployment of AI Foundation Models in critical applications such as healthcare and transport. You will be part of the Centre for Assuring Autonomy (CfAA), which has pioneered approaches to assuring AI and Autonomous Systems (AI/AS).
This role offers the opportunity both to accelerate the adoption of these core ideas and to advance them further. Building on our BIG Argument for AI Safety Cases, you will conduct research to advance the safety of general-purpose AI systems deployed within complex systems and sociotechnical contexts. You will also have the opportunity to engage with PhD students from the UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems (SAINTS) through joint research projects, training activities and co-supervision.
We are seeking researchers who can demonstrate the capacity to push the boundaries of safety science for the deployment of AI Foundation Models, rather than simply applying existing techniques. Please specify if you wish to apply for this role as a Research Associate or a Research Fellow.
Grade 6 - Research Associate
First degree in Computer Science, Engineering, Psychology or cognate discipline
PhD, or in the final stages thereof, in Computer Science, Engineering, Psychology, Safety Science or a cognate discipline, or equivalent experience
Knowledge of AI Foundation Models, such as LLMs, LRMs, VLMs and World Models
High-quality publications in safety science and system safety venues
Ability to research AI Foundation Models for use in safety-critical applications
Ability to work as part of a diverse and multidisciplinary team
Experience of carrying out both independent and collaborative research
Collaborative ethos
Grade 7 - Research Fellow
First degree in Computer Science, Engineering, Psychology or cognate discipline
PhD in Computer Science, Engineering, Psychology, Safety Science or a cognate discipline, or equivalent experience
Knowledge of AI Foundation Models, such as LLMs, LRMs, VLMs and World Models
Ability to research AI Foundation Models for use in safety-critical applications
Ability to lead and/or take responsibility for a small research project or identified parts of a large project
Ability to supervise the work of others, for example in research teams or projects
Ability to work as part of a diverse and multidisciplinary team
Experience of carrying out both independent and collaborative research
Collaborative ethos
Interview date: w/c 18th May
For informal enquiries, please contact Professor Ibrahim Habli at Ibrahim.Habli@york.ac.uk
The University strives to be diverse and inclusive – a place where we can ALL be ourselves.
We particularly encourage applications from people who identify as Black, Asian or from a Minority Ethnic background, who are underrepresented at the University.
We offer family friendly, flexible working arrangements, with forums and inclusive facilities to support our staff. #EqualityatYork
As a Disability Confident employer, we will ensure that a fair and proportionate number of disabled applicants who meet the minimum (essential) criteria for each position are offered an interview. Read more about the University of York's commitments under the Disability Confident scheme.

York is one of the most successful universities in the UK.
With world-class activity across the spectrum, from the physical sciences, life sciences and social sciences to the humanities, we have been recognised as one of the top 100 universities in the world, gaining outstanding results in official assessments of our research and teaching.