Research course

Bias Testing and Correction in Large Language Models

Institution
University of Salford · School of Science, Engineering and Environment
Qualifications
PhD

Entry requirements

Please use this Research Proposal, Personal statement and CV writing guide when preparing an application.

Months of entry

Anytime

Course content

The rapid deployment of Large Language Models (LLMs) across industries such as healthcare, finance, and education has highlighted concerns around bias, fairness, and ethical implications. While LLMs have the potential to transform these sectors, they often produce biased outcomes because of the data on which they are trained. This PhD project proposes the development of a comprehensive framework for testing, identifying, and correcting bias in LLMs, using context-specific metrics and test scenarios.

Research Objectives:

Develop algorithms and techniques to systematically detect bias in LLMs, focusing on sector-specific applications (a brief illustrative sketch follows this list).

Create methodologies to mitigate identified biases without degrading the performance of the LLMs.

Test the framework in real-world scenarios within different sectors and validate its effectiveness.
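By way of illustration only (the project will define its own methods and metrics), one simple starting point for the first objective is counterfactual probing: fill otherwise identical prompt templates with different demographic terms and compare how the model under test scores or responds to each variant. The Python sketch below is a minimal, hypothetical example; the templates, group terms, and `score_fn` scorer are placeholders standing in for the sector-specific prompts and the LLM being audited.

```python
from itertools import product
from statistics import mean
from typing import Callable, Dict, List

# Hypothetical templates and group terms for illustration; a real study would
# use validated, sector-specific prompt sets (e.g. clinical or financial text).
TEMPLATES = [
    "The {group} patient described their symptoms clearly.",
    "The {group} applicant has a strong credit history.",
]
GROUPS = ["young", "elderly", "male", "female"]


def counterfactual_gap(score_fn: Callable[[str], float],
                       templates: List[str],
                       groups: List[str]) -> Dict[str, float]:
    """Fill each template with each group term, score the completions with the
    model under test, and report the mean score per group. Large gaps between
    groups on otherwise identical prompts are one simple signal of bias."""
    per_group: Dict[str, List[float]] = {g: [] for g in groups}
    for template, group in product(templates, groups):
        prompt = template.format(group=group)
        per_group[group].append(score_fn(prompt))
    return {g: mean(scores) for g, scores in per_group.items()}


if __name__ == "__main__":
    # Stand-in scorer: a real experiment would query the LLM under audit,
    # e.g. scoring the sentiment or toxicity of its generated continuation.
    toy_score = lambda text: 1.0 if "elderly" not in text else 0.6
    print(counterfactual_gap(toy_score, TEMPLATES, GROUPS))
```
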

Fees and funding

This programme is self-funded.

To enquire about University of Salford funding schemes – including the Widening Participation Scholarship – visit this website.

Qualification, course duration and attendance options

  • PhD
    • Full time: 36 months; campus-based learning is available for this qualification
    • Part time: 60 months; campus-based learning is available for this qualification

Course contact details

Name
SEE PGR Support
Email
PGR-SupportSSEE@salford.ac.uk