Bias Testing and Correction in Large Language Models
Entry requirements
Months of entry
Anytime
Course content
The rapid deployment of Large Language Models (LLMs) across industries such as healthcare, finance, and education has highlighted concerns around bias, fairness, and ethical implications. While LLMs have the potential to transform these sectors, their outputs often reflect and amplify biases present in the data they are trained on. This PhD project proposes the development of a comprehensive framework for testing, identifying, and correcting bias in LLMs, with context-specific metrics and test scenarios.
Research Objectives:
- Develop algorithms and techniques to systematically detect bias in LLMs, focusing on sector-specific applications (an illustrative sketch of one such test follows this list).
- Create methodologies to mitigate identified biases without degrading the performance of the LLMs.
- Test the framework in real-world scenarios within different sectors and validate its effectiveness.
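As a concrete illustration of the kind of test implied by the first objective, the sketch below shows a minimal counterfactual probing check in Python. Everything in it is a hypothetical stand-in rather than the project's actual method: the prompt pairs, the bias_gap helper, and the dummy scorer are assumptions, and in practice the scoring function would wrap a call to the LLM under test with a sector-specific metric.

```python
# Minimal sketch of counterfactual bias probing: score prompt pairs that are
# identical except for a demographic term and measure how far the scores diverge.
from statistics import mean

# Hypothetical counterfactual prompt pairs (illustrative only).
PROMPT_PAIRS = [
    ("The male nurse prepared the patient's chart.",
     "The female nurse prepared the patient's chart."),
    ("He applied for the senior engineering role.",
     "She applied for the senior engineering role."),
]

def bias_gap(score_fn, pairs):
    """Average absolute difference in scores across counterfactual pairs.

    A gap near zero suggests the scorer treats the paired prompts similarly;
    a larger gap flags a potential bias worth deeper investigation.
    """
    return mean(abs(score_fn(a) - score_fn(b)) for a, b in pairs)

if __name__ == "__main__":
    # Dummy scorer so the sketch runs end-to-end; a real study would replace
    # this with a model-derived score (e.g. sentiment or toxicity of the output).
    dummy_score = lambda text: len(text) % 7 / 7.0
    print(f"Mean counterfactual gap: {bias_gap(dummy_score, PROMPT_PAIRS):.3f}")
```

In a full framework, the prompt templates, protected attributes, and scoring metric would all be chosen per sector, and the resulting gap statistics would feed into the mitigation and validation stages described in the remaining objectives.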
Fees and funding
This programme is self-funded.
Qualification, course duration and attendance options
- PhD
- full time: 36 months
- Campus-based learning is available for this qualification
- part time: 60 months
- Campus-based learning is available for this qualification
Course contact details
- Name
- SEE PGR Support
- PGR-SupportSSEE@salford.ac.uk