One of the largest artificial-intelligence projects ever undertaken by NHS England has been put on hold amid concerns it may have used the health records of 57 million people without proper consent.
The initiative, known as Foresight, employed an open-source AI model similar to Llama 2 to “predict what happens next based on previous medical events.” Researchers fed Foresight stripped-down NHS data from which personal identifiers and addresses had been removed.
Even so, privacy experts caution that anonymised health records can sometimes still be traced back to individuals — meaning re-identification risks remain even with “de-identified” data.
The project was originally approved under protocols developed during the Covid-19 pandemic, intended to fast-track pandemic-related research. However, critics note that it is unclear whether Foresight bears any direct relation to the pandemic, raising questions over whether the fast-track approval was justified. One researcher even asked, “How is it informing our understanding of Covid?”
Medical bodies including the Royal College of GPs (RCGP) and the British Medical Association have challenged the legality and ethics of the data use, arguing that the research consortium behind Foresight, led by Health Data Research UK, failed to properly inform or consult a doctors’ advisory body before proceeding. They warn that this could seriously undermine public trust in the NHS’s use of AI.
“Our patients must be confident their medical data won’t be used beyond what they’ve consented to,” said the Chair of the RCGP council. She emphasised that unless patient trust is maintained, even AI innovations with potential to ease GP workloads and improve care will be compromised.
In response, NHS England confirmed that it had paused research using Foresight, and said that clinicians will now have the opportunity to review the general data-sharing agreement under which the model was approved, even though they were not consulted about the specific project initially.
