If someone is very good at prompt engineering as well as SQL injection, they can find a loophole to exploit it because all existing prompts that people could think of or try is already being fixed by the LLMs team.
It really depends on what kind of prompt injection we are talking about. If you literally override all instructions, then it can be bad depending on the access it has. But if it has no access to call or alter anything, then it’s less bad.
You could do one of the following things via bypasses but all risks are different Imho:
Sensitive Data Exposure
Trojanize the Model
Model Poisoning
CryWolf through the Model
And what I mean with this is that CryWolf might be annoying but less of a risk then Sensitive Data Exposure but it’s both done through prompt injection.
I’ve already hacked many LLM/ML systems and the risk is different for each company
So I’ve tagged on “High Risk” in your poll because I think, it should be treated as any other regular application and all vulnerabilities can be dangerous if you know how to (ab)use them.