If you're screening resumes automatically, you should remove all data that you don't want in the screening criteria - name, sex, age, address, etc. (as some organisations already do before human screening).
I assume GPT should be able to do that fairly well without bias.
It's important to understand this, but it's very easy to work around the problem. Give the language model the information you want it to make the decision from, and withhold the information you don't. Strip the name and substitute an ID that the model can use to associate its answer with the candidate. This can be pipelined: names are replaced automatically on the way in and reinserted on the way out, so the person using the system never has to track IDs manually.
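A minimal sketch of that pipeline in Python, assuming the PII values have already been identified (a real system would use NER or structured form fields rather than exact string matching, and the sample resume and decision text here are made up for illustration):

```python
import uuid

def redact(resume_text: str, pii_values: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known PII value with an opaque ID, returning the
    redacted text and the ID -> original-value mapping."""
    mapping = {}
    redacted = resume_text
    for value in pii_values:
        placeholder = f"ID-{uuid.uuid4().hex[:8]}"
        mapping[placeholder] = value
        redacted = redacted.replace(value, placeholder)
    return redacted, mapping

def reinsert(text: str, mapping: dict[str, str]) -> str:
    """Swap the opaque IDs back for the original values after screening."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

resume = "Jane Doe, 34, Springfield. Five years of backend Python experience."
redacted, mapping = redact(resume, ["Jane Doe", "34", "Springfield"])
# `redacted` is what goes to the model; it refers to the candidate only by ID.
# Pretend this is the model's screening output:
decision = f"{list(mapping)[0]}: strong backend match, recommend interview."
print(reinsert(decision, mapping))  # original name restored for the human reviewer
```

The point is that the mapping only ever lives on the application side, so the model never sees the redacted attributes, and the user never sees the IDs.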