I disagree: the answer ought to be whatever was planned in advance (before they saw the initial data). Once they've peeked at the data, it's a bad idea to change the experimental procedure, as is being suggested here.
The plan should have included some accommodation for non-response (like a reminder email, a suitable reward, etc.), but if it didn't, it's too late now. They should analyze the data they collected and apply the lessons learned to the next survey.
A single reminder email is probably not a big source of bias, but something like an increased reward really could be. To take it to an extreme: if the school emailed the 30 non-responders, offered them $100,000 cash to fill out the survey, and then asked "How do you feel about your school?", it would get very different responses than it got from the students who were offered a $5 bookstore gift card.
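To put a rough number on that intuition, here is a minimal simulation sketch. All the specifics are hypothetical: the satisfaction scores, the response model, and the assumed one-point "gratitude" bump from the big reward are made up purely for illustration. It just shows how chasing non-responders with a very different incentive can shift the pooled estimate away from both the truth and the original wave:

```python
import numpy as np

# Hypothetical setup: 100 students with true "satisfaction" scores on a
# 1-5 scale, where less-satisfied students are less likely to respond to
# the original $5 invitation.
rng = np.random.default_rng(0)

n = 100
true_satisfaction = rng.normal(loc=3.5, scale=1.0, size=n).clip(1, 5)

# Response probability rises with satisfaction under the original incentive.
p_respond = 1 / (1 + np.exp(-(true_satisfaction - 3.0)))
responded = rng.random(n) < p_respond

# Wave 1: answers from the original responders (assume they answer honestly).
wave1 = true_satisfaction[responded]

# Wave 2: the non-responders are chased with a large reward. Suppose the
# windfall nudges their reported scores up by a full point (a made-up
# "gratitude" effect) on top of their true, lower, satisfaction.
wave2 = (true_satisfaction[~responded] + 1.0).clip(1, 5)

print(f"True mean (all students):  {true_satisfaction.mean():.2f}")
print(f"Wave-1 mean (original $5): {wave1.mean():.2f}")
print(f"Wave-2 mean (big reward):  {wave2.mean():.2f}")
print(f"Pooled mean of both waves: {np.concatenate([wave1, wave2]).mean():.2f}")
```

Under these made-up assumptions the two waves aren't measuring the same thing, so pooling them doesn't recover the true mean; it just mixes two differently biased samples.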
Exactly: it's not about always doing X and never doing Y, but about designing a procedure that best answers your research question (within the constraints of practicality) and reporting the results in a way that accurately reflects what you really did.