You should use non-parametric tests when the distributional assumptions of a parametric test fail and you can’t invoke some theorem telling you that the assumptions you need will be satisfied asymptotically. For instance, consider a one-sample t-test of whether the mean differs significantly from 0. The standard assumption is that you have iid random variables and that the sample mean is Gaussian distributed. One way for this to happen is if the random variables themselves are iid Gaussian. Another is for the conditions of the central limit theorem to be satisfied, so that for large samples the distribution of the sample mean is approximately Gaussian.
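As a quick sketch of the well-behaved case (using `scipy.stats.ttest_1samp`; the sample size and seed here are arbitrary):

```python
import numpy as np
from scipy import stats

# An iid Gaussian sample, so the t-test's assumptions hold exactly.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)

# One-sample t-test of H0: the population mean is 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

The same call is also reasonable for large non-Gaussian samples, precisely because the CLT then makes the sample mean approximately Gaussian.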
If you have a small sample, or your data come from some pathological distribution for which you can’t invoke the central limit theorem, then you don’t know that the sample mean is approximately Gaussian distributed, so you really shouldn’t apply a t-test. In that case you can look into non-parametric alternatives, such as the Wilcoxon signed-rank test or the sign test.
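To illustrate the pathological case, here is a sketch using a small standard Cauchy sample (the Cauchy distribution has no finite mean or variance, so the CLT doesn’t apply) together with `scipy.stats.wilcoxon` as one possible non-parametric alternative; the seed and sample size are arbitrary:

```python
import numpy as np
from scipy import stats

# A small sample from a heavy-tailed (standard Cauchy) distribution:
# the CLT gives no guarantee here, so the t-test is unjustified.
rng = np.random.default_rng(1)
sample = rng.standard_cauchy(size=15)

# Wilcoxon signed-rank test of H0: the distribution is symmetric
# about 0. It makes no Gaussian assumption, only symmetry.
w_stat, p_value = stats.wilcoxon(sample)
print(f"W = {w_stat:.1f}, p = {p_value:.4f}")
```

Note that the Wilcoxon test still assumes symmetry about the hypothesized location; if even that is doubtful, the sign test (which tests the median) is a further fallback.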