"For OLS regression, does a higher R-squared also imply a higher p-value? In particular for a single explanatory variable (Y = a + bX + e)."
With a single explanatory variable, the relationship between $R^2$ and the $t$ statistic (and hence the p-value) is:

$$|t| = \sqrt{\frac{R^2}{1-R^2}(n-2)}$$
So in this case, once you fix $n$, the higher the $R^2$, the higher the $t$ statistic and the lower the p-value.
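You can check this identity numerically. Below is a minimal sketch (my own simulated data, not from the original question) that fits a simple OLS model with NumPy and confirms that the slope's $t$ statistic matches the expression above.

```python
import numpy as np

# Simulated data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

# OLS fit: design matrix with an intercept column, solved by least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R^2 from the residual and total sums of squares
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# t statistic for the slope: estimate divided by its standard error
sigma2 = (resid @ resid) / (n - 2)          # residual variance
se_slope = np.sqrt(sigma2 / ((x - x.mean()) @ (x - x.mean())))
t_slope = beta[1] / se_slope

# The two printed values agree up to floating-point error
print(abs(t_slope), np.sqrt(r2 / (1 - r2) * (n - 2)))
```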
"but would also be interested to know for n multiple explanatory
variables (Y = a + b1X + ... bnX + e)."
The answer is the same, but instead of looking at one variable only, we now look at all variables together -- hence the F statistic, as Glen_b has shown. And here you have to fix both $n$ and the number of parameters. Or, to put it better, fix the degrees of freedom.
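For reference, the standard identity linking $R^2$ to the $F$ statistic (a textbook result, not spelled out above), with $k$ slope coefficients, is:

$$F = \frac{R^2/k}{(1-R^2)/(n-k-1)}$$

Once the degrees of freedom $k$ and $n-k-1$ are fixed, a higher $R^2$ again means a larger $F$ and a smaller p-value; with $k = 1$ this reduces to the $t^2$ expression given earlier.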
"Context - I'm performing OLS regression on a range of variables and am trying to develop the best explanatory functional form (...)"
Ok, so this is actually a different problem. If you are looking for the best explanatory functional form, you should also take a look at cross-validation techniques. Even if $R^2$ is the quantity of interest for your problem (it usually isn't), finding the best fit in-sample can be very misleading -- you usually want your findings to generalize out of sample, and proper cross-validation can help you avoid overfitting your data too much.
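To make that concrete, here is a minimal sketch of cross-validated model selection (assuming scikit-learn; the polynomial candidate forms and the simulated data are purely illustrative, not from the original question):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Simulated data (hypothetical): a nonlinear signal with noise
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)

# Candidate functional forms: polynomials of increasing degree
for degree in (1, 2, 3, 5, 10):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    # 5-fold cross-validated mean squared error (scikit-learn returns it negated)
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"degree {degree:2d}: CV MSE = {mse:.4f}")
```

In-sample $R^2$ can only increase as you add flexibility, while the cross-validated error will eventually get worse -- which is exactly the overfitting warned about above.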
And here I'm guessing that you want "predictive" power (since you say you want to find "the best explanatory functional form"). If you want to do causal inference, for instance, then the $R^2$ or other predictive performance metrics are of little help without more structural/substantive knowledge of the problem.