I believe it should be clear by now that "the CLT approach" gives the right answer.
Let's pinpoint exactly where the "LLN approach" goes wrong.
Starting with the finite-$n$ statements, it is clear that we can equivalently either subtract $\sqrt{n}$ from both sides or multiply both sides by $1/\sqrt{n}$. We get
$$P\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i \le \sqrt{n}\right) = P\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i-1) \le 0\right) = P\left(\frac{1}{n}\sum_{i=1}^n X_i \le 1\right)$$
So if the limit exists, it will be the same for all three probabilities. Setting $Z_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i-1)$ and writing $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$, we have, in terms of distribution functions,
$$P\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i \le \sqrt{n}\right) = F_{Z_n}(0) = F_{\bar X_n}(1)$$
...and it is true that $\lim_{n\to\infty} F_{Z_n}(0) = \Phi(0) = 1/2$.
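As a quick numerical check of this limit, here is a minimal sketch. It assumes, purely for illustration, that the $X_i$ are i.i.d. Poisson$(1)$ (mean $1$, variance $1$, consistent with the setup above), so that $\sum_{i=1}^n X_i \sim \text{Poisson}(n)$ and $F_{\bar X_n}(1)$ can be computed exactly:

```python
from scipy import stats

# Illustrative assumption: X_i ~ Poisson(1) i.i.d., so sum_{i=1}^n X_i ~ Poisson(n)
# and F_{Xbar_n}(1) = P(Xbar_n <= 1) = P(Poisson(n) <= n).
for n in [10, 100, 1_000, 10_000, 100_000]:
    p = stats.poisson.cdf(n, mu=n)
    print(f"n = {n:>7}:  F_Xbar_n(1) = {p:.5f}")
# The values decrease towards Phi(0) = 0.5, not towards F_1(1) = 1.
```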
The thinking in the "LLN approach" goes as follows: "We know from the LLN that $\bar X_n$ converges in probability to a constant. And we also know that "convergence in probability implies convergence in distribution". So $\bar X_n$ converges in distribution to a constant." Up to here we are correct.
Then we state: "therefore, limiting probabilities for $\bar X_n$ are given by the distribution function of the random variable that is constant at $1$",
$$F_1(x) = \begin{cases} 1 & x \ge 1 \\ 0 & x < 1 \end{cases} \implies F_1(1) = 1$$
...so $\lim_{n\to\infty} F_{\bar X_n}(1) = F_1(1) = 1$...
...and we have just made our mistake. Why? Because, as @AlexR.'s answer noted, "convergence in distribution" covers only the points of continuity of the limiting distribution function, and $1$ is a point of discontinuity of $F_1$. This means that $\lim_{n\to\infty} F_{\bar X_n}(1)$ may be equal to $F_1(1)$, or it may not be, without negating the "convergence in distribution to a constant" implication of the LLN.
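Continuing the Poisson$(1)$ sketch from above, one can see numerically that $F_{\bar X_n}$ does converge to $F_1$ at the continuity points $1 \pm \varepsilon$, while at the discontinuity point $1$ it settles near $1/2$ rather than at $F_1(1) = 1$:

```python
import numpy as np
from scipy import stats

eps = 0.05
for n in [100, 1_000, 10_000, 100_000]:
    f_below = stats.poisson.cdf(np.floor(n * (1 - eps)), mu=n)  # F_{Xbar_n}(1 - eps)
    f_at    = stats.poisson.cdf(n, mu=n)                        # F_{Xbar_n}(1)
    f_above = stats.poisson.cdf(np.floor(n * (1 + eps)), mu=n)  # F_{Xbar_n}(1 + eps)
    print(f"n = {n:>7}:  F(1-eps) = {f_below:.4f}   F(1) = {f_at:.4f}   F(1+eps) = {f_above:.4f}")
# F(1 - eps) -> 0 = F_1(1 - eps) and F(1 + eps) -> 1 = F_1(1 + eps),
# while F(1) stays near 1/2, away from F_1(1) = 1.
```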
And from the CLT approach we already know what the value of the limit must be: $1/2$. I do not know of a way to prove directly (i.e., without going through the CLT) that $\lim_{n\to\infty} F_{\bar X_n}(1) = 1/2$.
Did we learn anything new?
I did. The LLN asserts that
$$\lim_{n\to\infty} P\left(\left|\bar X_n - 1\right| \le \varepsilon\right) = 1 \quad \text{for all } \varepsilon > 0$$
$$\implies \lim_{n\to\infty} \Big[P\left(1-\varepsilon < \bar X_n \le 1\right) + P\left(1 < \bar X_n \le 1+\varepsilon\right)\Big] = 1$$
$$\implies \lim_{n\to\infty} \Big[P\left(\bar X_n \le 1\right) + P\left(1 < \bar X_n \le 1+\varepsilon\right)\Big] = 1$$
(the last step uses that $P(\bar X_n \le 1-\varepsilon) \to 0$, again by the LLN).
The LLN does not say how the probability is allocated within the interval $(1-\varepsilon, 1+\varepsilon)$. What I learned is that, in this class of convergence results, where the scaled and centered variable has a limiting distribution symmetric about zero, the probability is, in the limit, allocated equally on the two sides of the centerpoint of the collapsing interval.
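Sticking with the Poisson$(1)$ illustration, here is a sketch of this equal split: both pieces of the LLN's unit mass over $(1-\varepsilon, 1+\varepsilon]$ tend to $1/2$.

```python
import numpy as np
from scipy import stats

eps = 0.05
for n in [100, 1_000, 10_000, 100_000]:
    F = lambda x: stats.poisson.cdf(np.floor(n * x), mu=n)  # F_{Xbar_n}(x) under the Poisson(1) assumption
    left  = F(1.0) - F(1.0 - eps)   # P(1 - eps < Xbar_n <= 1)
    right = F(1.0 + eps) - F(1.0)   # P(1 < Xbar_n <= 1 + eps)
    print(f"n = {n:>7}:  left = {left:.4f}   right = {right:.4f}   total = {left + right:.4f}")
# Each side tends to 1/2: the mass splits equally around the centerpoint 1.
```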
The general statement here is: assume
$$X_n \to_p \theta, \qquad h(n)\,(X_n - \theta) \to_d D(0, V),$$
where $D$ is some random variable with distribution function $F_D$. Then
$$\lim_{n\to\infty} P\left[X_n \le \theta\right] = \lim_{n\to\infty} P\left[h(n)\,(X_n - \theta) \le 0\right] = F_D(0)$$
...which may not be equal to $F_\theta(\theta) = 1$, the value that the distribution function of the constant random variable $\theta$ takes there (and which the naive LLN argument would deliver).
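As a sketch of this general statement with a different family (the exponential choice and the value of `theta` below are my own illustration, not part of the question): take $X_n = \bar Y_n$ with $Y_i$ i.i.d. exponential with mean $\theta$, so $h(n) = \sqrt{n}$ and $D \sim N(0, \theta^2)$, giving $F_D(0) = 1/2$.

```python
from scipy import stats

theta = 2.0  # illustrative value
for n in [10, 100, 1_000, 10_000]:
    # The sum of n i.i.d. exponentials with mean theta is Gamma(shape=n, scale=theta),
    # so P[Xbar_n <= theta] = P(Gamma(n, theta) <= n * theta).
    p = stats.gamma.cdf(n * theta, a=n, scale=theta)
    print(f"n = {n:>6}:  P(Xbar_n <= theta) = {p:.5f}")
# The probabilities approach F_D(0) = 1/2, not the value 1 given by the constant rv's distribution function.
```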
Also, this is a striking example of the fact that, when the distribution function of the limiting random variable has discontinuities, "convergence in distribution to a random variable" may describe a situation where "the limiting distribution" disagrees with the "distribution of the limiting random variable" at the discontinuity points.
Strictly speaking, at the continuity points the limiting distribution is that of the constant random variable. At the discontinuity points we may be able to calculate the limiting probabilities as "separate" entities, as was done here.