Is there a more intuitive proof of the halting problem's undecidability than diagonalization?


30

I understand the proof of the undecidability of the halting problem based on diagonalization (for example, the one given in Papadimitriou's textbook).

Although the proof is convincing (I understand each of its steps), it is not intuitive to me, in the sense that I do not see how anyone would arrive at it by starting from the problem itself.

In the book, the proof goes like this: "Suppose MH solves the halting problem on an input M;x, that is, it decides whether the Turing machine M halts on input x. Construct a Turing machine D that takes a Turing machine M as input, runs MH(M;M), and reverses the output." It then goes on to show that D(D) cannot produce a satisfactory output.

It is the seemingly arbitrary construction of D, and in particular the idea of feeding M to itself and then D to itself, that I would like to have an intuition for. What led people to define those constructions and steps in the first place?

Can anyone explain how one might reason one's way to the diagonalization argument (or some other proof), if one did not already know which type of argument to start with?

Addendum, after a first round of answers:

So the first answer points out that the proof of the undecidability of the halting problem builds on the earlier work of Cantor and Russell and on the development of diagonalization, and that starting "from scratch" would simply mean having to rediscover that argument.

Fair enough. However, even if we accept the diagonalization argument as a well-understood given, I still find there is an "intuition gap" between it and the halting problem. I find Cantor's proof of the uncountability of the reals quite intuitive; Russell's paradox even more so.

What I still don't see is what would motivate someone to define D(M) based on M's "self-application" M;M, and then again apply D to itself. That seems to be less related to diagonalization (in the sense that Cantor's argument did not have something like it), although it obviously works well with diagonalization once you define them.

P.S.

@babou summarized what was bothering me better than I could myself: "The problem with many versions of the proof is that the constructions seem to be pulled from a magic hat."


3
Consider the possibility that any proof of the existence of uncountable sets will have to be somewhat counterintuitive, even if we get used to the fact that they are correct. Consider also the possibility that this question (if properly rephrased) belongs to math.stackexchange.com.
André Souza Lemos

4
Cantor found the diagonalization argument, and now we cannot unlearn it: Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können. ("No one shall be able to expel us from the paradise that Cantor has created for us.")
Hendrik Jan

1
After further thought, I have to ask why you think this is so different from Russell's paradox. Russell's paradox even looks the same if we use the notation S(X) to mean X ∈ S (i.e. think of sets as being functions whose values are true or false). Then Russell's paradox is to define D(M) = not M(M), and then consider D(D).

1
Diagonalization is a standard technique. Sure there was a time when it wasn't known, but it's been standard for a long time now, so your argument is simply due to your ignorance (I don't want to be rude, it's a fact: you didn't know all the other proofs that use such a technique and hence find it odd the first time you see it. When you've seen it 50 times you'll probably be able to understand how it can be applied in a new situation).
Bakuriu

1
Maybe you would read my exchange of comments with Luke Mathieson (following his answer). His answer explains historically why Turing used self-application (one thing you ask for in your question). That seems to be pretty-much how mathematicians perceived the issues at the time. My own answer tries to give a very simple proof that does not use it (or at least shows it is not essential) which is another thing you ask for, quite different. Possibly, I might make it even simpler than in my answer. Why teachers still use Turing's proof is a sociological and pedagogical (?!) issue. cc @HendrikJan
babou

Answers:


18

In your edit, you write:

What I still don't see is what would motivate someone to define D(M) based on M's "self-application" M;M, and then again apply D to itself. That seems to be less related to diagonalization (in the sense that Cantor's argument did not have something like it), although it obviously works well with diagonalization once you define them.

A common "popular" summarization of Turing's proof goes something like this:

"If we had a machine MH that could decide whether another Turing machine halts or not, we could use this to construct another machine D that, given a Turing machine M, would halt if and only if M did not halt. But then we could pass D as input to itself, and thus obtain a paradox: this machine would halt if and only if it did not halt!"

Now, it's easy to see that the summarization above glosses over an important detail — the halting of the Turing machine M also depends on its input, which we have not specified! But this issue can be fixed easily enough: we just need to have D pick some suitable input xM for each input machine M, before passing them both to MH.

What's a suitable choice for xM, given that we ultimately want to derive a contradiction? Well, a natural choice is suggested directly by the "handwavy" proof above, where we ultimately obtain the contradiction by running the machine D on itself.

Thus, for the behavior of D to really be paradoxical in this case, i.e. when invoked as D(D), what we want is for the halting of D(M) to depend on the behavior of M when invoked as M(M). This way, we'll obtain the contradiction we want by setting M=D.
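Concretely, here is a minimal Python-style sketch of such a machine (just an illustration, not part of the original argument; MH stands for the assumed halting decider, and machines are passed around by their descriptions):

def D(M):
    # Ask the assumed decider whether M halts when run on its own description.
    if MH(M, M):
        while True:   # M(M) halts, so D(M) must not halt
            pass
    else:
        return        # M(M) does not halt, so D(M) halts

With this D, the call D(D) halts if and only if D(D) does not halt, which is exactly the contradiction described above.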

Mind you, this is not the only choice; we could also have derived the same contradiction by, say, constructing a machine D′ such that D′(M) halts if and only if M(D′) (rather than M(M)) does not halt. But, whereas it's clear that the machine D can easily duplicate its input before passing it to MH, it's not quite so immediately obvious how to construct a machine D′ that would invoke MH with its own code as the input. Thus, using this D′ instead of D would needlessly complicate the proof, and make it less intuitive.


1
Wow, you really grokked my question! That is exactly the type of story I was looking for! Still reading everything, but this looks like it would be the accepted answer. Thanks!
user118967

18

It may be simply that it's mistaken to think that someone would reason their way to this argument without making a similar argument at some point prior, in a "simpler" context.

Remember that Turing knew Cantor's diagonalisation proof of the uncountability of the reals. Moreover his work is part of a history of mathematics which includes Russell's paradox (which uses a diagonalisation argument) and Gödel's first incompleteness theorem (which uses a diagonalisation argument). In fact, Gödel's result is deeply related to the proof of undecidability of the Halting Problem (and hence the negative answer to Hilbert's Entscheidungsproblem).

So my contention is that your question is in a sense badly founded and that you can't reach the Halting Problem without going past the rest (or something remarkably similar) first. While we show these things to students without going through the history, if you were a working mathematician it seems unlikely that you go from nothing to Turing Machines without anything in between - the whole point of them was to formalise computation, a problem many people had been working on for decades at that point.

Cantor didn't even use diagonalisation in his first proof of the uncountability of the reals; if we take publication dates as an approximation of when he thought of the idea (not always a reliable thing), it took him about 17 years to get from already knowing that the reals were uncountable to working out the diagonalisation argument.

In reference to the "self-application" in the proof that you mention, this is also an integral part of Russell's paradox (which entirely depends upon self-reference), and Gödel's first incompleteness theorem is like the high-powered version of Russell's paradox. The proof of the undecidability of the Halting Problem is so heavily informed by Gödel's work that it's hard to imagine getting there without it, hence the idea of "self-application" is already part of the background knowledge you need to get to the Halting Problem. Similarly, Gödel's work is a reworking of Russell's paradox, so you don't get there without the other (note that Russell was not the first to observe a paradox like this, so prototypes of the diagonalisation argument have been around in formal logic since about 600BCE). Both Turing and Gödel's work (the bits we're talking about here that is) can be viewed as increasingly powerful demonstrations of the problems with self-reference, and how it is embedded in mathematics. So once again, it's very difficult to suggest that these ideas at the level Turing was dealing with them came a priori; they were the culmination of millennia's work in parts of philosophy, mathematics and logic.

This self-reference is also part of Cantor's argument; it just isn't presented in such an unnatural language as Turing's more fundamentally logical work. Cantor's diagonalisation can be rephrased as a selection of elements from the power set of a set (essentially part of Cantor's Theorem). If we consider the set of (positive) reals as subsets of the naturals (note we don't really need the digits to be ordered for this to work, it just makes a simpler presentation) and claim there is a surjection from the naturals to the reals, then we can produce an element of the power set (i.e. a real) that is not in the image of the surjection (and hence derive a contradiction) by taking this element to be the set of naturals that are not in their own image under the surjection. Once we phrase it this way, it's much easier to see that Russell's paradox is really the naïve set theory version of the same idea.
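In symbols (a standard rendering of the same argument, not verbatim from the answer): given any map f from the naturals to subsets of the naturals, define

D = { n : n ∉ f(n) }

Then D differs from f(m) for every m, since m ∈ D if and only if m ∉ f(m), so f cannot be a surjection.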


2
Yes, it seems the whole point of Turing was to recreate circularity (from which comes diagonalization) using machines, for the sake of introducing some abstract idea of time, with which to talk about finiteness in a new way.
André Souza Lemos

Maybe you can enlighten me, as I am not familiar with some of these proofs. I can understand that these proofs can be conducted using self-referencing. I can even believe (though it might need a proof) that there is always some self-reference to be found in whatever structure is constructed for the purpose. But I do not see the need to use it explicitly to conduct the proof to its conclusion. You can rephrase Cantor's argument that way, but you do not have to. And I do not see why you have to do it for the halting problem. I may have missed a step, but which?
babou

To make my previous remark clearer, the original question is: "Is there a more intuitive proof of the halting problem's undecidability ...". I am omitting the end, since my feeling is that the OP complains mainly about the lack of intuition. I believe that there is indeed a more intuitive proof, not using self-reference. You may think that using that proof is pedagogically unwise (as not related to Russell's and Gödel's work), but if it answers the question asked, what is the point of rejecting it? You seem to be denying the question rather than answering it.
babou

@babou I think the problem here is that we're answering different questions. The OP was not well phrased in that regard I guess. The repeated question in the body of the OP seems to me to be "how did someone ever think of the diagonalisation argument to prove ..." (paraphrased of course), and that "the constructions seem to be pulled from a magic hat".
Luke Mathieson

@babou, also to elaborate a little, with a proper keyboard, I don't think one way or another is necessarily pedagogically useful (it would depend heavily on context). In fact, for most modern CS courses, it's probably better to do it without the diagonalisation argument, most CS students just aren't mathematically inclined enough any more to know the background that would make it easier to understand, but I was definitely answering the question that ended the original body text: ...
Luke Mathieson

9

Self application is not a necessary ingredient of the proof

In a nutshell

If there is a Turing machine H that solves the halting problem, then from that machine we can build another Turing machine L with a halting behavior (halting characteristic function) that cannot be the halting behavior of any Turing machine.

The paradox built on the self applied function D (called L in this answer - sorry about notation inconsistencies) is not a necessary ingredient of the proof, but a device usable with the construction of one specific contradiction, hiding what seems to be the "real purpose" of the construction. That is probably why it is not intuitive.

It seems more direct to show that there is only a denumerable number of halting behaviors (no more than Turing machines), that can be defined as characteristic halting functions associated with each Turing machine. One can define constructively a characteristic halting function not in the list, and build from it, and from a machine H that solves the halting problem, a machine L that has that new characteristic halting function. But since, by construction, it is not the characteristic halting function of a Turing machine, L cannot be one. Since L is built from H using Turing machine building techniques, H cannot be a Turing machine.

The self-application of L to itself, used in many proofs, is a way to show the contradiction. But it works only when the impossible characteristic halting function is built from the diagonal of the list of Turing permitted characteristic halting functions, by flipping this diagonal (exchanging 0 and 1). But there are infinitely many other ways of building a new characteristic halting function. Then non-Turing-ness can no longer be evidenced with a liar paradox (at least not simply). The self-application construction is not intuitive because it is not essential, but it looks slick when pulled out of the magic hat.

Basically, L is not a Turing machine because it is designed from the start to have a halting behavior that is not that of a Turing machine, and that can be shown more directly, hence more intuitively.

Note: It may be that, for any constructive choice of the impossible characteristic halting function, there is a computable reordering of the Turing machine enumeration such that it becomes the diagonal ( I do not know). But, imho, this does not change the fact that self-application is an indirect proof technique that is hiding a more intuitive and interesting fact.

Detailed analysis of the proofs

I am not going to be historical (but thanks to those who are, I enjoy it), but I am only trying to work the intuitive side.

I think that the presentation given by @vzn, which I did encounter a long time ago (I had forgotten), is actually rather intuitive, and even explains the name diagonalization. I am repeating it in detail only because I feel @vzn did not emphasize its simplicity enough.

My purpose is to have an intuitive way to retrieve the proof, knowing that of Cantor. The problem with many versions of the proof is that the constructions seem to be pulled from a magic hat.

The proof that I give is not exactly the same as in the question, but it is correct, as far as I can see. If I did not make a mistake, it is intuitive enough since I could retrieve it after more years than I care to count, working on very different issues.

The case of the subsets of N (Cantor)

Cantor's proof assumes (it is only a hypothesis) that there is an enumeration of the subsets of the integers, so that each such subset S_j can be described by its characteristic function C_j(i), which is 1 if i ∈ S_j and 0 otherwise.

This may be seen as a table T, such that T[i,j] = C_j(i).

Then, considering the diagonal, we build a characteristic function D such that D(i) = 1 − T[i,i], i.e. it is identical to the diagonal of the table with every bit flipped to the other value.

There is nothing special about the diagonal, except that it is an easy way to get a characteristic function D that is different from all others, and that is all we need.

Hence, the subset characterized by D cannot be in the enumeration. Since that would be true of any enumeration, there cannot be an enumeration that enumerates all the subsets of N.
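As a small illustration in Python (a sketch only, with the enumeration given as a function C such that C(j) is the characteristic function C_j):

def flipped_diagonal(C):
    # C(j) is the characteristic function of the j-th subset in the enumeration.
    def D(i):
        return 1 - C(i)(i)   # flip the diagonal entry T[i,i]
    return D

For every j, the resulting D differs from C(j) at input j, so D characterizes a subset that the enumeration misses.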

This is admittedly, according to the initial question, fairly intuitive. Can we make the proof of the halting problem as intuitive?

The case of the halting problem (Turing)

We assume we have an enumeration of Turing machines (which we know is possible). The halting behavior of a Turing machine M_j can be described by its characteristic halting function H_j(i), which is 1 if M_j halts on input i and 0 otherwise.

This may be seen as a table T, such that T[i,j] = H_j(i).

Then, considering the diagonal, we build a characteristic halting function D such that D(i) = 1 − T[i,i], i.e. it is identical to the diagonal of the table with every bit flipped to the other value.

There is nothing special about the diagonal, except that it is an easy way to get a characteristic halting function D that is different from all others, and that is all we need (see note at the bottom).

Hence, the halting behavior characterized by D cannot be that of a Turing machine in the enumeration. Since we enumerated them all, we conclude that there is no Turing machine with that behavior.

No halting oracle so far, and no computability hypothesis: we know nothing of the computability of T and of the functions H_j.

Now suppose we have a Turing machine H that can solve the halting problem, such that H(i,j) always halts with H_j(i) as result.

We want to prove that, given H, we can build a machine L that has the characteristic halting function D. The machine L is nearly identical to H, so that L(i) mimics H(i,i), except that whenever H(i,i) is about to terminate with value 1, L(i) goes into an infinite loop and does not terminate.
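In Python-like pseudocode (a sketch only; H is the assumed decider, taking a machine index and an input):

def L(i):
    if H(i, i) == 1:     # machine M_i halts on input i ...
        while True:      # ... so L deliberately loops forever on i
            pass
    else:
        return           # M_i does not halt on i, so L halts on i

By construction, L halts on i exactly when T[i,i] = 0, so its characteristic halting function is the flipped diagonal D.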

It is quite clear that we can build such a machine L if H exists. Hence this machine should be in our initial enumeration of all machines (which we know is possible). But it cannot be since its halting behavior D corresponds to none of the machines enumerated. Machine L cannot exist, which implies that H cannot exist.

I deliberately mimicked the first proof and went into tiny details

My feeling is that the steps come naturally in this way, especially when one considers Cantor's proof as reasonably intuitive.

One first enumerates the litigious constructs. Then one takes and modifies the diagonal as a convenient way of touching all of them to get an unaccounted for behaviour, then gets a contradiction by exhibiting an object that has the unaccounted for behaviour ... if some hypothesis were to be true: existence of the enumeration for Cantor, and existence of a computable halting oracle for Turing.

Note: To define the function D, we could replace the flipped diagonal by any other characteristic halting function, different from all the ones listed in T, that is computable (from the ones listed in T, for example) provided a halting oracle is available. Then the machine L would have to be constructed accordingly, to have D as characteristic halting function, and L(i) would make use of the machine H, but not mimic so directly H(i,i). The choice of the diagonal makes it much simpler.

Comparison with the "other" proof

The function L defined here is apparently the analog of the function D in the proof described in the question.

We only build it in such a way that it has a characteristic halting function that corresponds to no Turing machine, and get directly a contradiction from that. This gives us the freedom of not using the diagonal (for what it is worth).

The idea of the "usual" proof seems to try to kill what I see as a dead fish. It says: let's assume that L is one of the machines that were listed (i.e., all of them). Then it has an index j_L in that enumeration: L = M_{j_L}. Then if L(j_L) halts, we have T[j_L,j_L] = H(j_L,j_L) = 1, so that L(j_L) will loop by construction. Conversely, if L(j_L) does not halt, then T[j_L,j_L] = H(j_L,j_L) = 0, so that L(j_L) will halt by construction. Thus we have a contradiction. But the contradiction results from the way the characteristic halting function of L was constructed, and it seems a lot simpler just to say that L cannot be a Turing machine because it is constructed to have a characteristic halting function that is not that of a Turing machine.

A side-point is that this usual proof would be a lot more painful if we did not choose the diagonal, while the direct approach used above has no problem with it. Whether that can be useful, I do not know.


Very nice, thank you! It seems that somehow you managed to go around the self-applying constructions that I found troublesome. Now I wonder why people found them necessary in the first place.
user118967

@user118967 I tried to underscore that using the diagonal is not really important. All you want is to define a characteristic halting function that is different from all those listed in the table, and that is computable from those listed, provided we have a halting oracle. There are infinitely many such characteristic halting functions. Now that seems not so visible in the usual proof, and it may be that some constructs of that proof seem arbitrary simply because they are, like choosing the diagonal in the proof above. It is only simple, not essential.
babou

@user118967 I added an introduction that summarizes the analysis of the various proofs. It complements the comparison between proofs (with and without self-application) that is given at the end. I do not know whether I did away with diagonalization as asked :) (I think it would be unfair to say so) but I do hint at how to do away with the obvious diagonal. And the proof does not use self-application, which seems an unnecessary, but slick-looking, trick hiding what may seem a more important issue, the halting behavior.
babou

@user118967 To answer your first comment, and after reading the most upvoted answer, it seems that the main motivation is the link with the work of Russell and Gödel. Now I have no idea whether it is really essential for that purpose, and the self-applying constructions variant can certainly be studied for that purpose, but I don't see the point of imposing it on everyone. Furthermore, the more direct proof seems more intuitive, and does give the tools to further analyse the self-applying version. Why then?
babou

Yes, I tend to agree with you on that.
user118967

8

There is also a proof of this fact that uses a different paradox, Berry's paradox, which I heard from Ran Raz.

Suppose that the halting problem were computable. Let B(n) be the smallest natural number that cannot be computed by a C program of length at most n. That is, if S(n) is the set of natural numbers computed by C programs of length at most n, then B(n) is the smallest natural number not in S(n).

Consider the following program:

  1. Go over all C programs of length at most n.

  2. For each such program, check if it halts; if it does, run it and add the number it computes to a list L.

  3. Output the first natural number not in L.

This is a program for computing B(n). How large is this program? Encoding n takes O(log n) characters, and the rest of the program doesn't depend on n, so in total the length is O(log n), say at most C log n. Choose N so that C log N ≤ N. Then our program, whose length is at most N, computes B(N), contradicting the definition of B(N).
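A rough Python rendering of the same program (an illustration only; programs_of_length_at_most, halts and run are hypothetical helpers that enumerate program texts, apply the assumed halting decider, and return a program's output):

def B_program(n):
    computed = set()
    for p in programs_of_length_at_most(n):   # step 1: go over all short programs
        if halts(p):                          # step 2: keep only the halting ones
            computed.add(run(p))
    m = 0
    while m in computed:                      # step 3: first number not computed
        m += 1
    return m

Note that n appears only once in the source, so hard-coding a specific value N keeps the program length at O(log N), which is what the length argument above relies on.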

The same idea can be used to prove Gödel's incompleteness theorems, as shown by Kritchman and Raz.


Perhaps it's in the paper I cite, or in the classic monograph Kolmogorov Complexity by Li and Vitányi.
Yuval Filmus

By the way, do you think that this method provides an attack on the NP vs CoNP problem?
Mohammad Al-Turkistany

No. Such problems are beyond us at the moment.
Yuval Filmus

"and the rest of the program doesn't depend on n" Why?
SK19

The parameter n only appears once in the program. The execution of the program depends on n, but n itself only appears once in its source code.
Yuval Filmus

6

There's a more general idea involved here called the "recursion theorem" that may be more intuitive: Turing machines can use their own description (and thus run themselves). More precisely, there is a theorem:

For any Turing machine T, there is a Turing machine R that computes R(x) = T(R;x).

If we had a Turing machine that could solve the halting problem, then using the idea described above, we can easily construct a variety of "liar" turing machines: e.g. in python-like notation,

def liar():
    # halts is the hypothetical halting decider we assumed to exist.
    if halts(liar):
        # claimed to halt: do the opposite of our own (alleged) answer
        return not liar()
        # or we could do an infinite loop
    else:
        # claimed not to halt: halt immediately
        return True

The more complicated argument is essentially just trying to do this directly without appealing to the recursion theorem. That is, it's repeating a recipe for constructing "self-referential" functions. e.g. given a Turing machine T, here is one such recipe for constructing an R satisfying

R(x) = T(R; x)

First, define

S(M; x) = T(M(M; -); x)

where by M(M; -), what I really mean is that we compute (using the description of M) and plug in a description of a Turing machine that, on input y, evaluates M(M; y).

Now, we observe that if we plug S into itself

S(S; x) = T(S(S; -); x)

we get the duplication we want. So if we set

R = S(S; -)

then we have

R(x) = T(R; x)

as desired.
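In Python, where machines are just functions and "plugging in a description" becomes passing a callable, the same recipe can be sketched as follows (an informal rendering, not the formal Turing-machine construction):

def make_R(T):
    # S(M, x) = T(M(M, -), x): build the function y -> M(M, y) and hand it to T.
    def S(M, x):
        return T(lambda y: M(M, y), x)
    # R = S(S, -), so R(x) = S(S, x) = T(lambda y: S(S, y), x) = T(R, x).
    return lambda x: S(S, x)

Applied to a suitable T, make_R yields a function that can invoke (a copy of) itself on any input, which is exactly what the liar construction above needs.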


The first paragraph does not match the theorem you cite, which I know by the name of s-m-n theorem.
Raphael

@Raphael: It's called the recursion theorem in my textbook. :( My brief attempt at google failed to turn up any alternative names.

No worries; maybe I understand you wrong, or there are different names for the same thing. That said, your sentence "Turing machines can use their own description" is not supported by the theorem you quote. In fact, I think it's wrong: if the function a TM computes depended on its index, what would all the infinitely many TMs that compute the same function look like?
Raphael

Sorry, not following. Shouldn't T be a universal TM? Also, why does liar return True in the else case? Is it supposed to answer the question "does 'liar' halt?"? If so, why is it ok for it to return not liar() in the first case? Shouldn't it be False (or an infinite loop)?
user118967

@user: Nope: you've got the quantifiers wrong. The theorem is "for every T, there exists an R such that R(x)=T(R;x)". You are thinking of "there exists a T such that for every R, R(x)=T(R;x)".

5

the Turing proof is quite similar to Cantor's proof that the cardinality of the reals ("uncountable") is larger than the cardinality of the rationals ("countable") because they cannot be put into 1-1 correspondence, but this is not noted/emphasized in very many references (does anyone know any?). (iirc) a CS prof once showed this years ago in class (not sure where he got it himself). in Cantor's proof one can imagine a grid with the horizontal dimension being the nth digit of the number and the vertical dimension the nth number of the set.

the Turing halting proof construction is quite similar except that the contents of the table are Halt/Nonhalt instead of 1/0, the horizontal axis is the nth input, and the vertical axis is the nth computer program. in other words the combinations of computer programs and inputs are countable but the infinite table/array is uncountable, based on a universal machine simulator construction that can "flip" a halting to a nonhalting case assuming a halting detector machine exists (hence reductio ad absurdum).

some evidence that Turing had Cantor's construction partly in mind is that his same paper with the halting proof talks about computable numbers as (along the lines of) real numbers with computable digits.


addendum: there is indeed a very "intuitive" way to view undecidability but it requires a lot of higher math to grasp (i.e. the intuition of a neophyte is much different from the intuition of an expert). mathematicians do consider the halting problem and Gödel's theorem to be essentially identical proofs via a Lawvere fixed point theorem, but this is an advanced fact not very accessible to undergraduates "yet". see "halting problem, uncomputable sets, common math problem?" on Theoretical Computer Science and also the linked post for refs
vzn

3

At this point it is worth noting the work of Emil Post, who is (justly) credited with being a co-discoverer of the basic results of computability, though sadly his work was published too late for him to be considered a co-discoverer of the solution to the Entscheidungsproblem. He certainly participated in the elaboration of the so-called Church-Turing thesis.

Post was motivated by very philosophical considerations, namely the theoretical limitations of the human ability to compute, or even to get precise answers in a consistent manner. He devised a system, now called Post canonical systems, the details of which are unimportant, which he claimed could be used to solve any problem that can be solved solely by the manipulation of symbols. Interestingly, he explicitly considered mental states to be part of the "memory", so it is likely that he at least considered his model of computation to be a model of human thought in its entirety.

The Entscheidungsproblem considers the possibility of using such a means of computation to, say, determine the theoremhood of any proposition expressible in the system of the Principia Mathematica. But the PM was a system explicitly designed to be able to represent all of mathematical reasoning and, by extension (at least at the time, when Logicism was still in vogue), all of human reasoning!

It is therefore very unsurprising to turn the attention of such a system to the Post canonical systems themselves, just as the human mind, via the works of Frege, Russell and the logicians of the turn of the century, had turned its attention to the reasoning faculty of the human mind itself.

So it is clear at this point, that self-reference, or the ability of systems to describe themselves, was a rather natural subject in the early 1930s. In fact, David Hilbert was hoping to "bootstrap" mathematical reasoning itself, by providing a formal description of all of human mathematics, which then could be mathematically proven to be consistent itself!

Once the step of using a formal system to reason about itself is obtained, it's a hop and a skip away from the usual self-referential paradoxes (which have a pretty old history).

Since all the statements in Principia are presumed to be "true" in some metaphysical sense, and the Principia can express

program p returns result true on input n

if a program exists to decide all theorems in that system, it is quite simple to directly express the liar's paradox:

this program always lies.

can be expressed by

The program p always returns the opposite of what the principia mathematica say p will return.

The difficulty is building the program p. But at this point, it's rather natural to consider the more general sentence

The program p always returns the opposite of what the PM say q will return.

for some arbitrary q. But it's easy to build p(q) for any given q! Just compute what PM predicts it will output, and return the opposite answer. We can't just replace q by p at this point though, since p takes q as input, and q does not (it takes no input). Let's change our sentence so that p does take input:

The program p returns the opposite of what PM says q(r) will return.

Arg! But now p takes 2 pieces of input: q and r, whereas q only takes 1. But wait: we want p in both places anyways, so r is not a new piece of information, but just the same piece of data again, namely q! This is the critical observation.

So we finally get

The program p returns the opposite of what PM says q(q) will return.

Let's forget about this silly "PM says" business, and we get

The program p(q) returns the opposite of what q(q) will return.

This is a legitimate program provided we have a program that always tells us what q(q) returns. But now that we have our program p(q), we can replace q by p and get our liar's paradox.
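In Python-like notation (a sketch only; result_of(q, r) stands for the assumed program that tells us what q(r) returns):

def p(q):
    return not result_of(q, q)   # return the opposite of whatever q(q) returns

Feeding p to itself, p(p) returns the opposite of what p(p) returns, and the liar's paradox reappears.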
