CTP Theory is my attempt at creating an AGI. If you aren't already familiar with it, read this blog post, or else the rest of this post won't make sense.
In CTP Theory as it is now, a problem is created whenever there are two directly contradictory claims, and the set of theories involved in a problem includes every theory involved in either claim's lineage. This framework is good for capturing factual contradictions (more specifically, contradictions between two statements of fact), and for tracking exactly which theories are involved in the contradiction. However, I worry that not all problems can be boiled down to factual contradictions.
Consider the problem "I am hungry". As far as I can tell, this problem cannot be broken down into a contradiction between two statements of fact. It could perhaps be understood as a conflict between two statements, something like "I am hungry" and "I have no way to satiate my hunger", but the conflict here is not a factual contradiction. "I am hungry" and "I have no way to satiate my hunger" are not directly contradictory, because they can both be true at the same time, and I don't think there is any way to derive a direct contradiction from them. To derive a direct contradiction from the two of them you'd need to derive "I do have a way to satiate my hunger" from "I am hungry", or "I am not hungry" from "I have no way to satiate my hunger", neither of which is reasonable (technically you could also derive a claim like "X is true" from "I am hungry" and a claim like "X is not true" from "I have no way to satiate my hunger", but I don't see what X could be such that that would be reasonable). So, it seems like there are some problems which CTP Theory doesn't have a way to handle.
The problems which CTP Theory seems to be unable to handle are, I think, problems that involve unfulfilled desires. "I am hungry" is a problem because it represents a desire, to stop the feeling of hunger, that the mind has no way of satisfying. To recognize these types of problems, CTP Theory needs to include a new type of problem, or at least a new way of detecting problems.
A solution to this problem might also be a solution, or a part of one, to one of the most important open questions about CTP Theory: When a variation (or a set of multiple variations) that solves an A-Problem is found, how does the A-Mind determine whether or not to accept the variation(s)? If the mind has a way to represent desires, and recognize unfulfilled desires as problematic, then a variation that solves a problem could be considered acceptable if the variation does not create any unfulfilled desires.
This new type of problem would represent a pretty fundamental change in CTP Theory. Without this new type of problem, a theory can only become problematic by creating a claim that contradicts another claim, but with the new type of problem, a theory can become problematic by failing to create a certain claim. This is an important difference, because without this new type of problem, reducing the number of claims that a theory produces couldn't possibly cause a problem. The clearest demonstration of this is that you can solve any problem with a theory by making that theory undefined for all inputs (which is essentially equivalent to getting rid of the theory entirely). In fact, you could solve any problem by randomly selecting one of the involved theories, and changing it so that it was undefined for all inputs. This means that if a mind that worked according to CTP Theory understood the theory of general relativity and quantum mechanics, and it noticed that they were contradictory, it might just say "Alright, I guess I'll throw out quantum mechanics". Clearly this isn't a reasonable way to handle things, because quantum mechanics is a really nice theory that explains a lot of things. But without a desire to explain the things that quantum mechanics explains, a mind would have no way of declaring it problematic to throw the theory out. By including the capacity for desire in CTP Theory, we solve this problem.
CTP Theory needs a way to represent desires, and a way to recognize desires as satisfied or unsatisfied. Additionally, the mind needs to be able to come up with new desires on its own, as humans do, so it needs to be able to conjecture theories about desire just as it does theories about facts. I have one incomplete idea for how to accomplish all that, which involves designating a certain type of claim as representing a desire.
Consider the format for A-Claims that consists of a boolean and a list of integers, which I suggested in my initial post about CTP Theory. Under this format, two A-Claims are considered contradictory if they have identical lists of integers, but opposite boolean values. You can think of the list of integers in an A-Claim as representing the content of the claim, while the boolean represents whether the content is true or false. For instance, an A-Claim (True, [0, 9, -2]) could be read "[0, 9, -2] is true", while (False, [0, 9, -2]) could be read "[0, 9, -2] is false", and this way of reading A-Claims makes it clear that these two are contradictory (note: the list of integers [0, 9, -2] does not inherently mean anything, but would have some meaning within the context of an A-Mind which had A-Theories that would interpret the list).
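As a concrete sketch of this format, here's how an A-Claim and its contradiction check might look in code. The names (`AClaim`, `contradicts`) are my own choices for illustration, not part of any specification:

```python
# Hypothetical sketch of the A-Claim format: a truth value plus a list of
# integers. The integer list is the claim's content; it has no inherent
# meaning outside an A-Mind whose A-Theories interpret it.
from typing import NamedTuple, Tuple

class AClaim(NamedTuple):
    truth: bool            # True = "content is true", False = "content is false"
    content: Tuple[int, ...]  # the claim's content, stored as a tuple of ints

def contradicts(a: AClaim, b: AClaim) -> bool:
    """Two A-Claims contradict iff they have identical content but opposite truth values."""
    return a.content == b.content and a.truth != b.truth

# (True, [0, 9, -2]) and (False, [0, 9, -2]) are directly contradictory:
assert contradicts(AClaim(True, (0, 9, -2)), AClaim(False, (0, 9, -2)))
# Same truth value, or different content, is not a contradiction:
assert not contradicts(AClaim(True, (0, 9, -2)), AClaim(True, (0, 9, -2)))
assert not contradicts(AClaim(True, (0, 9, -2)), AClaim(False, (1, 2, 3)))
```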
We can expand this format to incorporate the capacity for desire by changing the boolean, which has two possible values (true and false), to a variable with three possible values: true, false, and desired. An A-Claim with the "desired" value would represent that the mind desires the content in the list of integers to be true. In the same way that the A-Claim (True, [0, 9, -2]) can be read "[0, 9, -2] is true", an A-Claim (Desired, [0, 9, -2]) could be read "I want [0, 9, -2] to be true".
Whenever an A-Mind contains an A-Claim with the "desired" value, the A-Mind would also contain a corresponding A-Problem unless it also contains an A-Claim that has the same list of integers, but with a "true" value instead of a "desired" value. For instance, if the A-Mind contains an A-Claim (Desired, [0, 9, -2]), and it does not contain another A-Claim (True, [0, 9, -2]), the A-Mind will contain an A-Problem designating that it has an unfulfilled desire. The A-Problem would be solved if some A-Theory in the A-Mind generated an A-Claim with the form (True, [0, 9, -2]), or if the A-Theory that generated the A-Claim with the "desired" value was changed such that the A-Claim no longer existed. In other words, the problem will be solved if the mind finds a way to satisfy the desire, or if it decides that it no longer desires what it used to desire.
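The detection rule above can be sketched as follows. Again, this is a hypothetical illustration: I'm representing the three-valued variable as a small enum, and an unfulfilled-desire A-Problem as simply the desire A-Claim that lacks a matching "true" A-Claim:

```python
# Hypothetical sketch of unfulfilled-desire detection. The truth value is now
# three-valued: TRUE, FALSE, or DESIRED ("I want this content to be true").
from enum import Enum
from typing import NamedTuple, Set, Tuple

class Value(Enum):
    TRUE = "true"
    FALSE = "false"
    DESIRED = "desired"

class AClaim(NamedTuple):
    value: Value
    content: Tuple[int, ...]

def unfulfilled_desires(claims: Set[AClaim]) -> Set[AClaim]:
    """Each DESIRED claim with no TRUE claim of identical content
    corresponds to an unfulfilled-desire A-Problem."""
    satisfied = {c.content for c in claims if c.value is Value.TRUE}
    return {c for c in claims
            if c.value is Value.DESIRED and c.content not in satisfied}

claims = {AClaim(Value.DESIRED, (0, 9, -2)),  # "I want [0, 9, -2] to be true"
          AClaim(Value.DESIRED, (4, 4)),
          AClaim(Value.TRUE, (4, 4))}         # satisfies the second desire
assert unfulfilled_desires(claims) == {AClaim(Value.DESIRED, (0, 9, -2))}
```

Note that an unfulfilled-desire A-Problem only needs to track a single A-Claim, which is what the next paragraph discusses.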
The A-Problems that tracked unfulfilled desire A-Claims (here I'm using "desire A-Claims" as a shorthand for A-Claims which represent desires) would necessarily be different from normal A-Problems that track contradictions. A-Problems that track contradictions between two A-Claims need to keep track of those two A-Claims, and their records that describe how they were created. A-Problems that tracked unfulfilled desires, on the other hand, would only have one A-Claim to track: the A-Claim representing the unfulfilled desire.
When an A-Theory is changed, and the A-Theory contributed to the creation of an A-Claim that satisfied a desire, the change may prevent the A-Theory from creating the satisfying A-Claim. Thus, whenever a variation of an A-Theory was being considered, the A-Mind would need to check whether or not the original A-Theory was involved in the creation of any A-Claims that satisfied a desire A-Claim, and if so it would need to check whether the new A-Theory also created A-Claims that satisfied those desires. If the new version failed to satisfy some of the desires that the old one did, it would be rejected.
However, there's one exception, where an A-Mind should accept a new A-Theory as a replacement for an old A-Theory even if the new one doesn't satisfy a desire that the old one did. It is possible for the same claim to be created by two separate theories (or sets of theories). So, two separate theories could satisfy the same desire, in which case either of them could be varied without having to worry about whether or not the variation still satisfies the desire, because the desire would still be satisfied by the other theory. For instance, if an A-Mind had an A-Claim (Desired, [0, 9, -2]), and it had two separate A-Theories, X and Y, which both produced the A-Claim (True, [0, 9, -2]), X could be varied without needing to check whether or not it would still produce (True, [0, 9, -2]), because Y would still produce it.
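The acceptance check, including this exception, can be sketched in code. As a simplifying assumption of my own, I abstract each theory down to just the set of claim-contents it produces as true, so "other theories" covers everything in the A-Mind besides the one being varied:

```python
# Hypothetical sketch of the variation-acceptance check. A theory is
# abstracted as the set of contents it asserts as true; `desired` holds the
# contents the mind wants to be true.
from typing import Set, Tuple

Content = Tuple[int, ...]

def accept_variation(old_outputs: Set[Content], new_outputs: Set[Content],
                     other_outputs: Set[Content], desired: Set[Content]) -> bool:
    """Accept a variation iff every desire satisfied before the change is
    still satisfied afterwards, whether by the varied theory or by some
    other theory in the A-Mind."""
    satisfied_before = desired & (old_outputs | other_outputs)
    satisfied_after = desired & (new_outputs | other_outputs)
    return satisfied_before <= satisfied_after

desired = {(0, 9, -2)}
# X and Y both produce (True, [0, 9, -2]). Varying X so that it no longer
# produces that claim is acceptable, because Y still satisfies the desire:
assert accept_variation(old_outputs={(0, 9, -2)}, new_outputs=set(),
                        other_outputs={(0, 9, -2)}, desired=desired)
# But if no other theory produces it, the same variation is rejected:
assert not accept_variation(old_outputs={(0, 9, -2)}, new_outputs=set(),
                            other_outputs=set(), desired=desired)
```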
Unfortunately, as I said before, this solution is incomplete: I have two big problems with it.
First, I am not sure that all desires can be represented in the way this solution requires. I gave an example earlier of a mind that worked according to CTP Theory, which understood both general relativity and quantum mechanics, and which might immediately throw out quantum mechanics once it noticed the conflict between the two. This can be prevented if the mind has a desire to explain what quantum mechanics explains. For instance, if the mind wants to answer the question "what are the smallest building blocks of matter?", quantum mechanics can answer that question, so getting rid of quantum mechanics would create an unfulfilled desire. However, can this question, or something equivalent to it, really be represented as a claim, in the way that I've suggested? Claims that represent desires, as I've described them so far, are not really questions, but statements of the form "I want X to be true". However, perhaps desires to answer a question can be rephrased as statements that the mind desires a certain statement to be true. For instance, perhaps a desire to have an answer to "what are the smallest building blocks of matter?" is functionally equivalent to a statement of the form "I want 'I know what the smallest building blocks of matter are' to be true". But is this really a satisfactory way of rephrasing the question, and is it possible to represent all desires to answer questions in this way?
The second problem is a variant of an issue I mentioned earlier: without desire claims, you can solve any problem by randomly selecting one of the involved theories and changing it so that it is undefined for all inputs. Desire claims can prevent this from being an issue, because making a theory undefined for all inputs could cause problems by creating unfulfilled desires. However, there is nothing preventing the mind from solving a desire problem by simply making the theory that made the desire claim undefined for all inputs. Considering again the example of a mind that notices a conflict between general relativity and quantum mechanics, it wouldn't be able to just throw out quantum mechanics if it had a desire to answer the question "what are the smallest building blocks of matter?", but there's nothing preventing it from throwing out that desire (and subsequently throwing out quantum mechanics, because that will no longer cause any unfulfilled desires).
Because of these two problems, this solution is clearly incomplete. However, I think that the general approach - having a certain type of claim that represents a desire which must be satisfied in some way by another claim - is a promising path to explore.