# Dialogical logic resolution algorithm with sound negation

To continue on dialogical logic programming, I am going to present an algorithm, similar to SLD-resolution, that forms a symmetric game between the prover and the refuter.

This is very early work in progress. I may have to go on for months before I find out whether this is correct.

Using game semantics as a foundation for logic programming is not a new idea. In 1998, Di Cosmo, Loddo and Nicolet published "A game semantics foundation for logic programming". The game they presented was asymmetric: it treated disjunction as the player's game and conjunction as the opponent's game.

I present an algorithm which is completely symmetric. The proof procedure doesn't assume any side at all.

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Popular games in Herbrand Universe   ┃
┠──────────┬─────────────┬─────────────┨
┃          │ disjunctive │ conjunctive ┃
┃ unify    │ _ = _       │ _ ≠ _       ┃
┃ parallel │ _ ∨ _       │ _ ∧ _       ┃
┃ choice   │ _ ⊔ _       │ _ ⊓ _       ┃
┃ simple   │ true        │ false       ┃
┗━━━━━━━━━━┷━━━━━━━━━━━━━┷━━━━━━━━━━━━━┛

The formulas are treated as games that are constructed from smaller games. There are two players: the prover (disjunctive) and the refuter (conjunctive). Turns in the game are determined by the logical connective currently examined.

## Variables

Variables are indexed. Variables with an even index are treated as disjunctive variables and the odd-indexed variables are treated as conjunctive variables.

Variables must implement coroutining.

Variables introduce themselves through quantification. Quantifiers are discarded after the variables have been properly indexed.

∃A ∃B ∀C ∀D ∃E ∀F. A = B ∧ C = D ∧ E = F

⊟ ⊞ ⊟ ⊞ A₀ = B₀ ∧ C₁ = D₁ ∧ E₂ = F₃
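The indexing pass could be sketched like this (a hypothetical Python rendering, not an actual implementation, with the quantifier prefix as `('E', name)`/`('A', name)` pairs):

```python
# Sketch: assign alternation indices to a quantifier prefix.
# Even indices mark disjunctive (prover's) variables, odd indices
# conjunctive (refuter's) ones; the index bumps on each alternation.
def index_variables(prefix):
    """prefix: list of (quantifier, name) pairs, 'E' for ∃ and 'A' for ∀.
    Returns a {name: index} mapping."""
    indices = {}
    current = None
    index = None
    for quantifier, name in prefix:
        if quantifier != current:
            if index is None:
                # ∃ starts on an even index, ∀ on an odd one
                index = 0 if quantifier == 'E' else 1
            else:
                index += 1
            current = quantifier
        indices[name] = index
    return indices
```

On the prefix above, `∃A ∃B ∀C ∀D ∃E ∀F`, this yields A₀ B₀ C₁ D₁ E₂ F₃.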

The scoping markers are left in the expressions to ensure we index the instantiations correctly.

It is not certain whether we need all of these markers, but every time a scope marker alternates successfully, we increment the scope index.

⊟ X       {scope=0}
───────────────────
X         {scope=0}

⊞ X       {scope=1} ⊡
───────────────────
X         {scope=1} ⊡

⊞ ⊟ ⊞ X   {scope=0}
───────────────────
⊟ ⊞ X     {scope=1} ⊡

⊞ X       {scope=2} ⊡ ⊡
───────────────────
X         {scope=3} ⊡ ⊡ ⊡
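The marker rules above could be sketched as follows (hypothetical Python, where `'D'` and `'C'` stand for ⊟ and ⊞, and the emitted ⊡ tokens are merely counted):

```python
# Sketch: discharge a prefix of scope markers against the current scope.
# 'D' is the disjunctive marker ⊟ (at home on even scopes),
# 'C' the conjunctive marker ⊞ (at home on odd scopes).
def consume_markers(markers, scope):
    """Returns (new_scope, alternations); each successful alternation
    increments the scope index and leaves one ⊡ token behind."""
    alternations = 0
    for marker in markers:
        parity = 0 if marker == 'D' else 1
        if scope % 2 != parity:
            scope += 1           # marker alternates: bump the scope index
            alternations += 1    # and record a ⊡
    return scope, alternations
```

Chaining the single-marker steps in the rules above gives the same result as consuming the whole prefix at once.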

Choice operators might also have to be treated like a scope. This doesn't have much function in the algorithm, but I think it is the correct thing to do if we consider variables to be choice-quantified.

## Clause normalisation

Before the rules are inserted into the program, negations can be eliminated. This is done by applying De Morgan duality to the rules.

¬(A op B) = ¬A ¬op ¬B

Each rule and its dual is inserted into the program. The dual can be obtained by negating the body and incrementing every variable index once.

For example, the member rule is written as:

member(X₀, [Y₀|T₀]) :- ⊟ X₀ = Y₀ ⊔ member(X₀, T₀)

¬member(X₁, [Y₁|T₁]) :- ⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)
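A sketch of the dualisation pass, assuming a hypothetical tagged-tuple representation for terms (`('var', name, index)`, `('call', predicate, args)`, and connectives or constants as tuples such as `('∧', a, b)` and `('true',)`):

```python
# Sketch: De Morgan dualisation of a rule body. Connectives map to
# their duals, predicate calls are negated, and every variable index
# is incremented once to flip its side.
DUAL = {'=': '≠', '≠': '=', '∧': '∨', '∨': '∧',
        '⊓': '⊔', '⊔': '⊓', 'true': 'false', 'false': 'true'}

def dualize(term):
    kind = term[0]
    if kind == 'var':
        _, name, index = term
        return ('var', name, index + 1)   # flip the variable's side
    if kind == 'call':
        _, predicate, args = term
        negated = predicate[1:] if predicate.startswith('¬') else '¬' + predicate
        return ('call', negated, [dualize(a) for a in args])
    if kind in DUAL:                      # connective or constant
        return (DUAL[kind],) + tuple(dualize(a) for a in term[1:])
    return term                           # other atoms stay as they are
```

Applied to the body of the member rule, `X₀ = Y₀ ⊔ member(X₀, T₀)` becomes `X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)`, as above.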

The instantiation of these rules uses unification, as before. Let's take another example of this here.

member(1, [3,2,4]) {scope=0}
─────────────────────────────────────────────────────────
X₀ = 1 ∧ [Y₀|T₀] = [3,2,4] ∧ (⊟ X₀ = Y₀ ⊔ member(X₀, T₀))

¬member(1, [3,2,4]) {scope=1}
──────────────────────────────────────────────────────────
X₁ ≠ 1 ∨ [Y₁|T₁] ≠ [3,2,4] ∨ (⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁))

## Unification games

Unification is a game where a player can constrain or bind his variables.

If the constraint is satisfied without effort, the player whose turn it is gets to win immediately.

a = a
─────
true

a ≠ a
─────
false

Compound expressions are rewritten.

f(x0 ... xn) = f(y0 ... yn)
───────────────────────────
x0 = y0 ∧ ... ∧ xn = yn

f(x0 ... xn) ≠ f(y0 ... yn)
───────────────────────────
x0 ≠ y0 ∨ ... ∨ xn ≠ yn
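These rewriting rules might be sketched as below, with terms as hypothetical `(functor, args)` pairs; a zero-ary term such as `('a', [])` is an atom, which gives the `a = a` and `a ≠ a` cases above. Handling a functor clash as an outright win or loss is my assumption here, not stated in the rules.

```python
# Sketch: rewrite a compound (dis)equation into a parallel game.
def decompose(op, left, right):
    """op is '=' or '≠'; left/right are (functor, args) pairs."""
    (lf, largs), (rf, rargs) = left, right
    if lf != rf or len(largs) != len(rargs):
        return 'false' if op == '=' else 'true'   # functor clash
    if not largs:
        return 'true' if op == '=' else 'false'   # a = a / a ≠ a
    join = '∧' if op == '=' else '∨'
    game = (op, largs[0], rargs[0])
    for l, r in zip(largs[1:], rargs[1:]):
        game = (join, game, (op, l, r))           # fold the argument pairs
    return game
```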

Variables are reordered in unification such that the variable with lower index is first.

If the higher-indexed variable is not the player's variable, the player loses the game. This happens because the opponent constrains the variable.

X₀ = Y₁
─────── Y₁{X₀ ≠ Y₁}
false

X₁ ≠ Y₂
─────── Y₂{X₁ ≠ Y₂}
true

X₀ ≠ Y₁{C}
────────── Y₁ := X₀
false ∨ C

X₁ = Y₂{C}
────────── Y₂ := X₁
true ∧ C

When a variable is bound, the frozen goals it had are reintroduced. Before that, though, the occurrence check must be done. Along with the occurrence check, the variables that have a higher index must be renamed to the index of the base variable that gets bound. Even here, the player cannot bind variables that are not his.

X₂ := f(Y₀, Z₂, W₄)
~~~~~~~~~~~~~~~~~~~ W₄ := W₂
true

X₂ := f(Y₀, Z₂, U₃)
~~~~~~~~~~~~~~~~~~~ U₃ := U₂ {cannot be done!}
false
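The index-renaming part of the binding step could be sketched like this (hypothetical term representation; the occurrence check itself is left out of the sketch):

```python
# Sketch: the renaming part of binding X_i := term. Variables in the
# term with an index above i are renamed down to i, but only when they
# lie on the binding player's side of the parity.
def rename_indices(term, i):
    """Returns the renamed term, or None when the binding is illegal."""
    kind = term[0]
    if kind == 'var':
        _, name, j = term
        if j <= i:
            return term
        if j % 2 != i % 2:
            return None                  # opponent's variable: cannot bind
        return ('var', name, i)
    if kind == 'compound':               # ('compound', functor, args)
        _, functor, args = term
        renamed = [rename_indices(a, i) for a in args]
        if None in renamed:
            return None
        return ('compound', functor, renamed)
    return term
```

For `X₂ := f(Y₀, Z₂, W₄)` this renames W₄ to W₂, while U₃ makes the binding fail, matching the two examples above.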

The bindings of each player need to be differentiated into separate stacks for backtracking.

## Parallel games

A parallel game is constructed from two subgames that are played in parallel. The player whose turn it is wins the game if he wins either sub-game.

The player chooses which game he focuses on winning. This means that a backtracking point is created for choosing the other game. The backtracking point is removed if the player loses his own game.

true ∨ B
────────
true

false ∨ B
───────── {backtracking point removed}
B

───────── {backtracking point invoked}
?{B}

?{true}
─────────
true

?{false}
───────── {move favors the opponent, stop}
0

The procedure does not essentially change when the sides are changed:

false ∧ B
─────────
false

true ∧ B
───────── {backtracking point removed}
B

───────── {backtracking point invoked}
!{B}

!{false}
─────────
false

!{true}
───────── {move favors the opponent, stop}
1

We can prune the backtracking point when the clause completely fails, because if all the clauses are false together, then they cannot be true individually.

However, we prune the backtracking point sooner than that, as we can see by examining the truth table.

0 ∨ 0 = 0
0 ∨ 1 = 1
1 ∨ 0 = 1
1 ∨ 1 = 1

Every false we derive adds to the opponent's constraints. Every true we derive adds to our constraints. Skipping the refutation on the left side would not seem to improve the situation.

## Choice games

A choice game is constructed just like a parallel game, but it represents a game where the player chooses one of the sub-games and the game continues in that one.

This forms a biased backtracking point: the variables that the opponent binds inside the scope are kept if the player decides to backtrack.

true ⊔ B
───────── {record a backtracking point}
true

false ⊔ B
───────── {backtrack variables}
B

The rules should also make sense when the sides are changed.

false ⊓ B
───────── {record a backtracking point}
false

true ⊓ B
───────── {backtrack variables}
B
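The biased backtracking could be sketched as below, assuming each player's bindings live on their own trail as described in the unification section:

```python
# Sketch: a disjunctive choice game. On backtracking, only the
# choosing player's own trail is unwound, so the opponent's bindings
# made inside the scope are kept.
def play_choice(play, left, right, own_trail, choicepoints):
    mark = len(own_trail)
    result = play(left)
    if result == 'true':
        choicepoints.append((mark, right))   # record a backtracking point
        return 'true'
    del own_trail[mark:]                     # unwind only our own bindings
    return play(right)                       # the opponent's are kept
```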

If the player loses, then backtracking is forced. The rationale for this is that the situations where 'intentional losing' makes sense are already encoded into the parallel games.

## Simple games

A simple game is, well, simple: the player whose turn it is wins the game.

When the algorithm proceeds, it eventually reduces the game into a simple game, given that the game itself is computable.

## Resolution

If we get the result true, and there are conjunctive choicepoints, we should backtrack with one such choicepoint. If we get the result false, and there are disjunctive choicepoints, we should backtrack with one such choicepoint. The rationale is that the player would avoid making a decision that leads to him losing.

When the result no longer contains any conflicting choicepoints, we have found a strategy that an opponent cannot counter.
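The outer loop might be sketched like this, with `play` standing for the whole game reduction and one choicepoint list per player; how a choicepoint restores bindings and the remaining game is glossed over here:

```python
# Sketch of the outer resolution loop: whichever side lost the round
# gets to retry one of its remaining choicepoints, until the losing
# side has none left.
def resolve(play, game, prover_points, refuter_points):
    result = play(game)
    while True:
        losers = refuter_points if result == 'true' else prover_points
        if not losers:
            return result          # no conflicting choicepoints remain
        result = play(losers.pop())
```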

Iterating through our own choicepoints would seem to give more answers.

If we find a strategy to win the game, it ought to mean that we will not find a way for the opponent to win it. This is something that should be proven somehow; I'm not sure how I'd do it.

## Conclusions

It is also easy to see that negation-as-failure works just fine if the head proposition is ground.

However, all the imperative features seem to have become poisonous. For example, the cut could be disastrous because the search may alternate between 'true' and 'false' results, and results obtained after such an alternation have an entirely different meaning from the first ones.

Mind you, we might not need any of that anymore. The algorithm enables interactive programming with pure logic:

import sdl

:-
@ sdl.init([sdl.init_everything]) = Result:
((Result = 0, (@ print("cannot initialize SDL")));
Result != 0, (@ sdl.quit)).

The search algorithm described here would seem to produce a successful evaluation strategy for the query.

Though we would of course want to consider trying it with something that ought to fail and see if it really does.

## Examples

Let's first consider a formula such as:

∀Result. (Result = 0 ∧ false) ⊔ (Result ≠ 0 ∧ true)

It's like the earlier program formula, but we pretend there is a failure in the first branch. I'm using a Prolog-style syntax here for a change.

Result₁ = 0, false; Result₁ != 0, true
------------------------------------------ {choicepoint 1 ∧ }
false, false; Result₁{!=0} != 0, true
------------------------------------------
false; Result₁{!=0} != 0, true
------------------------------------------
Result₁{!=0} != 0, true
------------------------------------------ {choicepoint 2 ∧ }
true, true
------------------------------------------ choicepoint 2 lost.
true

I'm disregarding the prover's choicepoint there as there are no disjunctive variables that'd need treatment here.

The refuter's second choicepoint was lost because the conjunction succeeded.

There's still the first choicepoint that we can apply.

------------------------------------------ {at choicepoint 1 ∧ }
!{false}; Result₁ != 0, true
------------------------------------------
Result₁ != 0, true
------------------------------------------ {choicepoint 3 ∧ }
false, true
------------------------------------------
false                                      Result₁ = 0

------------------------------------------ {at choicepoint 3 ∧ }
!{true}
------------------------------------------
1

Due to the 'false' statement, we obtain a strategy to refute the clause. It looks like we got an empty choicepoint there as well, but it does not cause any problems.

Next we try the example from earlier. This should be interesting, although there are no parallel rules here.

¬member(X₁, [Y₁|T₁]) :- ⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)

¬member(1, [3,2,4|A₀])
-------------------------------------------------------------
X₁ ≠ 1 ∨ [Y₁|T₁] ≠ [3,2,4|A₀] ∨ (⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)) ⊡
-------------------------------------------------------------
false ∨ [Y₁|T₁] ≠ [3,2,4|A₀] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
Y₁ ≠ 3 ∨ T₁ ≠ [2,4|A₀] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ T₁ ≠ [2,4|A₀] ∨ (⊞ 1 ≠ 3 ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ (⊞ 1 ≠ 3 ⊓ ¬member(1, [2,4|A₀])) ⊡
-------------------------------------------------------------
1 ≠ 3 ⊓ ¬member(1, [2,4|A₀]) ⊡
-------------------------------------------------------------
true ⊓ ¬member(1, [2,4|A₀]) ⊡
-------------------------------------------------------------
¬member(1, [2,4|A₀]) ⊡
-------------------------------------------------------------
X₁ ≠ 1 ∨ [Y₁|T₁] ≠ [2,4|A₀] ∨ (⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)) ⊡
-------------------------------------------------------------
false ∨ [Y₁|T₁] ≠ [2,4|A₀] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
Y₁ ≠ 2 ∨ T₁ ≠ [4|A₀] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ T₁ ≠ [4|A₀] ∨ (⊞ 1 ≠ 2 ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ (⊞ 1 ≠ 2 ⊓ ¬member(1, [4|A₀])) ⊡
-------------------------------------------------------------
1 ≠ 2 ⊓ ¬member(1, [4|A₀]) ⊡
-------------------------------------------------------------
true ⊓ ¬member(1, [4|A₀]) ⊡
-------------------------------------------------------------
¬member(1, [4|A₀]) ⊡
-------------------------------------------------------------
X₁ ≠ 1 ∨ [Y₁|T₁] ≠ [4|A₀] ∨ (⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)) ⊡
-------------------------------------------------------------
false ∨ [Y₁|T₁] ≠ [4|A₀] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
Y₁ ≠ 4 ∨ T₁ ≠ A₀ ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ T₁ ≠ A₀ ∨ (⊞ 1 ≠ 4 ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
false ∨ (⊞ 1 ≠ 4 ⊓ ¬member(1, A₀)) ⊡
-------------------------------------------------------------
1 ≠ 4 ⊓ ¬member(1, A₀) ⊡
-------------------------------------------------------------
true ⊓ ¬member(1, A₀) ⊡
-------------------------------------------------------------
¬member(1, A₀) ⊡
-------------------------------------------------------------
X₁ ≠ 1 ∨ [Y₁|T₁] ≠ A₀ ∨ (⊞ X₁ ≠ Y₁ ⊓ ¬member(X₁, T₁)) ⊡
-------------------------------------------------------------
false ∨ [Y₁|T₁] ≠ A₀ ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
[Y₁|T₁] ≠ A₀ ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
------------------------------------------------------------- {∨ choicepoint}
true ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
true                                                          A₀{≠[Y₁|T₁]}

No conjunctive choicepoints there, which means we got a result. We can get subsequent results by applying the disjunctive choicepoint in the end.

?{(⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁))} ⊡
------------------------------------------------------------- {∧ choicepoint}
?{(⊞ false ⊓ ¬member(1, T₁))} ⊡
-------------------------------------------------------------
?{false} ⊡
-------------------------------------------------------------
0 ⊡

The rules used in the parallel operations seem to work perfectly. Believe it or not, I figured out the !{}/?{} construct while writing this post, and it made things work much better than the first version without the rule.

Update: actually this result looks wrong. If A₀ were later bound to a list, there would be no backtracking point to recover with. The rules for the parallel operators may still need refinement.

The solution might also be something like gobbling up the disjunctive rules into constraint rules, e.g.

[Y₁|T₁] ≠ A₀ ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
------------------------------------------------------------- {∨ choicepoint}
true ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁)) ⊡
-------------------------------------------------------------
true                                                          A₀{A₀ ≠ [Y₁|T₁] ∨ (⊞ 1 ≠ Y₁ ⊓ ¬member(1, T₁))}

This could be rationalized with the deep inference switching rule:

(A ∨ B) ∧ C
─────────── switch
(A ∧ C) ∨ B

## Unexplored ideas

A logic programming system supports induction and coinduction in theory. Inductive reasoning requires the occurrence check and recursion. Coinductive reasoning requires the occurrence check to be left out. Better handling of these concepts might allow the system to be slightly more useful.

Andorra Prolog has a neater form of cut rule that might be more compatible with this algorithm. They also have methods to select clauses for eliminating large chunks of the search space.

Linear logic programming languages treat the multiplicative connectives differently. They treat them as resources that must be consumed. I like the idea.

Constraint logic programming probably needs some consideration here. Attributed variables are an imperative construct as well, and creating constraint solvers for a more complex language such as this one may run into the same problems as the cut rule.

The exploration into deep inference that I have had going on suggests there exists a calculus that can be extracted from linear continuations in lambda calculus by closing off the ends of the functions. Composition would become an equality operation, e.g. (1 -> a) ≡ (a -> 0) ⊢ (1 -> 0). A calculus formed on this basis might fit well with logic programming.