The reason for writing a version without any real context management was to learn how important
it is in practice to avoid the check-all-possible-split-ups step and to have a model implementation
that can be consulted while working on approaches that are more complex to program.
The nice thing about not having to care about any leftover formulas (as it is simply a failure when that happens)
is that no explicit sequentialization is needed in cases when two subproofs have to be found. The \verb|MonadPlus| abstraction along
with operators from the \verb|Control.Applicative| module can shine here. We can also write the whole proof search as
a couple of functions defined by mutual induction (which often is a big modularity no-no).
As an illustration, we present the procedure for performing inversion on the right from the full focusing prover.
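To give a flavor of this style, here is a minimal sketch on a toy fragment (the types and names below are ours for illustration, not the prover's actual code): when inverting a negative conjunction on the right, the two subproof searches are independent and can be combined applicatively in the list (\verb|MonadPlus|) instance, with no explicit sequencing.

```haskell
import Control.Applicative (empty)

-- Toy formulas and proof terms (illustrative only, not the prover's types).
data Formula = Atom String | Formula :&: Formula | Top
data Proof   = AtomP String | WithR Proof Proof | TopR
  deriving (Eq, Show)

-- Right inversion on the negative fragment: the two premises of A & B
-- are searched independently and combined with <$> and <*>; failure is
-- `empty`, alternatives are the nondeterminism of the list monad.
invertRight :: [String] -> Formula -> [Proof]
invertRight _    Top       = pure TopR
invertRight hyps (a :&: b) = WithR <$> invertRight hyps a <*> invertRight hyps b
invertRight hyps (Atom x)  = if x `elem` hyps then pure (AtomP x) else empty
```

For example, \verb|invertRight ["p","q"] (Atom "p" :&: Atom "q")| finds the single proof \verb|WithR (AtomP "p") (AtomP "q")|.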
\subsection{Architecture of NaiveLS, WeakLS and FullLS}
We have implemented a basic version of lazy splitting, where judgments are annotated by input ($\Delta_I$)
and output resources ($\Delta_O$):
\[ \Gamma, \Delta_I / \Delta_O \to J \]
Intuitively, the output resources are the resources that have not been used in the derivation.
To account for this modification of judgments we do not have to alter the representation of
proofs: everything can be handled internally in the proof search algorithm. Each function in the prover receives
the current input resources as a parameter, and the output resources are returned along with the proof.
The NaiveLS prover is developed from the prover without resource management, so it is just an enhanced version.
To illustrate the difference, let us look at the application of the $\tensor$ right rule.
First, we note that the version without resource management tries all possible ways to split the linear context...
applyRightRule k pctx ctx (p `Tensor` q) =
  msum [ TensorR <$> proofSearch k pctx ctx1 p
                 <*> proofSearch k pctx ctx2 q
       | (ctx1, ctx2) <- splits ctx ]  -- splits: all two-way splits of ctx
...while the version with lazy splitting simply passes the input and output resources to be handled in the recursive calls and the final continuation:
applyRightRule k pctx ctx (p `Tensor` q) cont = do
  proofSearch' k pctx ctx p $ \(prfA, restA) -> do
    proofSearch' k pctx restA q $ \(prfB, restB) -> do
      cont (TensorR prfA prfB, restB)
Both provers are written in a flavor of continuation-passing style, but the second one has to adhere to a stricter CPS discipline.
The reason is that the enhanced algorithm needs to compute the subproofs in a sequential way,
while in the old algorithm these computations are independent (and could even be executed in parallel).
We use only a basic version of lazy splitting, so our provers do not have full resource management
and for some connectives the algorithm is exponential. The $\top$ right rule is
the main case in which the simplest idea fails: we have to try all possible ways of selecting the output resources.
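The exponential blow-up can be sketched as follows (again with illustrative names, not the prover's actual code): since $\top$ may consume any subset of the input resources, every subsequence of $\Delta_I$ is a candidate output context $\Delta_O$.

```haskell
import Control.Monad (MonadPlus, msum)
import Data.List (subsequences)

data Proof = TopR deriving (Eq, Show)

-- Sketch of the Top right rule without full resource management:
-- the rule may consume any subset of the input resources, so every
-- subsequence of Delta_I is tried as the output context Delta_O.
-- This is the source of the exponential behaviour.
applyTopRight :: MonadPlus m => [r] -> ((Proof, [r]) -> m a) -> m a
applyTopRight deltaI cont =
  msum [ cont (TopR, deltaO) | deltaO <- subsequences deltaI ]
```

In the list monad, \verb|applyTopRight [1,2,3] pure| produces all $2^3 = 8$ candidate output contexts, one per subset of the input.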
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Better resource management}
The results of benchmarking suggest that it would certainly be worthwhile to implement the improved resource management
system described in the lecture notes. We leave that as a future exercise.
\subsection{Parallelization of the prover}
We tried to use modern Haskell features for parallelization \cite{Marlow}, but we did not achieve satisfying results.