Commits

wjzz committed 611f676

Spellcheck run.


Files changed (1)

doc/final/index.tex

 
 Since the last time, we have concentrated on making sure that our provers are correct and as memory-efficient as possible. 
 In this document we describe two families of provers that we have implemented and present some of the issues that have emerged in the process. 
-Next we show the results of benchmarking and explain our approach to testing the implementations. We conclude by disscussing some ideas
+Next we show the results of benchmarking and explain our approach to testing the implementations. We conclude by discussing some ideas
 that didn't work out for us.
 
 The project repository is available at \\
 
 \paragraph{Weak}
 
-The Weak provers are a bit artificial, we intentionally introduce more nondeterminism into algorithms
-to obtain implementation which strongly differ in behaviour from full-focusing provers.
+The Weak provers are a bit artificial: we intentionally introduce more non-determinism into the algorithms
+to obtain implementations whose behavior differs strongly from that of the full-focusing provers.
 
 The implementation of the weak-focusing prover with lazy splitting was done
 independently of the first prover, so some differences between the two architectures are to be expected.
 
 \subsection{Architecture of Naive, Weak and Full}
 
-The reason for writting a version without any real context management was to learn how important
-it is in practise to avoid the check-all-possible-split-ups step and to have a model implementation
+The reason for writing a version without any real context management was to learn how important
+it is in practice to avoid the check-all-possible-split-ups step and to have a model implementation
 that can be consulted when testing approaches that are more complex to program.
 
 The nice thing about not having to care about any leftover formulas (as it is simply a failure when that happens)
 
 \subsection{Architecture of NaiveLS, WeakLS and FullLS}
 
-We have implemented basic version of lazy splitting, where judgements are annotated by input ($\Delta_I$)
+We have implemented a basic version of lazy splitting, where judgments are annotated with input ($\Delta_I$)
 and output resources ($\Delta_O$):
 
 \[
 \Gamma, \Delta_I / \Delta_O \to J
 \]
 
-Shape of those judgements is handled internally by proof search algorithm, it does not impact
+The shape of these judgments is handled internally by the proof search algorithm; it does not affect
 the proof representation. Each function in the prover takes the input resources as a parameter and returns
 the proof it found together with the output resources.
 
 \end{verbatim}
 \end{small}
 The version without resource management tries all possibilities, while the version with lazy splitting
-just passes input and output resources (both provers are written in flavour of continuation passing style).
+just passes the input and output resources around (both provers are written in a flavor of continuation passing style).
 On the other hand, the new algorithm needs to compute subproofs sequentially, while in the old algorithm
 those computations are independent and may be executed in parallel.
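 To make the resource threading concrete, here is a minimal self-contained sketch of lazy splitting for a toy fragment (atoms and $\tensor$ only); the names are ours, not the repository's:
 \begin{small}
 \begin{verbatim}
 import Control.Monad (MonadPlus, mzero)
 import Data.List (delete)
 
 data Formula = Atom Char | Tensor Formula Formula
   deriving (Eq, Show)
 
 -- prove deltaI goal consumes what the proof of goal needs from
 -- the input resources deltaI and returns the leftovers deltaO.
 prove :: MonadPlus m => [Formula] -> Formula -> m [Formula]
 prove deltaI (Atom a)
   | Atom a `elem` deltaI = return (delete (Atom a) deltaI)
   | otherwise            = mzero
 prove deltaI (Tensor f g) = do
   deltaM <- prove deltaI f  -- first subproof picks its resources
   prove deltaM g            -- the second one gets the leftovers
 
 -- A sequent holds iff some run leaves no resources over.
 proves :: [Formula] -> Formula -> Bool
 proves delta goal = any null (prove delta goal :: [[Formula]])
 \end{verbatim}
 \end{small}
 The \verb|Tensor| case shows the sequential dependency directly: the second subproof has to wait for the leftovers of the first.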
 \begin{small}
 
 \section{Problems we have encountered}
 
-\subsection{MonadPlus nuanses and CPS introduction}
+\subsection{MonadPlus nuances and CPS introduction}
 
 All of our provers are based on the \verb|MonadPlus| type class. It is common to use this 
 interface for writing algorithms with backtracking.
 instances of \verb|MonadPlus| work. All of our provers
 are written against a polymorphic interface, but at the
 beginning we had been
-testing provers only with the \verb|List| datatype instance.
+testing the provers only with the \verb|List| data type instance.
 
 We were startled when some of our provers did not work
-when we switch to the \verb|Maybe| datatype instance.
+when we switched to the \verb|Maybe| data type instance.
 Although our code was written using only the abstract monadic interface,
-it relies on some properties of the \verb|List| datatype instance.
+it relied on some properties of the \verb|List| data type instance.
 We thought that the main difference between those instances
 was that \verb|Maybe| finds only the first solution, while \verb|List| obtains all solutions.
 This turned out to be a false statement. 
 2
 \end{verbatim}
 \end{small}
-The \verb|find_div2| function cannot find any solution for the \verb|Maybe| datatype, because it need support for backtracking from a instance.
-Implementing a \verb|MonadPlus|-based code in favour of continuation
+The \verb|find_div2| function cannot find any solution for the \verb|Maybe| data type, because it needs backtracking support from the instance.
+Implementing \verb|MonadPlus|-based code in the flavor of continuation
 passing style gives us the ability to write backtracking algorithms
 in an instance-independent way.
 \begin{small}
 
 The crucial difference between the direct and the CPS-like implementations is how the computations use
 the \verb|mplus| operator. In the direct style a computation can bind the results of previous
-computations merged by operator. Instance of the \verb|Maybe| datatype merges data by selecting one of those result
+computations merged by the operator. The \verb|Maybe| data type instance merges data by selecting one of the results
 and forgetting about the rest. When the current computation fails for this selection, the whole computation fails, 
 and the other results (which we can understand as choices) are forgotten. 
 
   return x
 \end{verbatim}
 \end{small}
-First line strongly depends on particular implementation of the \verb|mplus| operator, for the \verb|Maybe| datatype
+The first line strongly depends on the particular implementation of the \verb|mplus| operator; for the \verb|Maybe| data type
 it is equal to \verb|x <- return 1|.
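 Since the listings above are abridged here, the following self-contained example shows the same phenomenon (the names are ours, not the ones from our code base):
 \begin{small}
 \begin{verbatim}
 import Control.Monad (MonadPlus, mplus, guard)
 
 findEven :: MonadPlus m => m Int
 findEven = do
   x <- return 1 `mplus` return 2  -- for Maybe: x <- return 1
   guard (even x)                  -- fails for x = 1, no backtracking
   return x
 
 -- findEven :: [Int]      evaluates to [2]
 -- findEven :: Maybe Int  evaluates to Nothing
 \end{verbatim}
 \end{small}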
 
-In the flavour of continuation passing style the \verb|mplus| operator is used to merge
+In the flavor of continuation passing style the \verb|mplus| operator is used to merge the
 final results of the computation; it never merges partial results. If we do a similar reduction in
 the second program we get this code:
 \begin{small}
 
 \end{verbatim}
 \end{small}
-Operator merges only final results of function, what for the \verb|Maybe| datatype  means
+The operator merges only the final results of the function, which for the \verb|Maybe| data type means
 that we get the first positive result of the whole computation.
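 Rewritten in continuation passing style, the small example from the previous subsection succeeds even for \verb|Maybe| (again a sketch with our own names):
 \begin{small}
 \begin{verbatim}
 import Control.Monad (MonadPlus, mplus, mzero)
 
 findEvenCPS :: MonadPlus m => (Int -> m Int) -> m Int
 findEvenCPS k = attempt 1 `mplus` attempt 2
   where attempt x = if even x then k x else mzero
 
 -- findEvenCPS return :: Maybe Int  evaluates to Just 2
 -- findEvenCPS return :: [Int]      evaluates to [2]
 \end{verbatim}
 \end{small}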
 
 \subsection{The copy rule and the positive atom optimization}
 
 For running the benchmarks and graphing the results we have used the criterion library \cite{criterion}. 
 
-All graphs present the times in miliseconds and the benchmarks are in the same top-to-bottom order that they are introduced below.
+All graphs present the times in milliseconds and the benchmarks appear in the same top-to-bottom order in which they are introduced below.
 In all tests we have used the forward chaining version of the focusing calculi.
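 For illustration, a minimal criterion harness in the style we used might look as follows (the prover and the benchmark family are hypothetical stand-ins):
 \begin{small}
 \begin{verbatim}
 import Criterion.Main
 
 data Formula = One | Tensor Formula Formula deriving Show
 
 tensorChain :: Int -> Formula
 tensorChain n = foldr Tensor One (replicate n One)
 
 proveStub :: Formula -> Bool
 proveStub _ = True   -- replace with a real prover entry point
 
 main :: IO ()
 main = defaultMain
   [ bgroup "tensor-chain"
       [ bench (show n) $ whnf proveStub (tensorChain n)
       | n <- [10, 20, 40] ] ]
 \end{verbatim}
 \end{small}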
 
-\subsection{Test cases used for analysing the provers}
+\subsection{Test cases used for analyzing the provers}
 
 \noindent
-We present five of the artificial examples that we have designed for analysing the performance
+We present five of the artificial examples that we have designed for analyzing the performance
 of our implementations. In all cases $n$ is a parameter of the benchmark; it has been adjusted in the following
 test runs so that the graphs would show the trends that we have observed during multiple runs.
 
 
 The first two benchmarking examples are used to illustrate the importance of efficient
 context management. The first two graphs show the difference in performance very clearly.
-What is not visible here, is that the lazy splitting versions seem to scale linearily for
+What is not visible here is that the lazy splitting versions seem to scale linearly for
 those examples.
 
 The middle example (Tensor with zeros) is interesting. In Figure 1, Naive and NaiveLS do not use inversion,
 and require (respectively) a non-deterministic choice of the
 leftover resources and a check that both branches have the same multiset of unused assumptions.
 
-The same argument can propably be used to justify the outcome of the 4th benchmark in both Figures.
+The same argument can probably be used to justify the outcome of the 4th benchmark in both Figures.
 
 The proof in the last example can be found using just the invertible rules. The difference comes from the
 fact that we have implemented an optimized version of the inversion procedure in the FullLS and WeakLS
 provers.
 
 The times for the first example are almost identical: neither inversion nor focusing
 can make any difference here, as at all times there is only one applicable rule.
 
-The full focusing approach is succesfull in the rest of the benchmarks, 
+The full focusing approach is successful in the rest of the benchmarks, 
 mainly because many invertible steps can be taken in them.
 
-The full focusing approach is succesfull in the second benchmark, mainly because of inversion - 
-in the other approaches the right rules are tried first.
-
 \subsubsection{Comparison within the lazy splitting group - Figure 4.}
 
 The full focusing prover always beats its less deterministic version, the weak focusing prover, as we expected,
 better than the FullLS prover at the Tensor commutativity test, which reveals a defect in our implementation.
 The FullLS (and the WeakLS) prover tries to perform all possible inversions after each recursive call, but the
 context is fully decomposed into atoms at the beginning. This means that after each step the whole context would be
-analysed for possible inversions. We can solve this problem by splitting input resources into two sets,
+re-analyzed for possible inversions. We could solve this problem by splitting the input resources into two sets:
 one for hypotheses that have already been inverted and one for those that have not.
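 As a sketch (assuming the provers' formula type, here called \verb|Formula|), the split could look like:
 \begin{small}
 \begin{verbatim}
 -- Only the not-yet-inverted hypotheses need to be scanned
 -- for possible inversions after each step.
 data Context = Context
   { invertedHyps   :: [Formula]  -- already fully inverted; skip
   , uninvertedHyps :: [Formula]  -- still candidates for inversion
   }
 \end{verbatim}
 \end{small}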
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 ~
 
 \noindent
-It is difficult to stress enough how useful these libraries are in practise. 
+It is difficult to stress enough how useful these libraries are in practice. 
 The method is very similar in spirit and assurance to a proof of correctness, but is much cheaper and quicker to use.
 
 To name a few concrete properties that we used in the project: identity expansion, $!A \vdash A$, and commutativity of $\tensor$, $\oplus$ and $\with$.
 To give some anecdotal evidence: we found critical bugs in provers of both flavors after we introduced these tools into our project:
 
 \begin{itemize}
-  \item in the exponential context management flavor, we observed that encapsulating the proof search depth limit using a State monad didn’t do exactly what we thought it would
+  \item in the exponential context management flavor, we observed that encapsulating the proof search depth limit using a State monad did not do exactly what we thought it would
   \item in the lazy splitting version, we understood that our implementation relied on a particular instance
 of the MonadPlus interface. The implications of this were described in one of the previous sections.
 \end{itemize}
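 For example, a \verb|QuickCheck|-style commutativity property could look as follows; the \verb|Formula| type, its generator and the entry point \verb|proves| are hypothetical stand-ins for the provers' real interfaces:
 \begin{small}
 \begin{verbatim}
 import Test.QuickCheck
 
 data Formula = Atom Int
              | Tensor Formula Formula
              | Oplus  Formula Formula
              | With   Formula Formula
              | Lolli  Formula Formula
              | Bang   Formula
   deriving (Eq, Show)
 
 instance Arbitrary Formula where
   arbitrary = sized gen
     where
       gen 0 = Atom <$> choose (0, 3)
       gen n = oneof [ Atom   <$> choose (0, 3)
                     , Tensor <$> sub <*> sub
                     , Oplus  <$> sub <*> sub
                     , With   <$> sub <*> sub
                     , Lolli  <$> sub <*> sub
                     , Bang   <$> sub ]
         where sub = gen (n `div` 2)
 
 proves :: Formula -> Bool
 proves = error "plug in one of the provers here"
 
 -- Commutativity of tensor: A * B |- B * A should always hold.
 prop_tensorComm :: Formula -> Formula -> Bool
 prop_tensorComm a b = proves (Lolli (Tensor a b) (Tensor b a))
 
 -- Dereliction: !A |- A.
 prop_bang :: Formula -> Bool
 prop_bang a = proves (Lolli (Bang a) a)
 \end{verbatim}
 \end{small}
 Running \verb|quickCheck prop_tensorComm| then exercises a prover on 100 random instances by default.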