Commits

Miran Levar committed 273d15d

Added more widget documentation.

Files changed (15)

_multitarget/widgets/OWClusteringRandomForest.py

         self.preprocessor = None
 
         box = OWGUI.widgetBox(self.controlArea, "Classifier/Learner Name")
-            OWGUI.lineEdit(box, self, "name")
+        OWGUI.lineEdit(box, self, "name")
 
-        OWGUI.spin(self.bBox, self, "trees", 1, 1000, orientation="horizontal", label="Number of trees in forest")
+        OWGUI.spin(box, self, "trees", 1, 1000, orientation="horizontal", label="Number of trees in forest")
 
         OWGUI.separator(self.controlArea)
 

docs/rst/ClassifierChain.rst

 
 
 Setting:
-	- Use actual values
-		If checked, the values added into features are actual values from the data. Otherwise the values predicted by the classifier are used.
+--------
+- Use actual values
+	If checked, the values added into features are actual values from the data. Otherwise the values predicted by the classifier are used.
 
 

docs/rst/ClusteringRandomForest.rst

+ClusteringRandomForest
+=========================
+
+.. image:: ../../_multitarget/widgets/icons/ClusteringRandomForest.png
+   :alt: Widget icon
+   
+Signals
+-------
+
+Inputs:
+   - Data
+   		Data to be used for learning.
+
+Outputs:
+   - Learner or Classifier
+
+Description
+-----------
+
+.. image:: images/crf1.*
+   :alt: Usage example
+
+A clustering random forest is a random forest consisting of clustering trees. The usage is straightforward and the settings are described below.
+
+
+Settings
+--------
+* Number of trees in forest
+
+    The number of clustering trees grown in the forest.
+
+* Stop splitting nodes at depth
+
+    Maximal depth of tree.
+	
+* Minimal majority class proportion
+
+    Minimal proportion of the majority class value each of the class variables has to reach
+    to stop induction (only used for classification). 
+
+
+* Min mean squared error
+
+    Minimal mean squared error each of the class variables has to reach
+    to stop induction (only used for regression). 
+
+* Min. instances in leaves
+
+    Minimal number of instances in leaves. The instance count takes instance weights into account.
+
+* Feature scorer
+
+        * Inter dist (default) - Euclidean distance between centroids of clusters
+        * Intra dist - average Euclidean distance of each member of a cluster to the centroid of that cluster
+        * Silhouette - the silhouette measure (http://en.wikipedia.org/wiki/Silhouette_(clustering)) calculated with Euclidean distances between clusters instead of elements of a cluster.
+        * Gini-index - calculates the Gini-gain index; intended for class variables with nominal values.
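The first two scorers can be sketched in plain Python. This is an illustrative sketch of the distance measures, not the add-on's actual implementation; the function names `inter_dist` and `intra_dist` are chosen here for clarity:

```python
import math

def centroid(points):
    """Component-wise mean of a list of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inter_dist(cluster_a, cluster_b):
    """Euclidean distance between the centroids of two clusters."""
    return euclidean(centroid(cluster_a), centroid(cluster_b))

def intra_dist(cluster):
    """Average Euclidean distance of each member to its cluster's centroid."""
    c = centroid(cluster)
    return sum(euclidean(p, c) for p in cluster) / len(cluster)

left = [(0.0, 0.0), (0.0, 2.0)]   # instances a candidate split sends left
right = [(4.0, 0.0), (4.0, 2.0)]  # instances the split sends right

print(inter_dist(left, right))  # 4.0 -> larger is better for "Inter dist"
print(intra_dist(left))         # 1.0 -> smaller is better for "Intra dist"
```

A split scores well when it produces clusters that are far apart (large inter-cluster distance) and internally compact (small intra-cluster distance).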

docs/rst/ClusteringTree.rst

-EnsembleClassifierChains
+ClusteringTree
 =========================
 
-.. image:: ../../_multitarget/widgets/icons/EnsembleClassifierChain.png
+.. image:: ../../_multitarget/widgets/icons/ClusteringTree.png
    :alt: Widget icon
    
 Signals
 -------
 
 Inputs:
-   - Learner
-   		The base learner used in the ensemble technique.
+   - Data
+   		Data to be used for learning.
 
 Outputs:
    - Learner or Classifier
 
 Description
 -----------
 
-.. image:: images/ecchain1.*
+.. image:: images/ct1.*
    :alt: Usage example
 
-Ensemble classifier chain learner takes a single-target learner and with it creates a number of classifier chains. Each chain is constructed on a random sample of the dataset.
+Clustering trees are similar to classic decision trees; to select features they measure the distance between the clusters a feature would create by splitting the dataset. Usage is simple, and the settings are described below.
 
 
-Setting:
-	- Number of chains
-		Number of classifier chains that are built.
-	- Sample size
-		The size of the random sample taken from the dataset for each chain.
-	- Use actual values
-		If checked, the values added into features are actual values from the data. Otherwise the values predicted by the classifier are used.
+Settings
+--------
 
+* Stop splitting nodes at depth
 
+    Maximal depth of tree.
+	
+* Minimal majority class proportion
+
+    Minimal proportion of the majority class value each of the class variables has to reach
+    to stop induction (only used for classification). 
+
+
+* Min mean squared error
+
+    Minimal mean squared error each of the class variables has to reach
+    to stop induction (only used for regression). 
+
+* Min. instances in leaves
+
+    Minimal number of instances in leaves. The instance count takes instance weights into account.
+
+* Feature scorer
+
+        * Inter dist (default) - Euclidean distance between centroids of clusters
+        * Intra dist - average Euclidean distance of each member of a cluster to the centroid of that cluster
+        * Silhouette - the silhouette measure (http://en.wikipedia.org/wiki/Silhouette_(clustering)) calculated with Euclidean distances between clusters instead of elements of a cluster.
+        * Gini-index - calculates the Gini-gain index; intended for class variables with nominal values.

docs/rst/EnsembleClassifierChains.rst

 
 
 Setting:
-	- Number of chains
-		Number of classifier chains that are built.
-	- Sample size
-		The size of the random sample taken from the dataset for each chain.
-	- Use actual values
-		If checked, the values added into features are actual values from the data. Otherwise the values predicted by the classifier are used.
+--------
+- Number of chains
+	Number of classifier chains that are built.
+- Sample size
+	The size of the random sample taken from the dataset for each chain.
+- Use actual values
+	If checked, the values added into features are actual values from the data. Otherwise the values predicted by the classifier are used.
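The effect of the "Use actual values" option on the features passed along the chain can be sketched schematically (the `chain_features` helper below is a hypothetical illustration, not the add-on's actual code):

```python
def chain_features(instance, actual_targets, predicted_targets, use_actual_values):
    """Extend an instance's feature vector with earlier targets in the chain.

    With use_actual_values=True the known target values from the data are
    appended; otherwise the values predicted by the previous classifiers
    in the chain are used.
    """
    extra = actual_targets if use_actual_values else predicted_targets
    return list(instance) + list(extra)

x = [0.3, 0.7]    # original features
actual = [1, 0]   # actual values of targets already handled in the chain
predicted = [1, 1]  # values predicted by earlier chain members

print(chain_features(x, actual, predicted, True))   # [0.3, 0.7, 1, 0]
print(chain_features(x, actual, predicted, False))  # [0.3, 0.7, 1, 1]
```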
 
 

docs/rst/NeuralNetwork.rst

+NeuralNetwork
+=========================
+
+.. image:: ../../_multitarget/widgets/icons/NeuralNetwork.png
+   :alt: Widget icon
+   
+Signals
+-------
+
+Inputs:
+   - Data
+   		Data to be used for learning.
+
+Outputs:
+   - Learner or Classifier
+
+Description
+-----------
+
+.. image:: images/neural1.*
+   :alt: Usage example
+
+Neural networks are a complex technique made accessible in a simple widget. Settings are described below.
+
+
+Settings
+--------
+
+* Hidden layer neurons
+
+    The number of neurons in the hidden layer.
+	
+* Regularization factor
+
+    The regularization factor controls overfitting; larger values penalize complex models more strongly.
+
+
+* Max iterations
+
+    Maximal number of iterations the optimization algorithm can make.
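The role of the regularization factor can be illustrated with the kind of penalized objective such networks typically minimize (a generic sketch; the widget's actual objective may differ):

```python
def regularized_loss(data_loss, weights, reg_factor):
    """Objective minimized during training: data loss plus an L2 penalty.

    A larger reg_factor penalizes large weights more strongly, which
    combats overfitting at the cost of a less flexible fit.
    """
    penalty = reg_factor * sum(w * w for w in weights)
    return data_loss + penalty

weights = [0.5, -2.0, 1.0]
print(regularized_loss(0.8, weights, 0.0))  # 0.8 -> no regularization
print(regularized_loss(0.8, weights, 0.1))  # approximately 1.325
```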
+

docs/rst/PLSClassification.rst

+PLSClassification
+=========================
+
+.. image:: ../../_multitarget/widgets/icons/PLSClassification.png
+   :alt: Widget icon
+   
+Signals
+-------
+
+Inputs:
+   - Data
+   		Data to be used for learning.
+
+Outputs:
+   - Learner or Classifier
+
+Description
+-----------
+
+.. image:: images/pls1.*
+   :alt: Usage example
+
+PLS is originally a regression technique. PLSClassification wraps Orange's implementation of PLS into a classifier. Usage is the same as for other learners; the settings are described below.
+
+
+Settings
+--------
+
+* Num. of components to keep
+
+    The number of components in the matrix that PLS constructs for regression and classification.

docs/rst/TestMultitargetLearners.rst

+TestMultitargetLearners
+=========================
+
+.. image:: ../../_multitarget/widgets/icons/TestMTLearners.png
+   :alt: Widget icon
+   
+Signals
+-------
+
+Inputs:
+  - Data
+  	Data to be used for testing.
+
+  - Separate Test Data
+    Separate data for testing.
+
+  - Learner
+    One or more learning algorithms.
+
+Outputs:
+   - Evaluation results
+      Results of testing the algorithms.
+
+Description
+-----------
+
+.. image:: images/test1.*
+   :alt: Usage example
+
+This widget is used for testing built learners. We provide the data and a number of learners we want to compare as inputs to this widget. Inside the widget we then select the method of testing and the scores we wish to measure. Results are displayed in the table on the right.
+
+
+.. image:: images/test2.*
+   :alt: Settings example
+
+Settings
+--------
+
+* Sampling
+
+    Here we can choose the sampling method used to test the learners. Available methods are cross-validation, leave-one-out testing, random sampling, test on train data, and test on test data (the last requires additional test data on the input).
+	
+* Performance Scorers
+
+    A list of scorers is available; clicking on one of them adds or removes that scorer from the table of results.
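The cross-validation sampling method can be sketched in plain Python (an illustrative stand-in, not the widget's actual testing code):

```python
def cross_validation_folds(n_instances, k):
    """Split instance indices into k folds; each fold serves once as the
    test set while the remaining folds form the training set."""
    indices = list(range(n_instances))
    folds = [indices[i::k] for i in range(k)]
    splits = []
    for test in folds:
        train = [idx for idx in indices if idx not in test]
        splits.append((train, test))
    return splits

# Every instance appears in exactly one test fold.
for train, test in cross_validation_folds(6, 3):
    print(train, test)
```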

docs/rst/images/crf1.png

Added
New image

docs/rst/images/ct1.png


docs/rst/images/neural1.png


docs/rst/images/pls1.png


docs/rst/images/test1.png


docs/rst/images/test2.png


docs/rst/index.rst

    EnsembleClassifierChains
    PLSClassification
    ClusteringTree
+   ClusteringRandomForest
    NeuralNetwork
    TestMultitargetLearners