[libfann] 93/242: Tons of changes. Added lots of Steffen's report, constants, structures, some functions, finished bibliography, used tidy to fix indent, etc. The PDF jumped from ~45 pages to 66.

Christian Kastner chrisk-guest at moszumanska.debian.org
Sat Oct 4 21:10:23 UTC 2014


This is an automated email from the git hooks/post-receive script.

chrisk-guest pushed a commit to tag Version2_0_0
in repository libfann.

commit 73a1c911d3bee42195304fb5c5ff8e9bdf68227d
Author: Evan Nemerson <evan at coeus-group.com>
Date:   Thu Feb 19 06:30:13 2004 +0000

    Tons of changes. Added lots of Steffen's report, constants, structures, some functions, finished bibliography, used tidy to fix indent, etc. The PDF jumped from ~45 pages to 66.
---
 doc/fann.xml | 5907 +++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 3751 insertions(+), 2156 deletions(-)

diff --git a/doc/fann.xml b/doc/fann.xml
index 82b4de7..f95a5b4 100644
--- a/doc/fann.xml
+++ b/doc/fann.xml
@@ -1,138 +1,136 @@
+<?xml version='1.0' encoding='iso-8859-1'?>
 <!-- $Id$ -->
-<?xml version='1.0' encoding='ISO-8859-1' ?>
 <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "docbook/xml-dtd-4.1.2/docbookx.dtd">
 <book>
- <bookinfo id="bookinfo">
-  <title>Fast Artificial Neural Network Library</title>
-  <authorgroup id="authors">
-   <author>
-    <firstname>Steffen</firstname>
-    <surname>Nissen</surname>
-   </author>
-   <author>
-    <firstname>Evan</firstname>
-    <surname>Nemerson</surname>
-   </author>
-  </authorgroup>
-  <copyright>
-   <year>2004</year>
-  </copyright>
- </bookinfo>
-
- <chapter id="intro">
-  <title>Introduction</title>
-  <para>
-   fann - Fast Artificial Neural Network Library is written in ANSI C. The
-   library implements multilayer feedforward ANNs, up to 150 times faster
-   than other libraries. FANN supports execution in fixed point, for fast
-   execution on systems like the iPAQ.
-  </para>
-
-  <section id="intro.install">
-   <title>Installation</title>
-
-   <section id="intro.install.rpm">
-    <title>RPMs</title>
-    <para>
-     RPMs are a simple way to manage packages, and is used on many common
-     Linux distributions such as <ulink url="http://www.redhat.com">Red Hat</ulink>
-     and <ulink url="http://www.mandrake.com/">Mandrake</ulink>.
-    </para>
-    <para>
-     After downloading FANN, simply run (as root) the following command:
-     <command>rpm -ivh $PATH_TO_RPM</command>
-    </para>
-   </section>
-
-   <section id="intro.install.deb">
-    <title>DEBs</title>
-    <para>
-     Dunno- never used dpkg. Steffen?
-    </para>
-   </section>
-
-   <section id="intro.install.win32">
-    <title>Windows</title>
-    <para>
-     Instructions for Borland & VC++
-    </para>
-   </section>
-
-   <section id="intro.install.src">
-    <title>Compiling from source</title>
-    <para>
-     Compiling FANN from source code entails the standard GNU autotools technique. First,
-     configure the package as you want it by typing (in the FANN directory), <command>
-     ./configure</command> If you need help choosing the options you would like to use,
-     try <command>./configure --help</command>
-    </para>
+  <bookinfo id="bookinfo">
+    <title>Fast Artificial Neural Network Library</title>
+    <authorgroup id="authors">
+      <author>
+        <firstname>Steffen</firstname>
+        <surname>Nissen</surname>
+      </author>
+      <author>
+        <firstname>Evan</firstname>
+        <surname>Nemerson</surname>
+      </author>
+    </authorgroup>
+    <copyright>
+      <year>2004</year>
+    </copyright>
+  </bookinfo>
+  <chapter id="intro">
+    <title>Introduction</title>
     <para>
-     Next, you have to actually compile the library. To do this, simply type <command>make
-     </command>
+      fann - Fast Artificial Neural Network Library is written in ANSI C. The library implements multilayer
+      feedforward ANNs, up to 150 times faster than other libraries. FANN supports execution in fixed point, for fast
+      execution on systems like the iPAQ.
     </para>
-    <para>
-     Finally, to install the library, type <command>make install</command> Odds are you will
-     have to be root to install, so you may need to <command>su</command> to root before installing.
-     Please remember to log out of the root account immediately after <command>make install
-     </command> finishes.
-    </para>
-   </section>
-  </section>
+    <section id="intro.dl">
+      <title id="intro.dl.title">Getting FANN</title>
 
-  <section id="intro.start">
-   <title>Getting Started</title>
-   <para>
-    An ANN is normally run in two different modes, a training mode and an execution mode.
-    Although it is possible to do this in the same program, I will recommend doing it in two
-    different programs.
-   </para>
-   <para>
-    There are several reasons to why it is usually a good idea to write the training and
-    execution in two different programs, but the most obvious is the fact that a typical ANN
-    system is only trained once, while it is executed many times.
-   </para>
-   <section id="intro.start.train">
-    <title>Training</title>
-    <para>
-     The following is a simple program which trains an ANN with a data set and then saves the
-     ANN to a file.
-     <example>
-      <title>Simple training example</title>
-      <programlisting>
+      <para>
+        Copies of FANN can be obtained from our SourceForge project page, located at
+	<ulink url="http://www.sourceforge.net/projects/fann/">http://www.sourceforge.net/projects/fann/</ulink>
+      </para>
+      <para>
+        You can currently get FANN as source code (<filename>fann-*.tar.bz2</filename>), Debian packages
+	(<filename>fann-*.deb</filename>), or RPMs (<filename>fann-*.rpm</filename>).
+      </para>
+      <para>
+        FANN is available under the terms of the
+	<ulink url="http://www.fsf.org/copyleft/lesser.html">GNU Lesser General Public License</ulink>.
+      </para>
+    </section>
+    <section id="intro.install">
+      <title>Installation</title>
+      <section id="intro.install.rpm">
+        <title>RPMs</title>
+        <para>
+	  RPMs are a simple way to manage packages, and are used on many common Linux distributions such as
+          <ulink url="http://www.redhat.com">Red Hat</ulink>, <ulink url="http://www.mandrake.com/">Mandrake</ulink>,
+	  and <ulink url="http://www.suse.com/">SuSE</ulink>.
+	</para>
+        <para>
+	  After downloading FANN, simply run (as root) the following command: <command>rpm -ivh $PATH_TO_RPM</command>
+	</para>
+      </section>
+      <section id="intro.install.deb">
+        <title>DEBs</title>
+        <para>Dunno- never used dpkg. Steffen?</para>
+      </section>
+      <section id="intro.install.win32">
+        <title>Windows</title>
+        <para>Instructions for Borland &amp; VC++</para>
+      </section>
+      <section id="intro.install.src">
+        <title id="intro.install.src.title">Compiling from source</title>
+        <para>
+	  Compiling FANN from source code entails the standard GNU autotools technique. First, configure the package as
+	  you want it by typing (in the FANN directory) <command>./configure</command>. If you need help choosing the
+	  options you would like to use, try <command>./configure --help</command>.
+	</para>
+        <para>
+	  Next, you have to actually compile the library. To do this, simply type <command>make</command>.
+	</para>
+	<para>
+	  Finally, to install the library, type <command>make install</command>. Odds are you will have to
+	  be root to install, so you may need to <command>su</command> to root before installing. Please
+	  remember to log out of the root account immediately after <command>make install</command> finishes.
+	</para>
+      </section>
+    </section>
+    <section id="intro.start">
+      <title id="intro.start.title">Getting Started</title>
+      <para>
+        An ANN is normally run in two different modes, a training mode and an execution mode. Although it is
+        possible to do this in the same program, using different programs is recommended.
+      </para>
+      <para>
+        There are several reasons why it is usually a good idea to write the training and execution in two
+	different programs, but the most obvious is the fact that a typical ANN system is only trained once, while it
+	is executed many times.
+      </para>
+      <section id="intro.start.train">
+        <title id="intro.start.train.title">Training</title>
+        <para>
+	  The following is a simple program which trains an ANN with a data set and then saves the ANN to a file. 
+	</para>
+	<example id="example.simple_train">
+	  <title id="example.simple_train.title">Simple training example</title>
+          <programlisting>
 <![CDATA[
 #include "fann.h"
 
 int main()
 {
-	const float connection_rate = 1;
-	const float learning_rate = 0.7;
-	const unsigned int num_input = 2;
-	const unsigned int num_output = 1;
-	const unsigned int num_layers = 3;
-	const unsigned int num_neurons_hidden = 4;
-	const float desired_error = 0.0001;
-	const unsigned int max_iterations = 500000;
-	const unsigned int iterations_between_reports = 1000;
-
-	struct fann *ann = fann_create(connection_rate, learning_rate, num_layers,
-		num_input, num_neurons_hidden, num_output);
-	
-	fann_train_on_file(ann, "xor.data", max_iterations,
-		iterations_between_reports, desired_error);
-	
-	fann_save(ann, "xor_float.net");
-	
-	fann_destroy(ann);
-
-	return 0;
+        const float connection_rate = 1;
+        const float learning_rate = 0.7;
+        const unsigned int num_input = 2;
+        const unsigned int num_output = 1;
+        const unsigned int num_layers = 3;
+        const unsigned int num_neurons_hidden = 4;
+        const float desired_error = 0.0001;
+        const unsigned int max_iterations = 500000;
+        const unsigned int iterations_between_reports = 1000;
+
+        struct fann *ann = fann_create(connection_rate, learning_rate, num_layers,
+                num_input, num_neurons_hidden, num_output);
+        
+        fann_train_on_file(ann, "xor.data", max_iterations,
+                iterations_between_reports, desired_error);
+        
+        fann_save(ann, "xor_float.net");
+        
+        fann_destroy(ann);
+
+        return 0;
 }
 ]]>
-      </programlisting>
-     </example>
-    </para>
-    <para>
-     The file xor.data, used to train the xor function:
-     <literallayout>
+          </programlisting>
+	</example>
+        <para>
+	  The file xor.data, used to train the xor function:
+	  <literallayout>
 4 2 1
 0 0
 0
@@ -142,1311 +140,2781 @@ int main()
 1
 1 1
 0
-     </literallayout>The first line consists of three numbers:
-     The first is the number of training pairs in the file, the
-     second is the number of inputs and the third is the number
-     of outputs. The rest of the file is the actual training data,
-     consisting of one line with inputs, one with outputs etc.
-    </para>
-    <para>
-     TODO: Link up functions to API reference.
-    </para>
-   </section>
-
-   <section id="intro.start.execution">
-    <title>Execution</title>
-    <para>
-     The following example shows a simple program which executes a single
-     input on the ANN. The program introduces two new functions which were
-     not used in the traiining procedure, as well as the fann_type type.
-     <example>
-      <title>Simple training example</title>
-      <programlisting>
+	  </literallayout> The first line consists of three numbers: the first is the number of training pairs in the file, the second is the number of inputs, and
+	  the third is the number of outputs. The rest of the file is the actual training data, consisting of one line with inputs, one with outputs, etc.
+	</para>
+	<para>
+	  This example introduces several fundamental functions, namely <link linkend="api.fann_create"><function>fann_create</function></link>,
+	  <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link>,
+	  <link linkend="api.fann_save"><function>fann_save</function></link>, and <link linkend="api.fann_destroy"><function>fann_destroy</function></link>.
+	</para>
+      </section>
+      <section id="intro.start.execution">
+        <title id="intro.start.execution.title">Execution</title>
+        <para>
+	  The following example shows a simple program which executes a single input on the ANN. The program introduces two new functions
+	  (<link linkend="api.fann_create_from_file"><function>fann_create_from_file</function></link> and
+	  <link linkend="api.fann_run"><function>fann_run</function></link>) which were not used in the training procedure, as well as the <type>fann_type</type>
+	  type.
+	</para>
+        <example id="example.simple_exec">
+          <title id="example.simple_exec.title">Simple execution example</title>
+          <programlisting>
 <![CDATA[
 #include <stdio.h>
 #include "floatfann.h"
 
 int main()
 {
-	fann_type *calc_out;
-	fann_type input[2];
-
-	struct fann *ann = fann_create_from_file("xor_float.net");
-	
-	input[0] = 0;
-	input[1] = 1;
-	calc_out = fann_run(ann, input);
-
-	printf("xor test (%f,%f) -> %f\n",
-		input[0], input[1], *calc_out);
-	
-	fann_destroy(ann);
-	return 0;
+        fann_type *calc_out;
+        fann_type input[2];
+
+        struct fann *ann = fann_create_from_file("xor_float.net");
+        
+        input[0] = 0;
+        input[1] = 1;
+        calc_out = fann_run(ann, input);
+
+        printf("xor test (%f,%f) -> %f\n",
+                input[0], input[1], *calc_out);
+        
+        fann_destroy(ann);
+        return 0;
 }
 ]]>
-      </programlisting>
-     </example>
-    </para>
-   </section>
-  </section>
- </chapter>
-
- <chapter id="theory">
-  <title>Neural Network Theory</title>
-  <para>
-   This section will briefly explain the theory of neural networks (hereafter known
-   as NN) and artificial neural networks (hereafter known as ANN). For a more in depth
-   explanation of these concepts please consult the literature;
-   <xref linkend="bib.hassoun_1995" endterm="bib.hassoun_1995.abbrev"/> has good coverage
-   of most concepts of ANN and <xref linkend="bib.hertz_1991" endterm="bib.hertz_1991.abbrev"/>
-   describes the mathematics of ANN very thoroughly, while
-   <xref linkend="bib.anderson_1995" endterm="bib.anderson_1995.abbrev"/> has a more
-   psychological and physiological approach to NN and ANN. For the pragmatic I could recommend
-   <xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev"/>, which has a short
-   and easily understandable introduction to NN and ANN.
-  </para>
-
-  <section id="theory.neural_networks">
-   <title>Neural Networks</title>
-
-   <para>
-    The human brain is a highly complicated machine capable of solving very complex problems.
-    Although we have a good understanding of some of the basic operations that drive the brain,
-    we are still far from understanding everything there is to know about the brain. 
-   </para>
-   <para>
-    In order to understand ANN, you will need to have a basic knowledge of how the internals of
-    the brain work. The brain is part of the central nervous system and consists of a very large
-    NN. The NN is actually quite complicated, but I will only include the details needed to
-    understand ANN, in order to simplify the explanation. 
-   </para>
-   <para>
-    The NN is a network consisting of connected neurons. The center of the neuron is called the
-    nucleus. The nucleus is connected to other nucleuses by means of the dendrites and the axon.
-    This connection is called a synaptic connection.
-   </para>
-   <para>
-    The neuron can fire electric pulses through its synaptic connections, which is received at
-    the dendrites of other neurons.
-   </para>
-   <para>
-    When a neuron receives enough electric pulses through its dendrites, it activates and fires a
-    pulse through its axon, which is then received by other neurons. In this way information can
-    propagate through the NN. The synaptic connections change throughout the lifetime of a neuron
-    and the amount of incoming pulses needed to activate a neuron (the threshold) also change. This
-    behavior allows the NN to learn.
-   </para>
-   <para>
-    The human brain consists of around 10^11 neurons which are highly interconnected with around
-    10^15 connections <xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev"/>.
-    These neurons activates in parallel as an effect to internal and external sources. The brain is
-    connected to the rest of the nervous system, which allows it to receive information by means of
-    the five senses and also allows it to control the muscles.
-   </para>
-  </section>
-
-  <section id="theory.artificial_neural_networks">
-   <title>Artificial Neural Networks</title>
+          </programlisting>
+	</example>
+      </section>
+    </section>
+  </chapter>
+  <chapter id="adv">
+    <title id="adv.title">Advanced Usage</title>
     <para>
-     It is not possible (at the moment) to make an artificial brain, but it is possible to make
-     simplified artificial neurons and artificial neural networks. These ANNs can be made in many
-     different ways and can try to mimic the brain in many different ways.
-   </para>
-   <para>
-    ANNs are not intelligent, but they are good for recognizing patterns and making simple rules for
-    complex problems. They also have excellent training capabilities which is why they are often used
-    in artificial intelligence research.
-   </para>
-   <para>
-    ANNs are good at generalizing from a set of training data. E.g. this means an ANN given data about
-    a set of animals connected to a fact telling if they are mammals or not, is able to predict whether
-    an animal outside the original set is a mammal from its data. This is a very desirable feature of
-    ANNs, because you do not need to know the characteristics defining a mammal, the ANN will find out
-    by itself.
-   </para>
-  </section>
-
-  <section id="theory.training">
-   <title>Training an ANN</title>
-
-   <para>
-    When training an ANN with a set of input and output data, we wish to adjust the weights in the ANN,
-    to make the ANN give the same outputs as seen in the training data. On the other hand, we do not
-    want to make the ANN too specific, making it give precise results for the training data, but incorrect
-    results for all other data. When this happens, we say that the ANN has been over-fitted.
-   </para>
-   <para>
-    The training process can be seen as an optimization problem, where we wish to minimize the mean square
-    error of the entire set of training data. This problem can be solved in many different ways, ranging
-    from standard optimization heuristics like simulated annealing, through more special optimization
-    techniques like genetic algorithms to specialized gradient descent algorithms like backpropagation. 
-   </para>
-   <para>
-    The most used algorithm is the backpropagation algorithm, but this algorithm has some limitations
-    concerning, the extent of adjustment to the weights in each iteration. This problem has been solved in
-    more advanced algorithms like RPROP
-    <xref linkend="bib.riedmiller_1993" endterm="bib.riedmiller_1993.abbrev"/> and quickprop
-    <xref linkend="bib.fahlman_1988" endterm="bib.fahlman_1988.abbrev"/>, but I will not elaborate further
-    on these algorithms.
-   </para>
-  </section>
- </chapter>
-
- <chapter id="api">
-  <title>API Reference</title>
-  <para>
-   This is a list of all functions in FANN.
-  </para>
-
-  <section id="api.sec.create_destroy">
-   <title>Creation and Destruction</title>
-
-   <refentry id="api.fann_create">
-    <refnamediv>
-     <refname>fann_create</refname>
-     <refpurpose>Save an artificial neural network to a file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>struct fann *</type><methodname>fann_create</methodname>
-       <methodparam><type>float</type><parameter>connection_rate</parameter></methodparam>
-       <methodparam><type>float</type><parameter>learning_rate</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>num_layers</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>...</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_create</function> will create a new artificial neural network, and return
-      a pointer to it.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_create_array">
-    <refnamediv>
-     <refname>fann_create_array</refname>
-     <refpurpose>Save an artificial neural network to a file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>struct fann *</type><methodname>fann_create_array</methodname>
-       <methodparam><type>float</type><parameter>connection_rate</parameter></methodparam>
-       <methodparam><type>float</type><parameter>learning_rate</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>num_layers</parameter></methodparam>
-       <methodparam><type>unsigned int *</type><parameter>neurons_per_layer</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_create_array</function> will create a new artificial neural network, and return
-      a pointer to it. It is the same as <function>fann_create</function>, only it accepts an array
-      as its final parameter instead of variable arguments.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_destroy">
-    <refnamediv>
-     <refname>fann_destroy</refname>
-     <refpurpose>Destroy an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_destroy</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_destroy</function> will destroy an artificial neural network, properly
-      freeing all associated memory.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_run">
-    <refnamediv>
-     <refname>fann_run</refname>
-     <refpurpose>Run an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>fann_type *</type><methodname>fann_run</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>fann_type *</type><parameter>input</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_run</function> will run <parameter>input</parameter> through
-      <parameter>ann</parameter>, returning an array of outputs, the number of which
-      being equal to the number of neurons in the output layer.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
-
-  <section id="api.sec.io">
-   <title>Input/Output</title>
-
-   <refentry id="api.fann_save">
-    <refnamediv>
-     <refname>fann_save</refname>
-     <refpurpose>Save an ANN to a file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_save</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_save</function> will attempt to save <parameter>ann</parameter>
-      to the file located at <parameter>configuration_file</parameter>
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_save_to_fixed">
-    <refnamediv>
-     <refname>fann_save_to_fixed</refname>
-     <refpurpose>Save an ANN to a fixed-point file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_save_to_fixed</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_save_fixed</function> will attempt to save <parameter>ann</parameter>
-      to the file located at <parameter>configuration_file</parameter> as a fixed-point netowrk.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_create_from_file">
-    <refnamediv>
-     <refname>fann_create_from_file</refname>
-     <refpurpose>Load an ANN from a file..</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>struct fann *</type><methodname>fann_create_from_file</methodname>
-       <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_create_from_file</function> will attempt to load an artificial neural netowrk
-      from a file.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
-
-  <section id="api.sec.train_algo">
-   <title>Training</title>
-
-   <refentry id="api.fann_train">
-    <refnamediv>
-     <refname>fann_train</refname>
-     <refpurpose>Train an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_train</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>fann_type *</type><parameter>input</parameter></methodparam>
-       <methodparam><type>fann_type *</type><parameter>output</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_train</function> will train one iteration with a set of inputs, and
-      a set of desired outputs.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_test">
-    <refnamediv>
-     <refname>fann_test</refname>
-     <refpurpose>Tests an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>fann_type *</type><methodname>fann_test</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>fann_type *</type><parameter>input</parameter></methodparam>
-       <methodparam><type>fann_type *</type><parameter>desired_error</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      Test with a set of inputs, and a set of desired outputs.
-      This operation updates the mean square error, but does not
-      change the network in any way.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_get_MSE">
-    <refnamediv>
-     <refname>fann_get_MSE</refname>
-     <refpurpose>Return the mean square error of an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_MSE</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      Reads the mean square error from the network.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_reset_MSE">
-    <refnamediv>
-     <refname>fann_reset_MSE</refname>
-     <refpurpose>Reset the mean square error of an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_reset_MSE</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      Resets the mean square error from the network.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
-
-  <section id="api.sec.train_data">
-   <title>Training Data</title>
-
-   <refentry id="api.fann_read_train_from_file">
-    <refnamediv>
-     <refname>fann_read_train_from_file</refname>
-     <refpurpose>Read training data from a file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-    <title>Description</title>
-     <methodsynopsis>
-      <type>struct fann_train_data *</type><methodname>fann_read_train_from_file</methodname>
-      <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      <function>fann_read_train_from_file</function> will load training data from a file.
      This section describes some of the low-level functions and how they can be used to obtain more control over the fann library. For a full list of functions,
+      please see the <link linkend="api">API Reference</link>, which has an explanation of all the fann library functions. Also feel free to take a look at
+      the source code.
     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_destroy_train">
-    <refnamediv>
-     <refname>fann_destroy_train</refname>
-    <refpurpose>Destroy training data.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-     <type>void</type><methodname>fann_destroy_train_data</methodname>
-      <methodparam><type>struct fann_train_data *</type><parameter>train_data</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Destroy the training data stored in <parameter>train_data</parameter>, freeing the associated memory.
+    <para>
+      This section describes different procedures which can help you get more power out of the fann library:
+      <link linkend="adv.adj" endterm="adv.adj.title" />, <link linkend="adv.design" endterm="adv.design.title" />,
+      <link linkend="adv.errval" endterm="adv.errval.title" />, and <link linkend="adv.train_test" endterm="adv.train_test.title" />.
     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="api.fann_train_on_data">
-    <refnamediv>
-     <refname>fann_train_on_data</refname>
-    <refpurpose>Train an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_train_on_data</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>max_epochs</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>epochs_between_reports</parameter></methodparam>
-      <methodparam><type>float</type><parameter>desired_error</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Trains <parameter>ann</parameter> using <parameter>data</parameter> until <parameter>desired_error</parameter>
-      is reached, or until <parameter>max_epochs</parameter> is surpassed.
-     </para>
-    </refsect1>
-   </refentry>
 
-   <refentry id="api.fann_train_on_data_callback">
-    <refnamediv>
-     <refname>fann_train_on_data_callback</refname>
-     <refpurpose>Train an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_train_on_data_callback</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>max_epochs</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>epochs_between_reports</parameter></methodparam>
-      <methodparam><type>float</type><parameter>desired_error</parameter></methodparam>
-      <methodparam><type>int</type><parameter>(*callback)(unsigned int epochs, float error)</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Trains <parameter>ann</parameter> using <parameter>data</parameter> until <parameter>desired_error</parameter>
-      is reached, or until <parameter>max_epochs</parameter> is surpassed.
-     </para>
-     <para>
-      This function behaves identically to <link linkend="api.fann_train_on_data"><function>fann_train_on_data</function></link>,
-      except that <function>fann_train_on_data_callback</function> allows you to specify a function to be called every
-      <parameter>epochs_between_reports</parameter> instead of using the default reporting mechanism.
-     </para>
-    </refsect1>
-   </refentry>
+    <section id="adv.adj">
+      <title id="adv.adj.title">Adjusting Parameters</title>
 
-   <refentry id="api.fann_train_on_file">
-    <refnamediv>
-     <refname>fann_train_on_file</refname>
-     <refpurpose>Train an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_train_on_file</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>max_epochs</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>epochs_between_reports</parameter></methodparam>
-      <methodparam><type>float</type><parameter>desired_error</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Trains <parameter>ann</parameter> using the data in <parameter>filename</parameter> until
-      <parameter>desired_error</parameter> is reached, or until <parameter>max_epochs</parameter>
-      is surpassed.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+        An ANN has several different parameters. The fann library gives these parameters sensible defaults, but they can be adjusted at runtime. There is no
+	sense in adjusting most of these parameters after training, since doing so would invalidate the training, but it does make sense to adjust some of the
+	parameters during training, as will be described in <link linkend="adv.train_test" endterm="adv.train_test.title" />. Generally speaking,
+	the parameters below should be adjusted before training.
+      </para>
+      <para>
+	The learning rate is one of the most important parameters, but unfortunately it is also a parameter for which it is hard to find a reasonable default. I
+	(SN) have several times ended up using 0.7, but it is a good idea to test several different learning rates when training a network. The learning rate can
+	be set when creating the network, but it can also be set with the
+	<link linkend="api.fann_set_learning_rate"><function>fann_set_learning_rate</function></link> function.
+      </para>
+      <para>
+	The initial weights are random values between -0.1 and 0.1. If other weights are preferred, they can be altered with the
+	<link linkend="api.fann_randomize_weights"><function>fann_randomize_weights</function></link> function.
+      </para>
+      <para>
+	The standard activation function is the sigmoid activation function, but it is also possible to use the threshold activation function. A list of the
+	currently available activation functions is available in the <link linkend="api.sec.constants.activation" endterm="api.sec.constants.activation.title"/>
+	section. The activation functions are chosen using the
+	<link linkend="api.fann_set_activation_function_hidden"><function>fann_set_activation_function_hidden</function></link> and
+	<link linkend="api.fann_set_activation_function_output"><function>fann_set_activation_function_output</function></link> functions.
+      </para>
+      <para>
+	These two functions set the activation function for the hidden layers and for the output layer. Likewise the steepness parameter used in the sigmoid
+	function can be adjusted with the
+	<link linkend="api.fann_set_activation_hidden_steepness"><function>fann_set_activation_hidden_steepness</function></link> and
+	<link linkend="api.fann_set_activation_output_steepness"><function>fann_set_activation_output_steepness</function></link> functions.
+      </para>
+      <para>
+        FANN distinguishes between the hidden layers and the output layer to allow more flexibility. This is especially useful for users who want discrete
+	output from the network, since they can set the activation function for the output layer to threshold. Please note that it is not possible to train a
+	network that uses the threshold activation function, because it is not differentiable.
+      </para>
+    </section>
 
-   <refentry id="api.fann_train_on_file_callback">
-    <refnamediv>
-     <refname>fann_train_on_file_callback</refname>
-     <refpurpose>Train an ANN.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_train_on_file_callback</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>max_epochs</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>epochs_between_reports</parameter></methodparam>
-      <methodparam><type>float</type><parameter>desired_error</parameter></methodparam>
-      <methodparam><type>int</type><parameter>(*callback)(unsigned int epochs, float error)</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Trains <parameter>ann</parameter> using the data in <parameter>filename</parameter> until
-      <parameter>desired_error</parameter> is reached, or until <parameter>max_epochs</parameter>
-      is surpassed.
-     </para>
-     <para>
-      This function behaves identically to <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link>,
-      except that <function>fann_train_on_file_callback</function> allows you to specify a function to be called every
-      <parameter>epochs_between_reports</parameter> instead of using the default reporting mechanism.
-     </para>
-    </refsect1>
-   </refentry>
+    <section id="adv.design">
+      <title id="adv.design.title">Network Design</title>
 
-   <refentry id="api.fann_shuffle_train_data">
-    <refnamediv>
-     <refname>fann_shuffle_train_data</refname>
-     <refpurpose>Shuffle the training data.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_shuffle_train_data</methodname>
-      <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      <function>fann_shuffle_train_data</function> will randomize the order of the training data
-      contained in <parameter>data</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+	When creating a network it is necessary to define how many layers, neurons and connections it should have. If the network becomes too large, the ANN will
+	have difficulties learning, and when it does learn it will tend to over-fit, resulting in poor generalization. If the network becomes too small, it will
+	not be able to represent the rules needed to learn the problem and it will never reach a sufficiently low error rate.
+      </para>
+      <para>
+	The number of hidden layers is also important. Generally speaking, if the problem is simple it is often enough to have one or two hidden layers, but as
+	the problems get more complex, so does the need for more layers.
+      </para>
+      <para>
+        One way of getting a large network which is not too complex is to adjust the connection_rate parameter given to
+	<link linkend="api.fann_create"><function>fann_create</function></link>. If this parameter is 0.5, the constructed network will have the same number of
+	neurons, but only half as many connections. It is difficult to say which problems this approach is useful for, but if you have a problem which can be
+	solved by a fully connected network, it is a good idea to check whether it still works after removing half the connections.
+      </para>
+    </section>
 
-   <refentry id="api.fann_merge_train_data">
-    <refnamediv>
-     <refname>fann_merge_train_data</refname>
-     <refpurpose>Merge two sets of training data.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>struct fann_train_data *</type><methodname>fann_merge_train_data</methodname>
-      <methodparam><type>struct fann_train_data *</type><parameter>data1</parameter></methodparam>
-      <methodparam><type>struct fann_train_data *</type><parameter>data2</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      <function>fann_merge_train_data</function> will return a single set of training data which
-      contains all data from <parameter>data1</parameter> and <parameter>data2</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+    <section id="adv.errval">
+      <title id="adv.errval.title">Understanding the Error Value</title>
 
-   <refentry id="api.fann_duplicate_train_data">
-    <refnamediv>
-     <refname>fann_duplicate_train_data</refname>
-     <refpurpose>Copies a set of training data.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>struct fann_train_data *</type><methodname>fann_duplicate_train_data</methodname>
-      <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      <function>fann_duplicate_train_data</function> will return a copy of <parameter>data</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
+      <para>
+	The mean square error value is calculated while the ANN is being trained, and several functions are provided to use and manipulate this error value. The
+	<link linkend="api.fann_get_error"><function>fann_get_error</function></link> function returns the error value and the
+	<link linkend="api.fann_reset_error"><function>fann_reset_error</function></link> function resets it. The following explains how the mean square error
+	value is calculated, to give an idea of how well the value reveals the quality of the training.
+      </para>
+      <para>
+	If <emphasis>d</emphasis> is the desired output of an output neuron and <emphasis>y</emphasis> is the actual output of the neuron, the square error is
+	(d - y) squared. If two output neurons exist, the mean square error for these two neurons is the average of the two square errors.
+      </para>
+      <para>
+	When training with the <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link> function, an error value is printed. This
+	error value is the mean square error for all the training data, i.e. the average of the square errors over all the training pairs.
+      </para>
+    </section>
 
-  <section id="api.sec.options">
-   <title>Options</title>
+    <section id="adv.train_test">
+      <title id="adv.train_test.title">Training and Testing</title>
 
-   <refentry id="api.fann_get_learning_rate">
-    <refnamediv>
-     <refname>fann_get_learning_rate</refname>
-     <refpurpose>Retrieve learning rate from a network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>float</type><methodname>fann_get_learning_rate</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the learning rate for a given network.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+        Normally it will be sufficient to use the <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link> training function, but
+	sometimes you want more control and will have to write a custom training loop. This could be because you would like another stop criterion,
+	or because you would like to adjust some of the parameters during training. An alternative stop criterion to the value of the combined mean square error
+	could be that each of the training pairs should have a mean square error lower than a given value.
+      </para>
+      <example id="example.train_on_file_internals">
+        <title id="example.train_on_file_internals.title">
+	  The internals of the <function>fann_train_on_file</function> function, without writing the status line.
+	</title>
+        <programlisting>
+<![CDATA[
+struct fann_train_data *data = fann_read_train_from_file(filename);
+unsigned int i, j;
+for(i = 1 ; i <= max_epochs ; i++) {
+  fann_reset_error(ann);
+  for (j = 0 ; j != data->num_data ; j++) {
+    fann_train(ann, data->input[j], data->output[j]);
+  }
+  if ( fann_get_error(ann) < desired_error ) {
+    break;
+  }
+}
+fann_destroy_train(data);
+]]>
+        </programlisting>
+      </example>
+      <para>
+	This piece of code introduces the <link linkend="api.fann_train"><function>fann_train</function></link> function, which trains the ANN for one iteration
+	with one pair of inputs and outputs, and also updates the mean square error. It also introduces the
+	<link linkend="api.struct.fann_train_data"><type>fann_train_data</type></link> structure, a container for the
+	training data in the file described in figure 10. The structure can be used to train the ANN, but it can also be used to test the ANN with data which it
+	has not been trained with.
+      </para>
+      <example id="example.calc_mse">
+	<title id="example.calc_mse.title">Tests all of the data in a file and calculates the mean square error.</title>
+	<programlisting>
+<![CDATA[
+struct fann_train_data *data = fann_read_train_from_file(filename);
+fann_reset_error(ann);
+for(i = 0 ; i != data->num_data ; i++ ) {
+  fann_test(ann, data->input[i], data->output[i]);
+}
+printf("Mean Square Error: %f\n", fann_get_error(ann));
+fann_destroy_train(data);
+]]>
+	</programlisting>
+      </example>
+      <para>
+	This piece of code introduces another useful function, <link linkend="api.fann_test"><function>fann_test</function></link>, which takes an input
+	array and a desired output array as parameters and returns the calculated output. It also updates the mean square error.
+      </para>
+    </section>
+    <section id="adv.over_fit">
+      <title id="adv.over_fit.title">Avoid Over-Fitting</title>
 
-   <refentry id="api.fann_set_learning_rate">
-    <refnamediv>
-     <refname>fann_set_learning_rate</refname>
-     <refpurpose>Set a network's learning rate.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type></type><methodname>fann_set_learning_rate</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>float</type><parameter>learning_rate</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the learning rate of a network.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+        With the knowledge of how to train and test an ANN, a new approach to training can be introduced. If too much training is applied to a set of data, the
+	ANN will eventually over-fit, meaning that it will be fitted precisely to this set of training data and thereby lose generalization. It is often a
+	good idea to test how well an ANN performs on data that it has not seen before. Testing with unseen data can be done while training, to see
+	how much training is required in order to perform well without over-fitting. The testing can either be done by hand, or an automatic test can be applied
+	which stops the training when the mean square error on the test data is no longer improving.
+      </para>
+    </section>
+    <section id="adv.adj_train">
+      <title id="adv.adj_train.title">Adjusting Parameters During Training</title>
 
-   <refentry id="api.fann_get_activation_function_hidden">
-    <refnamediv>
-     <refname>fann_get_activation_function_hidden</refname>
-     <refpurpose>Get the activation function of the hidden layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_activation_function_hidden</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the activation function of the hidden layer.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+	If a very low mean square error is required, it can sometimes be a good idea to gradually decrease the learning rate during training, in order to make the
+	adjustment of the weights more subtle. If more precision is required, it might also be a good idea to use double precision floats instead of standard floats.
+      </para>
+      <para>
+	The threshold activation function is faster than the sigmoid function, but since it is not possible to train with this function, you may wish to consider
+	an alternate approach:
+      </para>
+      <para>
+	While training the ANN you could slightly increase the steepness parameter of the sigmoid function. This makes the sigmoid function steeper and
+	more like the threshold function. After this training session you could set the activation function to the threshold function, and the ANN
+	would work with this activation function. This approach does not work on all kinds of problems, but it has been successfully tested on the XOR function.
+      </para>
+    </section>
+  </chapter>
+  <chapter id="fixed">
+    <title id="fixed.title">Fixed Point Usage</title>
 
-   <refentry id="api.fann_set_activation_function_hidden">
-    <refnamediv>
-     <refname>fann_set_activation_function_hidden</refname>
-     <refpurpose>Set the activation function for the hidden layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type></type><methodname>fann_set_activation_function_hidden</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>activation_function</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the activation function of the hidden layer to <parameter>activation_function></parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+    <para>
+      It is possible to run the ANN with fixed point numbers (internally represented as integers). This option is intended primarily for use on computers with
+      no floating point processor, such as the iPAQ, but a minor performance enhancement can also be seen on most modern computers
+      <xref linkend="bib.IDS_2000" endterm="bib.IDS_2000.abbrev"/>.
+    </para>
 
-   <refentry id="api.fann_get_activation_function_output">
-    <refnamediv>
-     <refname>fann_get_activation_function_output</refname>
-     <refpurpose>Get the activation function of the output layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_activation_function_output</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the activation function of the output layer.
-     </para>
-    </refsect1>
-   </refentry>
+    <section id="fixed.train">
+      <title id="fixed.train.title">Training a Fixed Point ANN</title>
 
-   <refentry id="api.fann_set_activation_function_output">
-    <refnamediv>
-     <refname>fann_set_activation_function_output</refname>
-     <refpurpose>Set the activation function for the output layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_set_activation_function_output</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>unsigned int</type><parameter>activation_function</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the activation function of the output layer to <parameter>activation_function></parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+      <para>
+        The ANN cannot be trained in fixed point, so the training part is basically the same as for floating point numbers. The only difference is that
+	you should save the ANN as fixed point. This is done with the <link linkend="api.fann_save_to_fixed"><function>fann_save_to_fixed</function></link>
+	function, which saves a fixed point version of the ANN but also performs some analysis in order to find out where the decimal point should be.
+	The result of this analysis is returned from the function.
+      </para>
+      <para>
+	The decimal point returned from the function indicates how many bits are used for the fractional part of the fixed point numbers. If this number
+	is negative, there will most likely be integer overflow when running the library with fixed point numbers, and this should be avoided. Furthermore, if
+	the decimal point is too low (e.g. lower than 5), it is probably not a good idea to use the fixed point version.
+      </para>
+      <para>
+	Please note that the inputs to networks that are to be used in fixed point should be between -1 and 1.
+      </para>
+      <example id="example.train_fixed">
+	<title id="example.train_fixed.title">An example of a program written to support training in both fixed point and floating point numbers</title>
+	<programlisting>
+<![CDATA[
+#include "fann.h"
+#include <stdio.h>
 
-   <refentry id="api.fann_get_activation_hidden_steepness">
-    <refnamediv>
-     <refname>fann_get_activation_hidden_steepness</refname>
-     <refpurpose>Retrieve the steepness of the activation function of the hidden layers.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>fann_type</type><methodname>fann_get_activation_hidden_steepness</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the steepness of the activation function of the hidden layers.
-     </para>
-    </refsect1>
-   </refentry>
+int main()
+{
+	fann_type *calc_out;
+	const float connection_rate = 1;
+	const float learning_rate = 0.7;
+	const unsigned int num_input = 2;
+	const unsigned int num_output = 1;
+	const unsigned int num_layers = 3;
+	const unsigned int num_neurons_hidden = 4;
+	const float desired_error = 0.001;
+	const unsigned int max_iterations = 20000;
+	const unsigned int iterations_between_reports = 100;
+	struct fann *ann;
+	struct fann_train_data *data;
+	
+	unsigned int i = 0;
+	unsigned int decimal_point;
 
-   <refentry id="api.fann_set_activation_hidden_steepness">
-    <refnamediv>
-     <refname>fann_set_activation_hidden_steepness</refname>
-     <refpurpose>Set the steepness of the activation function of the hidden layers.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_set_activation_hidden_steepness</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>fann_type</type><parameter>steepness</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the steepness of the activation function of thie hidden layers of
-      <parameter>ann</parameter> to <parameter>steepness</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	printf("Creating network.\n");
 
-   <refentry id="api.fann_get_activation_output_steepness">
-    <refnamediv>
-     <refname>fann_get_activation_output_steepness</refname>
-     <refpurpose>Retrieve the steepness of the activation function of the hidden layers.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>fann_type</type><methodname>fann_get_activation_output_steepness</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the steepness of the activation function of the hidden layers.
-     </para>
-    </refsect1>
-   </refentry>
+	ann = fann_create(connection_rate, learning_rate, num_layers,
+		num_input,
+		num_neurons_hidden,
+		num_output);
 
-   <refentry id="api.fann_set_activation_output_steepness">
-    <refnamediv>
-     <refname>fann_set_activation_output_steepness</refname>
-     <refpurpose>Set the steepness of the activation function of the hidden layers.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_set_activation_output_steepness</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>fann_type</type><parameter>steepness</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the steepness of the activation function of thie hidden layers of
-      <parameter>ann</parameter> to <parameter>steepness</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	printf("Training network.\n");
 
-   <refentry id="api.fann_get_num_input">
-    <refnamediv>
-     <refname>fann_get_num_input</refname>
-     <refpurpose>Get the number of neurons in the input layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_num_input</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the number of neurons in the input layer of <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	data = fann_read_train_from_file("xor.data");
 
-   <refentry id="api.fann_get_num_output">
-    <refnamediv>
-     <refname>fann_get_num_output</refname>
-     <refpurpose>Get number of neurons in the output layer.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_num_output</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the number of neurons in the output layer of <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	fann_train_on_data(ann, data, max_iterations, iterations_between_reports, desired_error);
 
-   <refentry id="api.fann_get_total_neurons">
-    <refnamediv>
-     <refname>fann_get_total_neurons</refname>
-     <refpurpose>Get the total number of neurons in a network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_total_neurons</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the total number of neurons in <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	printf("Testing network.\n");
 
-   <refentry id="api.fann_get_total_connections">
-    <refnamediv>
-     <refname>fann_get_total_connections</refname>
-     <refpurpose>Get the total number of connections in a network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_total_connections</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the total number of connections in <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	for(i = 0; i < data->num_data; i++){
+		calc_out = fann_run(ann, data->input[i]);
+		printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n",
+		data->input[i][0], data->input[i][1], *calc_out, data->output[i][0], fann_abs(*calc_out - data->output[i][0]));
+	}
+	
+	printf("Saving network.\n");
 
-   <refentry id="api.fann_get_decimal_point">
-    <refnamediv>
-     <refname>fann_get_decimal_point</refname>
-     <refpurpose>Get the position of the decimal point.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_decimal_point</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the position of the decimal point in <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
+	fann_save(ann, "xor_float.net");
 
-   <refentry id="api.fann_get_multiplier">
-    <refnamediv>
-     <refname>fann_get_multiplier</refname>
-     <refpurpose>Get the multiplier.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type></type><methodname>fann_get_multiplier</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Return the multiplier that fix point data in <parameter>ann</parameter> is
-      multiplied with.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
+	decimal_point = fann_save_to_fixed(ann, "xor_fixed.net");
+	fann_save_train_to_fixed(data, "xor_fixed.data", decimal_point);
+	
+	printf("Cleaning up.\n");
+	fann_destroy_train(data);
+	fann_destroy(ann);
+	
+	return 0;
+}
+]]>
+	</programlisting>
+      </example>
+    </section>
+    <section id="fixed.run">
+      <title id="fixed.run.title">Running a Fixed Point ANN</title>
 
-  <section id="api.sec.errors">
-   <title>Error Handling</title>
+      <para>
+	Running a fixed point ANN is done much like running an ordinary ANN. The difference is that the inputs and outputs should be in fixed point
+	representation. Furthermore, the inputs should be restricted to between -<parameter>multiplier</parameter> and <parameter>multiplier</parameter> to
+	avoid integer overflow, where the <parameter>multiplier</parameter> is the value returned from
+	<link linkend="api.fann_get_multiplier"><function>fann_get_multiplier</function></link>. This multiplier is the value that a floating point number should
+	be multiplied by in order to become a fixed point number; likewise, the output of the ANN should be divided by this multiplier in order to fall between
+	zero and one.
+      </para>
+      <para>
+	To help with fixed point numbers, another function is provided:
+	<link linkend="api.fann_get_decimal_point"><function>fann_get_decimal_point</function></link>, which returns the decimal point. The decimal point is the
+	position dividing the integer and fractional parts of the fixed point number, and it is useful when doing operations on the fixed point inputs and outputs.
+      </para>
+      <example id="example.exec_fixed">
+	<title id="example.exec_fixed.title">An example of a program written to support both fixed point and floating point numbers</title>
+	<programlisting>
+<![CDATA[
+#include <time.h>
+#include <sys/time.h>
+#include <stdio.h>
 
-   <refentry id="api.fann_get_errno">
-    <refnamediv>
-     <refname>fann_get_errno</refname>
-     <refpurpose>Return the numerical representation of the last error.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>unsigned int</type><methodname>fann_get_errno</methodname>
-      <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Returns the numerical representation of the last error. The error codes are defined
-      in <!-- What do I put this in??? -->fann_errno.h<!-- /confusion -->.
-     </para>
-    </refsect1>
-   </refentry>
+#include "fann.h"
 
-   <refentry id="api.fann_get_errstr">
-    <refnamediv>
-     <refname>fann_get_errstr</refname>
-     <refpurpose>Return the last error.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>char *</type><methodname>fann_get_errstr</methodname>
-      <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Returns the last error.
-     </para>
-     <para>
-      Note: This will reset the network's error- any subsequent calls to
-      <function>fann_get_errno</function> or <function>fann_get_errstr</function>
-      will yield 0 and NULL, respectively.
-     </para>
-    </refsect1>
-   </refentry>
+int main()
+{
+	fann_type *calc_out;
+	unsigned int i;
+	int ret = 0;
 
-   <refentry id="api.fann_reset_errno">
-    <refnamediv>
-     <refname>fann_reset_errno</refname>
-     <refpurpose>Reset the last error number.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_reset_errno</methodname>
-      <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Reset the last error number.
-     </para>
-    </refsect1>
-   </refentry>
+	struct fann *ann;
+	struct fann_train_data *data;
 
-   <refentry id="api.fann_reset_errstr">
-    <refnamediv>
-     <refname>fann_reset_errstr</refname>
-     <refpurpose>Reset the last error string.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_reset_errstr</methodname>
-      <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Reset the last error string.
-     </para>
-    </refsect1>
-   </refentry>
+	printf("Creating network.\n");
 
-   <refentry id="api.fann_set_error_log">
-    <refnamediv>
-     <refname>fann_set_error_log</refname>
-     <refpurpose>Set the error log to a file descriptor.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_set_error_log</methodname>
-      <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-      <methodparam><type>FILE *</type><parameter>log</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Set the error log to <parameter>log</parameter>.
-     </para>
-     <para>
-      The error log defaults to stderr.
-     </para>
-    </refsect1>
-   </refentry>
+#ifdef FIXEDFANN
+	ann = fann_create_from_file("xor_fixed.net");
+#else
+	ann = fann_create_from_file("xor_float.net");
+#endif
+	
+	if(!ann){
+		printf("Error creating ann --- ABORTING.\n");
+		return -1;
+	}
+
+	printf("Testing network.\n");
+
+#ifdef FIXEDFANN
+	data = fann_read_train_from_file("xor_fixed.data");
+#else
+	data = fann_read_train_from_file("xor.data");
+#endif
+
+	for(i = 0; i < data->num_data; i++){
+		fann_reset_MSE(ann);
+		calc_out = fann_test(ann, data->input[i], data->output[i]);
+#ifdef FIXEDFANN
+		printf("XOR test (%d, %d) -> %d, should be %d, difference=%f\n",
+		data->input[i][0], data->input[i][1], *calc_out, data->output[i][0], (float)fann_abs(*calc_out - data->output[i][0])/fann_get_multiplier(ann));
+
+		if((float)fann_abs(*calc_out - data->output[i][0])/fann_get_multiplier(ann) > 0.1){
+			printf("Test failed\n");
+			ret = -1;
+		}
+#else
+		printf("XOR test (%f, %f) -> %f, should be %f, difference=%f\n",
+		data->input[i][0], data->input[i][1], *calc_out, data->output[i][0], (float)fann_abs(*calc_out - data->output[i][0]));
+#endif
+	}
+
+	printf("Cleaning up.\n");
+	fann_destroy_train(data);
+	fann_destroy(ann);
 
-   <refentry id="api.fann_print_error">
-    <refnamediv>
-     <refname>fann_print_error</refname>
-     <refpurpose>Print the last error to the error log.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_print_error_log</methodname>
-      <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      Prints the network's last error to the error log.
-     </para>
-     <para>
-      The error log defaults to stderr.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
+	return ret;
+}
+]]>
+	</programlisting>
+      </example>
+    </section>
+    <section id="fixed.precision">
+      <title id="fixed.precision.title">Precision of a Fixed Point ANN</title>
 
-  <section id="api.sec.internal">
-   <title>Internal Functions</title>
-   <section id="api.sec.create_destroy.internal">
-    <title>Creation And Destruction</title>
-    <refentry id="api.fann_allocate_structure">
-     <refnamediv>
-      <refname>fann_allocate_structure</refname>
-      <refpurpose>Allocate the core elements of a <type>struct fann</type>.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-       <methodsynopsis>
-        <type>struct fann *</type><methodname>fann_allocate_structure</methodname>
-        <methodparam><type>float</type><parameter>learning_rate</parameter></methodparam>
-        <methodparam><type>unsigned int</type><parameter>num_layers</parameter></methodparam>
-       </methodsynopsis>
       <para>
-       <function>fann_allocate_structure</function> is used internally to create a
-       <type>struct fann</type>.
+	The fixed point ANN is not as precise as a floating point ANN; furthermore, it approximates the sigmoid function by a stepwise linear function.
+	Therefore, it is always a good idea to test the fixed point ANN after loading it from a file. This can be done by calculating the mean square error
+	as described <link linkend="example.calc_mse">earlier</link>. There is, however, one problem with this approach: the training data stored in the file
+	is in floating point format. To overcome this problem, the data can be saved in a fixed point format from within the floating point program, using
+	the function <link linkend="api.fann_save_train_to_fixed"><function>fann_save_train_to_fixed</function></link>. Please note that this function takes
+	the decimal point as an argument, meaning that the decimal point should first be obtained as the return value of the
+	<link linkend="api.fann_save_to_fixed"><function>fann_save_to_fixed</function></link> function.
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-
-   <section id="api.sec.io.internal">
-    <title>Input/Output</title>
-    <refentry id="api.fann_save_internal">
-     <refnamediv>
-     <refname>fann_save_internal</refname>
-      <refpurpose>Save an ANN to a file.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-       <methodsynopsis>
-        <type>int</type><methodname>fann_save_internal</methodname>
-        <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-        <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>save_as_fixed</parameter></methodparam>
-       </methodsynopsis>
-      <para>
-       <function>fann_save_internal_fd</function> is used internally to save an ANN to a file.
-     </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_save_internal_fd">
-     <refnamediv>
-     <refname>fann_save_internal_fd</refname>
-      <refpurpose>Save an ANN to a file descriptor.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-       <methodsynopsis>
-        <type>int</type><methodname>fann_save_internal_fd</methodname>
-        <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-        <methodparam><type>FILE *</type><parameter>conf</parameter></methodparam>
-        <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>save_as_fixed</parameter></methodparam>
-       </methodsynopsis>
+    </section>
+  </chapter>
+  <chapter id="theory">
+    <title id="theory.title">Neural Network Theory</title>
+    <para>
+      This section will briefly explain the theory of neural networks (hereafter known as NN) and artificial neural
+      networks (hereafter known as ANN). For a more in depth explanation of these concepts please consult the
+      literature; [<xref linkend="bib.hassoun_1995" endterm="bib.hassoun_1995.abbrev" />] has good coverage of most
+      concepts of ANN and [<xref linkend="bib.hertz_1991" endterm="bib.hertz_1991.abbrev" />] describes the mathematics
+      of ANN very thoroughly, while [<xref linkend="bib.anderson_1995" endterm="bib.anderson_1995.abbrev" />] has a
+      more psychological and physiological approach to NN and ANN. For the pragmatic reader, I (SN) can recommend
+      [<xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev" />], which has a short and easily
+      understandable introduction to NN and ANN.
+    </para>
+    <section id="theory.neural_networks">
+      <title id="theory.neural_networks.title">Neural Networks</title>
       <para>
-       <function>fann_save_internal_fd</function> is used internally to save an ANN to a location pointed to by
-       <parameter>conf</parameter>. <parameter>configuration_file</parameter> is the name of the file, used only
-       for debugging purposes.
-     </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_create_from_fd">
-     <refnamediv>
-     <refname>fann_create_from_fd</refname>
-      <refpurpose>Load an ANN from a file descriptor.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-       <methodsynopsis>
-        <type>struct fann *</type><methodname>fann_create_from_fd</methodname>
-        <methodparam><type>FILE *</type><parameter>conf</parameter></methodparam>
-        <methodparam><type>const char *</type><parameter>configuration_file</parameter></methodparam>
-       </methodsynopsis>
+        The human brain is a highly complicated machine capable of solving very complex problems. Although we have
+        a good understanding of some of the basic operations that drive the brain, we are still far from understanding
+        everything there is to know about the brain.
+      </para>
       <para>
-       <function>fann_create_from_fd</function> will load an ANN from a file descriptor.
+        In order to understand ANN, you will need a basic knowledge of how the internals of the brain work.
+	The brain is part of the central nervous system and consists of a very large NN. The NN is actually quite
+	complicated, so, in order to simplify the explanation, the following discussion is restricted to the details
+	needed to understand ANN.
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-
-   <section id="api.sec.train_data.internal">
-    <title>Training Data</title>
-
-    <refentry id="api.fann_save_train_internal">
-     <refnamediv>
-      <refname>fann_save_train_internal</refname>
-      <refpurpose>Save training data to a file.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_save_train_internal</methodname>
-       <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-       <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>save_as_fixed</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>decimal_point</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       Saves the data in <parameter>data</parameter> to <parameter>filename</parameter>.
-       <parameter>save_as_fixed</parameter> is either TRUE or FALSE.
-       <parameter>decimal_point</parameter> tells FANN where the decimal point may be if using
-       fixed point math. (Right?)
+        The NN is a network of connected neurons. The center of the neuron is called the nucleus. The
+	nucleus is connected to other nuclei by means of the dendrites and the axon. This connection is called a
+	synaptic connection.
       </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_save_train_internal_fd">
-     <refnamediv>
-      <refname>fann_save_train_internal_fd</refname>
-      <refpurpose>Save training data to a file descriptor.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_save_train_internal_fd</methodname>
-       <methodparam><type>struct fann_train_data *</type><parameter>data</parameter></methodparam>
-       <methodparam><type>FILE *</type><parameter>file</parameter></methodparam>
-       <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>save_as_fixed</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>decimal_point</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       Saves the data in <parameter>data</parameter> to <parameter>file</parameter>.
-       <parameter>save_as_fixed</parameter> is either TRUE or FALSE.
-       <parameter>decimal_point</parameter> tells FANN where the decimal point may be if using
-       fixed point math. (Right?)
+        The neuron can fire electric pulses through its synaptic connections, which are received at the dendrites of
+        other neurons.
       </para>
       <para>
-       <parameter>filename</parameter> is used for debugging output only.
+        When a neuron receives enough electric pulses through its dendrites, it activates and fires a pulse through
+	its axon, which is then received by other neurons. In this way information can propagate through the NN. The
+	synaptic connections change throughout the lifetime of a neuron and the amount of incoming pulses needed to
+	activate a neuron (the threshold) also changes. This behavior allows the NN to learn.
       </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_read_train_from_fd">
-     <refnamediv>
-      <refname>fann_read_train_from_fd</refname>
-      <refpurpose>Read training data from a file descriptor.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>struct fann_train_data *</type><methodname>fann_read_train_from_file</methodname>
-       <methodparam><type>FILE *</type><parameter>file</parameter></methodparam>
-       <methodparam><type>char *</type><parameter>filename</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       <function>fann_read_train_from_file</function> will load training data from the file
-       descriptor <parameter>file</parameter>.
+        The human brain consists of around 10^11 neurons, which are highly interconnected with around 10^15
+        connections [<xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev" />]. These neurons
+	activate in parallel in response to internal and external stimuli. The brain is connected to the rest of the
+	nervous system, which allows it to receive information by means of the five senses and also allows it to
+	control the muscles.
       </para>
+    </section>
+    <section id="theory.artificial_neural_networks">
+      <title id="theory.artificial_neural_networks.title">Artificial Neural Networks</title>
       <para>
-       <parameter>filename</parameter> is used for debugging output only.
+        It is not possible (at the moment) to make an artificial brain, but it is possible to make simplified
+        artificial neurons and artificial neural networks. These ANNs can be made in many different ways and can try to
+        mimic the brain in many different ways.
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-
-   <section id="api.sec.io.errors">
-    <title>Error Handling</title>
-
-    <refentry id="api.fann_error">
-     <refnamediv>
-      <refname>fann_error</refname>
-      <refpurpose>Throw an internal error.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_error</methodname>
-       <methodparam><type>struct fann_error *</type><parameter>errdat</parameter></methodparam>
-       <methodparam><type>unsigned int</type><parameter>errno</parameter></methodparam>
-       <methodparam><parameter>...</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       This will set the network's error to correspond to <parameter>errno</parameter>.
-       The variable arguments depend (both in type and quantity) on <parameter>errno</parameter>.
-       Possible <parameter>errno</parameter> values are defined in fann_errno.h.
+        ANNs are not intelligent, but they are good for recognizing patterns and making simple rules for complex
+        problems. They also have excellent training capabilities, which is why they are often used in artificial
+        intelligence research.
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-
-   <section id="api.sec.options.internal">
-    <title>Options</title>
-
-    <refentry id="api.fann_update_stepwise_hidden">
-     <refnamediv>
-      <refname>fann_update_stepwise_hidden</refname>
-      <refpurpose>Adjust the stepwise function in the hidden layers.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_update_stepwise_hidden</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       Update the stepwise function in the hidden layers of <parameter>ann</parameter>.
+        ANNs are good at generalizing from a set of training data. For example, an ANN given data about a set of
+	animals, together with a fact telling whether or not each is a mammal, is able to predict whether an animal
+	outside the original set is a mammal from its data. This is a very desirable feature of ANNs, because you do
+	not need to know the characteristics defining a mammal; the ANN will find out by itself.
       </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_update_stepwise_output">
-     <refnamediv>
-      <refname>fann_update_stepwise_output</refname>
-      <refpurpose>Adjust the stepwise functions in the output layers.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_update_stepwise_output</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
+    </section>
+    <section id="theory.training">
+      <title id="theory.training.title">Training an ANN</title>
       <para>
-       Update the stepwise function in the output layers of <parameter>ann</parameter>.
+        When training an ANN with a set of input and output data, we wish to adjust the weights in the ANN to make
+	the ANN give the same outputs as seen in the training data. On the other hand, we do not want to make the ANN
+	too specific, making it give precise results for the training data, but incorrect results for all other data.
+	When this happens, we say that the ANN has been over-fitted.
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-  </section>
-
-  <section id="api.sec.deprecated">
-   <title>Deprecated Functions</title>
-
-   <section id="api.sec.error.deprecated">
-    <title>Error Handling</title>
-
-    <refentry id="api.fann_get_error">
-     <refnamediv>
-      <refname>fann_get_error</refname>
-      <refpurpose>Return the mean square error of an ANN.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_error</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       This function is deprecated and will be removed in a future version. Use
-       <link linkend="api.fann_get_MSE"><function>fann_get_MSE</function></link> instead.
+        The training process can be seen as an optimization problem, where we wish to minimize the mean square
+	error of the entire set of training data. This problem can be solved in many different ways, ranging from
+	standard optimization heuristics like simulated annealing, through more specialized optimization techniques like
+	genetic algorithms, to gradient descent algorithms like backpropagation.
       </para>
-     </refsect1>
-    </refentry>
-
-    <refentry id="api.fann_reset_error">
-     <refnamediv>
-      <refname>fann_get_error</refname>
-      <refpurpose>Reset the mean square error of an ANN.</refpurpose>
-     </refnamediv>
-     <refsect1>
-      <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_reset_error</methodname>
-       <methodparam><type>struct fann *</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
       <para>
-       This function is deprecated and will be removed in a future version. Use
-       <link linkend="api.fann_reset_MSE"><function>fann_reset_MSE</function></link> instead.
+        The most commonly used algorithm is the backpropagation algorithm, but this algorithm has some limitations
+	concerning the extent of adjustment to the weights in each iteration. This problem has been solved in more
+	advanced algorithms like RPROP [<xref linkend="bib.riedmiller_1993" endterm="bib.riedmiller_1993.abbrev" />]
+	and quickprop [<xref linkend="bib.fahlman_1988" endterm="bib.fahlman_1988.abbrev" />].
       </para>
-     </refsect1>
-    </refentry>
-   </section>
-  </section>
- </chapter>
-
- <chapter id="php">
-  <title>PHP Extension</title>
-  <para>
-   These functions allow you to interact with the FANN library from PHP.
-  </para>
-  <para>
-   This extension requires the
-   <ulink url="http://fann.sf.net/">FANN</ulink> library,
-   version 1.0.6 or later.
-  </para>
-  <para>
-   The following activation functions are supported:
-   <itemizedlist>
-    <listitem><simpara>FANN_SIGMOID</simpara></listitem>
-    <listitem><simpara>FANN_THRESHOLD</simpara></listitem>
-    <listitem><simpara>FANN_SIGMOID_STEPWISE</simpara></listitem>
-   </itemizedlist>
-  </para>
-
-  <section id="php.api">
-   <title>API Reference</title>
-   <refentry id="function.fann_create">
-    <refnamediv>
-     <refname>fann_create</refname>
-     <refpurpose>Creates an artificial neural network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>mixed</type><methodname>fann_create</methodname>
-       <methodparam><type>array</type><parameter>data</parameter></methodparam>
-       <methodparam choice="opt"><type>float</type><parameter>connection_rate</parameter></methodparam>
-       <methodparam choice="opt"><type>float</type><parameter>learning_rate</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_create</function> will create an artificial neural
-      network using the data given.
-     </para>
-     <para>
-      If the first parameter is an array, <function>fann_create</function>
-      will use the data and structure of the array, as well as
-      <parameter>connection_rate</parameter> and
-      <parameter>learning_rate</parameter>.
-     </para>
-     <para>
-      If <function>fann_create</function> is called with a sole string argument,
-      it will attempt to load an ANN created with <function>fann_save</function>
-      from the file at <parameter>filename</parameter>.
-     </para>
-     <para>
-      <function>fann_create</function> will return the artificial neural network
-      on success, or FALSE if it fails.
-     </para>
-     <para>
-      <example>
-       <title><function>fann_create</function> from scratch</title>
-       <programlisting role="php">
+    </section>
+  </chapter>
+  <chapter id="api">
+    <title id="api.title">API Reference</title>
+    <para>This is a list of all functions and structures in FANN.</para>
+    <section id="api.sec.create_destroy">
+      <title id="api.sec.create_destroy.title">Creation and Destruction</title>
+      <refentry id="api.fann_create">
+        <refnamediv>
+          <refname>fann_create</refname>
+          <refpurpose>Create a new artificial neural network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann *</type>
+            <methodname>fann_create</methodname>
+            <methodparam>
+              <type>float</type>
+              <parameter>connection_rate</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>learning_rate</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>num_layers</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>...</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_create</function> will create a new artificial neural network and return a pointer to it.
+	    The variable arguments following <parameter>num_layers</parameter> give the number of neurons in each
+	    layer, starting with the input layer and ending with the output layer.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_create_array">
+        <refnamediv>
+          <refname>fann_create_array</refname>
+          <refpurpose>Create a new artificial neural network from an array of layer sizes.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann *</type>
+            <methodname>fann_create_array</methodname>
+            <methodparam>
+              <type>float</type>
+              <parameter>connection_rate</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>learning_rate</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>num_layers</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int *</type>
+              <parameter>neurons_per_layer</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_create_array</function> will create a new artificial neural network, and return a pointer to
+	    it. It is the same as <function>fann_create</function>, only it accepts an array as its final parameter
+	    instead of variable arguments.
+	  </para>
+          <para>This function appears in FANN >= 1.0.5.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_destroy">
+        <refnamediv>
+          <refname>fann_destroy</refname>
+          <refpurpose>Destroy an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_destroy</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_destroy</function> will destroy an artificial neural network, properly freeing all associated
+	    memory.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_run">
+        <refnamediv>
+          <refname>fann_run</refname>
+          <refpurpose>Run an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>fann_type *</type>
+            <methodname>fann_run</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type *</type>
+              <parameter>input</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    <function>fann_run</function> will run <parameter>input</parameter> through <parameter>ann</parameter>,
+	    returning an array of outputs, the number of which is equal to the number of neurons in the output
+	    layer.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_randomize_weights">
+        <refnamediv>
+          <refname>fann_randomize_weights</refname>
+          <refpurpose>Give each connection a random weight.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_randomize_weights</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type</type>
+              <parameter>min_weight</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type</type>
+              <parameter>max_weight</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Gives each connection weight in <parameter>ann</parameter> a random value between
+	    <parameter>min_weight</parameter> and <parameter>max_weight</parameter>, effectively resetting the network.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.io">
+      <title id="api.sec.io.title">Input/Output</title>
+      <refentry id="api.fann_save">
+        <refnamediv>
+          <refname>fann_save</refname>
+          <refpurpose>Save an ANN to a file.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_save</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>const char *</type>
+              <parameter>configuration_file</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_save</function> will attempt to save <parameter>ann</parameter> to the file located at 
+            <parameter>configuration_file</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_save_to_fixed">
+        <refnamediv>
+          <refname>fann_save_to_fixed</refname>
+          <refpurpose>Save an ANN to a fixed-point file.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_save_to_fixed</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>const char *</type>
+              <parameter>configuration_file</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_save_to_fixed</function> will attempt to save <parameter>ann</parameter> to the file located at
+	    <parameter>configuration_file</parameter> as a fixed-point network.
+	  </para>
+	  <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_create_from_file">
+        <refnamediv>
+          <refname>fann_create_from_file</refname>
+          <refpurpose>Load an ANN from a file.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann *</type>
+            <methodname>fann_create_from_file</methodname>
+            <methodparam>
+              <type>const char *</type>
+              <parameter>configuration_file</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_create_from_file</function> will attempt to load an artificial neural network from a file.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.train_algo">
+      <title id="api.sec.train_algo.title">Training</title>
+      <refentry id="api.fann_train">
+        <refnamediv>
+          <refname>fann_train</refname>
+          <refpurpose>Train an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_train</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type *</type>
+              <parameter>input</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type *</type>
+              <parameter>output</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    <function>fann_train</function> will train one iteration with a set of inputs, and a set of desired
+	    outputs.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_test">
+        <refnamediv>
+          <refname>fann_test</refname>
+          <refpurpose>Tests an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>fann_type *</type>
+            <methodname>fann_test</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type *</type>
+              <parameter>input</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type *</type>
+              <parameter>desired_output</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Test with a set of inputs, and a set of desired outputs. This operation updates the mean square error,
+            but does not change the network in any way.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_MSE">
+        <refnamediv>
+          <refname>fann_get_MSE</refname>
+          <refpurpose>Return the mean square error of an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_MSE</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Reads the mean square error from the network.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
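+          <para>
+	    The value is the mean of the squared differences between the desired and the actual
+	    outputs, accumulated since the error was last reset. As a stand-alone sketch of the
+	    computation (not the library's internal code):
+	  </para>
+          <programlisting>
+/* Sketch: mean square error over n output values (not FANN's internal code). */
+float mean_square_error(const float *desired, const float *actual, unsigned int n)
+{
+    float sum = 0.0f;
+    unsigned int i;
+    for (i = 0; i != n; i++) {
+        float diff = desired[i] - actual[i];
+        sum += diff * diff;  /* accumulate the squared error */
+    }
+    return sum / (float) n;
+}
+          </programlisting>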
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_reset_MSE">
+        <refnamediv>
+          <refname>fann_reset_MSE</refname>
+          <refpurpose>Reset the mean square error of an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_reset_MSE</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Resets the mean square error of the network.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.train_data">
+      <title id="api.sec.train_data.title">Training Data</title>
+      <refentry id="api.fann_read_train_from_file">
+        <refnamediv>
+          <refname>fann_read_train_from_file</refname>
+          <refpurpose>Read training data from a file.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann_train_data *</type>
+            <methodname>fann_read_train_from_file</methodname>
+            <methodparam>
+              <type>char *</type>
+              <parameter>filename</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    <function>fann_read_train_from_file</function> will load training data from the file <parameter>filename</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_save_train">
+        <refnamediv>
+          <refname>fann_save_train</refname>
+          <refpurpose>Save training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_save_train</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>train_data</parameter>
+            </methodparam>
+            <methodparam>
+              <type>char *</type>
+              <parameter>filename</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Save <parameter>train_data</parameter> to <parameter>filename</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_save_train_to_fixed">
+        <refnamediv>
+          <refname>fann_save_train_to_fixed</refname>
+          <refpurpose>Save training data as fixed point.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_save_train_to_fixed</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>train_data</parameter>
+            </methodparam>
+            <methodparam>
+              <type>char *</type>
+              <parameter>filename</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>decimal_point</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Save <parameter>train_data</parameter> as fixed point to <parameter>filename</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_destroy_train">
+        <refnamediv>
+          <refname>fann_destroy_train</refname>
+          <refpurpose>Destroy training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_destroy_train</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>train_data</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Destroy the training data stored in <parameter>train_data</parameter>, freeing the associated memory.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_train_on_data">
+        <refnamediv>
+          <refname>fann_train_on_data</refname>
+          <refpurpose>Train an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_train_on_data</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>max_epochs</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>epochs_between_reports</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>desired_error</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Trains <parameter>ann</parameter> using <parameter>data</parameter> until
+	    <parameter>desired_error</parameter> is reached, or until
+	    <parameter>max_epochs</parameter> is surpassed.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
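+          <para>
+	    Conceptually, training runs epoch by epoch and stops as soon as the reported error
+	    drops to <parameter>desired_error</parameter> or below, or when
+	    <parameter>max_epochs</parameter> epochs have been run. A schematic of this stopping
+	    rule (not FANN's internal code; <literal>error_per_epoch</literal> is a made-up
+	    stand-in for the error observed after each epoch):
+	  </para>
+          <programlisting>
+/* Schematic stopping rule (not FANN's internal code): train until the
+   error reaches desired_error, or until max_epochs is exhausted. */
+unsigned int epochs_needed(const float *error_per_epoch,
+                           unsigned int max_epochs, float desired_error)
+{
+    unsigned int epoch;
+    for (epoch = 0; epoch != max_epochs; epoch++) {
+        if (error_per_epoch[epoch] > desired_error) {
+            continue;  /* error still too high: train another epoch */
+        }
+        return epoch + 1;  /* desired error reached */
+    }
+    return max_epochs;     /* desired error never reached */
+}
+          </programlisting>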
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_train_on_data_callback">
+        <refnamediv>
+          <refname>fann_train_on_data_callback</refname>
+          <refpurpose>Train an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_train_on_data_callback</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>max_epochs</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>epochs_between_reports</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>desired_error</parameter>
+            </methodparam>
+            <methodparam>
+              <type>int</type>
+              <parameter>(*callback)(unsigned int epochs, float error)</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Trains <parameter>ann</parameter> using <parameter>data</parameter> until
+	    <parameter>desired_error</parameter> is reached, or until <parameter>max_epochs</parameter>
+	    is surpassed.
+	  </para>
+          <para>
+	    This function behaves identically to 
+            <link linkend="api.fann_train_on_data"><function>fann_train_on_data</function></link>, except that 
+	    <function>fann_train_on_data_callback</function> allows you to specify a function to be called every 
+	    <parameter>epochs_between_reports</parameter> epochs instead of using the default reporting mechanism.
+	  </para>
+          <para>This function appears in FANN >= 1.0.5.</para>
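+          <para>
+	    The callback receives the current epoch count and error. Returning -1 from the
+	    callback stops training early; any other value lets training continue. A minimal
+	    sketch (the error threshold is chosen arbitrarily for illustration):
+	  </para>
+          <programlisting>
+/* Example callback: report progress, and return -1 to stop training
+   once the error is small enough (threshold is arbitrary). */
+int print_callback(unsigned int epochs, float error)
+{
+    printf("Epoch %u: error %f\n", epochs, error);
+    if (error > 0.001f) {
+        return 0;   /* keep training */
+    }
+    return -1;      /* error low enough: stop training */
+}
+          </programlisting>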
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_train_on_file">
+        <refnamediv>
+          <refname>fann_train_on_file</refname>
+          <refpurpose>Train an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_train_on_file</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>char *</type>
+              <parameter>filename</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>max_epochs</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>epochs_between_reports</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>desired_error</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Trains <parameter>ann</parameter> using the data in <parameter>filename</parameter> until
+	    <parameter>desired_error</parameter> is reached, or until <parameter>max_epochs</parameter> is surpassed.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_train_on_file_callback">
+        <refnamediv>
+          <refname>fann_train_on_file_callback</refname>
+          <refpurpose>Train an ANN.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_train_on_file_callback</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>char *</type>
+              <parameter>filename</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>max_epochs</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>epochs_between_reports</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>desired_error</parameter>
+            </methodparam>
+            <methodparam>
+              <type>int</type>
+              <parameter>(*callback)(unsigned int epochs, float error)</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Trains <parameter>ann</parameter> using the data in <parameter>filename</parameter> until
+	    <parameter>desired_error</parameter> is reached, or until <parameter>max_epochs</parameter> is surpassed.
+	  </para>
+          <para>
+	    This function behaves identically to
+	    <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link>, except that 
+	    <function>fann_train_on_file_callback</function> allows you to specify a function to be called every 
+            <parameter>epochs_between_reports</parameter> epochs instead of using the default reporting mechanism.
+	  </para>
+          <para>This function appears in FANN >= 1.0.5.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_shuffle_train_data">
+        <refnamediv>
+          <refname>fann_shuffle_train_data</refname>
+          <refpurpose>Shuffle the training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_shuffle_train_data</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_shuffle_train_data</function> will randomize the order of the training data contained in 
+            <parameter>data</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_merge_train_data">
+        <refnamediv>
+          <refname>fann_merge_train_data</refname>
+          <refpurpose>Merge two sets of training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann_train_data *</type>
+            <methodname>fann_merge_train_data</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data1</parameter>
+            </methodparam>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data2</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_merge_train_data</function> will return a single set of training data which contains all data
+            from <parameter>data1</parameter> and <parameter>data2</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_duplicate_train_data">
+        <refnamediv>
+          <refname>fann_duplicate_train_data</refname>
+          <refpurpose>Copies a set of training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>struct fann_train_data *</type>
+            <methodname>fann_duplicate_train_data</methodname>
+            <methodparam>
+              <type>struct fann_train_data *</type>
+              <parameter>data</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    <function>fann_duplicate_train_data</function> will return a copy of <parameter>data</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.options">
+      <title id="api.sec.options.title">Options</title>
+      <refentry id="api.fann_get_learning_rate">
+        <refnamediv>
+          <refname>fann_get_learning_rate</refname>
+          <refpurpose>Retrieve learning rate from a network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_learning_rate</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the learning rate for a given network.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_learning_rate">
+        <refnamediv>
+          <refname>fann_set_learning_rate</refname>
+          <refpurpose>Set a network's learning rate.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_learning_rate</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>learning_rate</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Set the learning rate of a network.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_activation_function_hidden">
+        <refnamediv>
+          <refname>fann_get_activation_function_hidden</refname>
+          <refpurpose>Get the activation function of the hidden layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_activation_function_hidden</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the activation function of the hidden layer.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_activation_function_hidden">
+        <refnamediv>
+          <refname>fann_set_activation_function_hidden</refname>
+          <refpurpose>Set the activation function for the hidden layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_function_hidden</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>activation_function</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Set the activation function of the hidden layer to 
+            <parameter>activation_function</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_activation_function_output">
+        <refnamediv>
+          <refname>fann_get_activation_function_output</refname>
+          <refpurpose>Get the activation function of the output layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_activation_function_output</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the activation function of the output layer.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_activation_function_output">
+        <refnamediv>
+          <refname>fann_set_activation_function_output</refname>
+          <refpurpose>Set the activation function for the output layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_function_output</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>unsigned int</type>
+              <parameter>activation_function</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Set the activation function of the output layer to 
+	    <parameter>activation_function</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_activation_hidden_steepness">
+        <refnamediv>
+          <refname>fann_get_activation_hidden_steepness</refname>
+          <refpurpose>Retrieve the steepness of the activation function of the hidden layers.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>fann_type</type>
+            <methodname>fann_get_activation_hidden_steepness</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the steepness of the activation function of the hidden layers.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_activation_hidden_steepness">
+        <refnamediv>
+          <refname>fann_set_activation_hidden_steepness</refname>
+          <refpurpose>Set the steepness of the activation function of the hidden layers.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_hidden_steepness</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type</type>
+              <parameter>steepness</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Set the steepness of the activation function of the hidden layers of 
+	    <parameter>ann</parameter> to 
+	    <parameter>steepness</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_activation_output_steepness">
+        <refnamediv>
+          <refname>fann_get_activation_output_steepness</refname>
+          <refpurpose>Retrieve the steepness of the activation function of the output layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>fann_type</type>
+            <methodname>fann_get_activation_output_steepness</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the steepness of the activation function of the output layer.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_activation_output_steepness">
+        <refnamediv>
+          <refname>fann_set_activation_output_steepness</refname>
+          <refpurpose>Set the steepness of the activation function of the output layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_output_steepness</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>fann_type</type>
+              <parameter>steepness</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Set the steepness of the activation function of the output layer of 
+            <parameter>ann</parameter> to <parameter>steepness</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_num_input">
+        <refnamediv>
+          <refname>fann_get_num_input</refname>
+          <refpurpose>Get the number of neurons in the input layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_num_input</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Return the number of neurons in the input layer of 
+          <parameter>ann</parameter>.</para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_num_output">
+        <refnamediv>
+          <refname>fann_get_num_output</refname>
+          <refpurpose>Get number of neurons in the output layer.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_num_output</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Return the number of neurons in the output layer of 
+            <parameter>ann</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_total_neurons">
+        <refnamediv>
+          <refname>fann_get_total_neurons</refname>
+          <refpurpose>Get the total number of neurons in a network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_total_neurons</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Return the total number of neurons in 
+	    <parameter>ann</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_total_connections">
+        <refnamediv>
+          <refname>fann_get_total_connections</refname>
+          <refpurpose>Get the total number of connections in a network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_total_connections</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Return the total number of connections in <parameter>ann</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_decimal_point">
+        <refnamediv>
+          <refname>fann_get_decimal_point</refname>
+          <refpurpose>Get the position of the decimal point.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_decimal_point</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Return the position of the decimal point in <parameter>ann</parameter>.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_multiplier">
+        <refnamediv>
+          <refname>fann_get_multiplier</refname>
+          <refpurpose>Get the multiplier.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_multiplier</methodname>
+            <methodparam>
+              <type>struct fann *</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Return the multiplier that fixed point data in <parameter>ann</parameter> is multiplied with.
+	  </para>
+          <para>This function appears in FANN >= 1.0.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.errors">
+      <title id="api.sec.errors.title">Error Handling</title>
+      <refentry id="api.fann_get_errno">
+        <refnamediv>
+          <refname>fann_get_errno</refname>
+          <refpurpose>Return the numerical representation of the last error.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>unsigned int</type>
+            <methodname>fann_get_errno</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Returns the numerical representation of the last error. The error codes are defined in 
+            <filename>fann_errno.h</filename>.
+	  </para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_get_errstr">
+        <refnamediv>
+          <refname>fann_get_errstr</refname>
+          <refpurpose>Return the last error.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>char *</type>
+            <methodname>fann_get_errstr</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Returns the last error.</para>
+          <para>
+	    Note: this will reset the network's error; any subsequent calls to <function>fann_get_errno</function> or
+	    <function>fann_get_errstr</function> will yield 0 and NULL, respectively.
+	  </para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_reset_errno">
+        <refnamediv>
+          <refname>fann_reset_errno</refname>
+          <refpurpose>Reset the last error number.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_reset_errno</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Reset the last error number.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_reset_errstr">
+        <refnamediv>
+          <refname>fann_reset_errstr</refname>
+          <refpurpose>Reset the last error string.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_reset_errstr</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Reset the last error string.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_set_error_log">
+        <refnamediv>
+          <refname>fann_set_error_log</refname>
+          <refpurpose>Set the error log to a file descriptor.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_error_log</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+            <methodparam>
+              <type>FILE *</type>
+              <parameter>log</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+	    Set the error log to <parameter>log</parameter>.
+	  </para>
+          <para>The error log defaults to stderr.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="api.fann_print_error">
+        <refnamediv>
+          <refname>fann_print_error</refname>
+          <refpurpose>Print the last error to the error log.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_print_error</methodname>
+            <methodparam>
+              <type>struct fann_error *</type>
+              <parameter>errdat</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>Prints the last error to the error log.</para>
+          <para>The error log defaults to stderr.</para>
+          <para>This function appears in FANN >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.struct">
+      <title id="api.sec.struct.title">Data Structures</title>
+      <refentry id="api.struct.fann">
+        <refnamediv>
+          <refname>struct fann</refname>
+          <refpurpose>Describes a neural network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <para>
+	    This structure is subject to change at any time. If you need to use the values contained herein, please
+	    see the <link linkend="api.sec.options">Options</link> functions. If these functions do not fulfill your
+	    needs, please open a feature request on our SourceForge
+	    <ulink url="http://www.sourceforge.net/projects/fann">project page</ulink>.
+	  </para>
+          <variablelist>
+            <title>Properties</title>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>errno_f</varname>
+              </term>
+              <listitem>
+                <para>The type of error that last occurred.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>FILE *</type>
+                <varname>error_log</varname>
+              </term>
+              <listitem>
+                <para>Where to log error messages.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>char *</type>
+                <varname>errstr</varname>
+              </term>
+              <listitem>
+                <para>A string representation of the last error.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>float</type>
+                <varname>learning_rate</varname>
+              </term>
+              <listitem>
+                <para>The learning rate of the network.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>float</type>
+                <varname>connection_rate</varname>
+              </term>
+              <listitem>
+                <para>The connection rate of the network. Between 0 and 1, 1 meaning fully connected.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>struct fann_layer *</type>
+                <varname>first_layer</varname>
+              </term>
+              <listitem>
+                <para>
+		  Pointer to the first layer (input layer) in an array of all the layers, including the input and
+                  output layers.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>struct fann_layer *</type>
+                <varname>last_layer</varname>
+              </term>
+              <listitem>
+                <para>
+		  Pointer to the layer past the last layer in an array of all the layers, including the input and
+                  output layers.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>total_neurons</varname>
+              </term>
+              <listitem>
+                <para>
+		  Total number of neurons. Very useful, because the actual neurons are allocated in one long
+                  array.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_input</varname>
+              </term>
+              <listitem>
+                <para>Number of input neurons (not counting the bias).</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_output</varname>
+              </term>
+              <listitem>
+                <para>Number of output neurons (not counting the bias).</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>train_deltas</varname>
+              </term>
+              <listitem>
+                <para>
+		  Used to contain the error deltas used during training. It is allocated during the first training
+                  session, which means that if we do not train, it is never allocated.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>activation_function_output</varname>
+              </term>
+              <listitem>
+                <para>Used to choose which activation function to use in the output layer.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>activation_function_hidden</varname>
+              </term>
+              <listitem>
+                <para>Used to choose which activation function to use in the hidden layers.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>activation_hidden_steepness</varname>
+              </term>
+              <listitem>
+                <para>The steepness of the activation function used in the hidden layers.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>activation_output_steepness</varname>
+              </term>
+              <listitem>
+                <para>The steepness of the activation function used in the output layer.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>decimal_point</varname>
+              </term>
+              <listitem>
+                <para>
+                  <emphasis>Fixed point only.</emphasis> The decimal point, used for shifting the fixed point in
+                  fixed point integer operations.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>multiplier</varname>
+              </term>
+              <listitem>
+                <para>
+                  <emphasis>Fixed point only.</emphasis> The multiplier, used for multiplying the fixed point in
+                  fixed point integer operations. Only used in special cases, since the decimal_point is much faster.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>activation_hidden_results</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of six members used by some activation functions to hold results for the hidden
+                  layer(s).
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>activation_hidden_values</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of six members used by some activation functions to hold values for the hidden
+                  layer(s).
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>activation_output_results</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of six members used by some activation functions to hold results for the output
+                  layer.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>activation_output_values</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of six members used by some activation functions to hold values for the output
+                  layer.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>total_connections</varname>
+              </term>
+              <listitem>
+                <para>
+		  Total number of connections. Very useful, because the actual connections are allocated in one
+                  long array.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>output</varname>
+              </term>
+              <listitem>
+                <para>Used to store the output of the network.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_errors</varname>
+              </term>
+              <listitem>
+                <para>The number of training samples used to calculate the error value.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>float</type>
+                <varname>error_value</varname>
+              </term>
+              <listitem>
+                <para>The total error value. The mean square error is error_value/num_errors.</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+      <refentry id="api.struct.fann_train_data">
+        <refnamediv>
+          <refname>struct fann_train_data</refname>
+          <refpurpose>Describes a set of training data.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    This structure is subject to change at any time. If you need to use the values contained herein, please
+	    see the <link linkend="api.sec.train_data">Training Data</link> functions. If these functions do not
+	    fulfill your needs, please open a feature request on our SourceForge
+	    <ulink url="http://www.sourceforge.net/projects/fann">project page</ulink>.
+	  </para>
+          <variablelist>
+            <title>Properties</title>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>errno_f</varname>
+              </term>
+              <listitem>
+                <para>The type of error that last occurred.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>FILE *</type>
+                <varname>error_log</varname>
+              </term>
+              <listitem>
+                <para>Where to log error messages.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>char *</type>
+                <varname>errstr</varname>
+              </term>
+              <listitem>
+                <para>A string representation of the last error.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_data</varname>
+              </term>
+              <listitem>
+                <para>The number of sets of data in the array.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_input</varname>
+              </term>
+              <listitem>
+                <para>The number of inputs per set of data.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_output</varname>
+              </term>
+              <listitem>
+                <para>The number of outputs per set of data.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type **</type>
+                <varname>input</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of <varname>num_data</varname> elements, each of which contains an array of
+		  <varname>num_input</varname> elements representing one item of input data.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type **</type>
+                <varname>output</varname>
+              </term>
+              <listitem>
+                <para>
+		  An array of <varname>num_data</varname> elements, each of which contains an array of
+		  <varname>num_output</varname> elements representing one item of output data.
+		</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+      <refentry id="api.struct.fann_error">
+        <refnamediv>
+          <refname>struct fann_error</refname>
+          <refpurpose>Describes an error.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    This structure is subject to change at any time. If you need to use the values contained herein, please
+	    see the <link linkend="api.sec.errors">Error Handling</link> functions. If these functions do not
+	    fulfill your needs, please open a feature request on our SourceForge
+	    <ulink url="http://www.sourceforge.net/projects/fann">project page</ulink>.
+	  </para>
+	  <para>
+	    You may notice that this structure is identical to the first three properties of the
+	    <link linkend="api.struct.fann"><type>fann</type></link> and
+	    <link linkend="api.struct.fann_train_data"><type>fann_train_data</type></link> structures. This is so you can cast
+	    each of those structures to <type>struct fann_error *</type> when calling the
+	    <link linkend="api.sec.errors">Error Handling</link> functions.
+	  </para>
+          <variablelist>
+            <title>Properties</title>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>errno_f</varname>
+              </term>
+              <listitem>
+                <para>The type of error that last occurred.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>FILE *</type>
+                <varname>error_log</varname>
+              </term>
+              <listitem>
+                <para>Where to log error messages.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>char *</type>
+                <varname>errstr</varname>
+              </term>
+              <listitem>
+                <para>A string representation of the last error.</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+      <refentry id="api.struct.fann_neuron">
+        <refnamediv>
+          <refname>struct fann_neuron</refname>
+          <refpurpose>Describes an individual neuron.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    This structure is subject to change at any time. If you require direct
+	    access to the contents of this structure, you may want to consider contacting
+	    the <ulink url="mailto:fann-general at lists.sourceforge.net">FANN development
+	    team</ulink>.
+	  </para>
+          <variablelist>
+            <title>Properties</title>
+            <varlistentry>
+              <term>
+                <type>fann_type *</type>
+                <varname>weights</varname>
+              </term>
+              <listitem>
+                <para>This property is not yet documented.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>struct fann_neuron **</type>
+                <varname>connected_neurons</varname>
+              </term>
+              <listitem>
+                <para>This property is not yet documented.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>unsigned int</type>
+                <varname>num_connections</varname>
+              </term>
+              <listitem>
+                <para>This property is not yet documented.</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>fann_type</type>
+                <varname>value</varname>
+              </term>
+              <listitem>
+                <para>This property is not yet documented.</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+      <refentry id="api.struct.fann_layer">
+        <refnamediv>
+          <refname>struct fann_layer</refname>
+          <refpurpose>Describes a layer in a network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    This structure is subject to change at any time. If you require direct
+	    access to the contents of this structure, you may want to consider contacting
+	    the <ulink url="mailto:fann-general at lists.sourceforge.net">FANN development
+	    team</ulink>.
+	  </para>
+          <variablelist>
+            <title>Properties</title>
+            <varlistentry>
+              <term>
+                <type>struct fann_neuron *</type>
+                <varname>first_neuron</varname>
+              </term>
+              <listitem>
+                <para>
+		  A pointer to the first neuron in the layer. When allocated, all the
+		  neurons in all the layers are actually in one long array; this makes
+		  it easy to clear all the neurons at once.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>
+                <type>struct fann_neuron *</type>
+                <varname>last_neuron</varname>
+              </term>
+              <listitem>
+                <para>
+		  A pointer to the neuron past the last neuron in the layer; the
+		  number of neurons in the layer is <varname>last_neuron</varname> -
+		  <varname>first_neuron</varname>.
+		</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.constants">
+      <title id="api.sec.constants.title">Constants</title>
+
+      <refentry id="api.sec.constants.activation">
+        <refnamediv>
+          <refname id="api.sec.constants.activation.title">Activation Function Constants</refname>
+          <refpurpose>Constants representing activation functions.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    These constants represent the activation functions available within the fann library.
+	    The list will grow over time, but probably not shrink.
+	  </para>
+          <variablelist>
+            <title>Constants</title>
+            <varlistentry>
+              <term>FANN_THRESHOLD</term>
+              <listitem>
+                <para>
+		  <emphasis>Execution only</emphasis> -
+		  Threshold activation function.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_SIGMOID</term>
+              <listitem>
+                <para>
+		  Sigmoid activation function. One of the most used activation functions.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_SIGMOID_STEPWISE</term>
+              <listitem>
+                <para>
+		  Stepwise linear approximation to sigmoid. Faster than sigmoid but a bit less precise.
+		</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+      <refentry id="api.sec.constants.error">
+        <refnamediv>
+          <refname id="api.sec.constants.error.title">Error Codes</refname>
+          <refpurpose>Constants representing errors.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+	  <para>
+	    These constants represent the various errors possible in fann, as
+	    defined by <filename>fann_errno.h</filename>.	    
+	  </para>
+          <variablelist>
+            <title>Constants</title>
+            <varlistentry>
+              <term>FANN_E_NO_ERROR</term>
+              <listitem>
+                <para>
+		  No error.
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_OPEN_CONFIG_R</term>
+              <listitem>
+                <para>
+		  Unable to open configuration file for reading
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_OPEN_CONFIG_W</term>
+              <listitem>
+                <para>
+		  Unable to open configuration file for writing
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_WRONG_CONFIG_VERSION</term>
+              <listitem>
+                <para>
+		  Wrong version of configuration file
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_READ_CONFIG</term>
+              <listitem>
+                <para>
+		  Error reading info from configuration file
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_READ_NEURON</term>
+              <listitem>
+                <para>
+		 Error reading neuron info from configuration file
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_READ_CONNECTIONS</term>
+              <listitem>
+                <para>
+		  Error reading connections from configuration file
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_WRONG_NUM_CONNECTIONS</term>
+              <listitem>
+                <para>
+		  Number of connections not equal to the number expected
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_OPEN_TD_W</term>
+              <listitem>
+                <para>
+		  Unable to open train data file for writing
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_OPEN_TD_R</term>
+              <listitem>
+                <para>
+		  Unable to open train data file for reading
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_READ_TD</term>
+              <listitem>
+                <para>
+		  Error reading training data from file
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_ALLOCATE_MEM</term>
+              <listitem>
+                <para>
+		  Unable to allocate memory
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_TRAIN_ACTIVATION</term>
+              <listitem>
+                <para>
+		  Unable to train with the selected activation function
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_CANT_USE_ACTIVATION</term>
+              <listitem>
+                <para>
+		  Unable to use the selected activation function
+		</para>
+              </listitem>
+            </varlistentry>
+            <varlistentry>
+              <term>FANN_E_TRAIN_DATA_MISMATCH</term>
+              <listitem>
+                <para>
+		  Irreconcilable differences between two fann_train_data structures
+		</para>
+              </listitem>
+            </varlistentry>
+          </variablelist>
+        </refsect1>
+      </refentry>
+    </section>
+    <section id="api.sec.internal">
+      <title id="api.sec.internal.title">Internal Functions</title>
+      <section id="api.sec.create_destroy.internal">
+        <title id="api.sec.create_destroy.internal.title">Creation And Destruction</title>
+        <refentry id="api.fann_allocate_structure">
+          <refnamediv>
+            <refname>fann_allocate_structure</refname>
+            <refpurpose>Allocate the core elements of a 
+            <type>struct fann</type>.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>struct fann *</type>
+              <methodname>fann_allocate_structure</methodname>
+              <methodparam>
+                <type>float</type>
+                <parameter>learning_rate</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>num_layers</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+            <function>fann_allocate_structure</function> is used internally to create a 
+            <type>struct fann</type>.</para>
+            <para>This function appears in FANN >= 1.0.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+      <section id="api.sec.io.internal">
+        <title id="api.sec.io.internal.title">Input/Output</title>
+        <refentry id="api.fann_save_internal">
+          <refnamediv>
+            <refname>fann_save_internal</refname>
+            <refpurpose>Save an ANN to a file.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>int</type>
+              <methodname>fann_save_internal</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+              <methodparam>
+                <type>const char *</type>
+                <parameter>configuration_file</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>save_as_fixed</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      <function>fann_save_internal</function> is used internally to save an ANN to a file.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_save_internal_fd">
+          <refnamediv>
+            <refname>fann_save_internal_fd</refname>
+            <refpurpose>Save an ANN to a file descriptor.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>int</type>
+              <methodname>fann_save_internal_fd</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+              <methodparam>
+                <type>FILE *</type>
+                <parameter>conf</parameter>
+              </methodparam>
+              <methodparam>
+                <type>const char *</type>
+                <parameter>configuration_file</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>save_as_fixed</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+              <function>fann_save_internal_fd</function> is used internally to save an ANN to a location pointed to by 
+              <parameter>conf</parameter>. <parameter>configuration_file</parameter> is the name of the file, used only
+	      for debugging purposes.
+	    </para>
+            <para>This function appears in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_create_from_fd">
+          <refnamediv>
+            <refname>fann_create_from_fd</refname>
+            <refpurpose>Load an ANN from a file descriptor.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>struct fann *</type>
+              <methodname>fann_create_from_fd</methodname>
+              <methodparam>
+                <type>FILE *</type>
+                <parameter>conf</parameter>
+              </methodparam>
+              <methodparam>
+                <type>const char *</type>
+                <parameter>configuration_file</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+              <function>fann_create_from_fd</function> will load an ANN from a file descriptor.
+	    </para>
+            <para>This function appears in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+      <section id="api.sec.train_data.internal">
+        <title id="api.sec.train_data.internal.title">Training Data</title>
+        <refentry id="api.fann_save_train_internal">
+          <refnamediv>
+            <refname>fann_save_train_internal</refname>
+            <refpurpose>Save training data to a file.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_save_train_internal</methodname>
+              <methodparam>
+                <type>struct fann_train_data *</type>
+                <parameter>data</parameter>
+              </methodparam>
+              <methodparam>
+                <type>char *</type>
+                <parameter>filename</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>save_as_fixed</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>decimal_point</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      Saves the data in <parameter>data</parameter> to <parameter>filename</parameter>.
+	      <parameter>save_as_fixed</parameter> is either TRUE or FALSE. <parameter>decimal_point</parameter> tells
+	      FANN where the decimal point is when using fixed point math.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_save_train_internal_fd">
+          <refnamediv>
+            <refname>fann_save_train_internal_fd</refname>
+            <refpurpose>Save training data to a file descriptor.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_save_train_internal_fd</methodname>
+              <methodparam>
+                <type>struct fann_train_data *</type>
+                <parameter>data</parameter>
+              </methodparam>
+              <methodparam>
+                <type>FILE *</type>
+                <parameter>file</parameter>
+              </methodparam>
+              <methodparam>
+                <type>char *</type>
+                <parameter>filename</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>save_as_fixed</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>decimal_point</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      Saves the data in <parameter>data</parameter> to <parameter>file</parameter>.
+	      <parameter>save_as_fixed</parameter> is either TRUE or FALSE. <parameter>decimal_point</parameter> tells
+	      FANN where the decimal point is when using fixed point math.
+            </para>
+            <para>
+	      <parameter>filename</parameter> is used for debugging output only.
+	    </para>
+            <para>This function appears in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_read_train_from_fd">
+          <refnamediv>
+            <refname>fann_read_train_from_fd</refname>
+            <refpurpose>Read training data from a file descriptor.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>struct fann_train_data *</type>
+              <methodname>fann_read_train_from_fd</methodname>
+              <methodparam>
+                <type>FILE *</type>
+                <parameter>file</parameter>
+              </methodparam>
+              <methodparam>
+                <type>char *</type>
+                <parameter>filename</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+              <function>fann_read_train_from_fd</function> will load training data from the file descriptor 
+              <parameter>file</parameter>.
+	    </para>
+            <para>
+              <parameter>filename</parameter> is used for debugging output only.
+	    </para>
+            <para>This function appears in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+      <section id="api.sec.io.errors">
+        <title id="api.sec.io.errors.title">Error Handling</title>
+        <refentry id="api.fann_error">
+          <refnamediv>
+            <refname>fann_error</refname>
+            <refpurpose>Throw an internal error.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_error</methodname>
+              <methodparam>
+                <type>struct fann_error *</type>
+                <parameter>errdat</parameter>
+              </methodparam>
+              <methodparam>
+                <type>unsigned int</type>
+                <parameter>errno</parameter>
+              </methodparam>
+              <methodparam>
+                <parameter>...</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      This will set the network's error to correspond to <parameter>errno</parameter>. The variable arguments
+	      depend (both in type and quantity) on <parameter>errno</parameter>. Possible <parameter>errno</parameter>
+	      values are defined in <filename>fann_errno.h</filename>.
+	    </para>
+            <para>This function appears in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+      <section id="api.sec.options.internal">
+        <title id="api.sec.options.internal.title">Options</title>
+        <refentry id="api.fann_update_stepwise_hidden">
+          <refnamediv>
+            <refname>fann_update_stepwise_hidden</refname>
+            <refpurpose>Adjust the stepwise function in the hidden layers.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_update_stepwise_hidden</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      Update the stepwise function in the hidden layers of <parameter>ann</parameter>.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_update_stepwise_output">
+          <refnamediv>
+            <refname>fann_update_stepwise_output</refname>
+            <refpurpose>Adjust the stepwise function in the output layer.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_update_stepwise_output</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      Update the stepwise function in the output layers of <parameter>ann</parameter>.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+    </section>
+    <section id="api.sec.deprecated">
+      <title id="api.sec.deprecated.title">Deprecated Functions</title>
+      <section id="api.sec.error.deprecated">
+        <title id="api.sec.error.deprecated.title">Error Handling</title>
+        <refentry id="api.fann_get_error">
+          <refnamediv>
+            <refname>fann_get_error</refname>
+            <refpurpose>Return the mean square error of an ANN.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>float</type>
+              <methodname>fann_get_error</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      This function is deprecated and will be removed in a future version. Use 
+              <link linkend="api.fann_get_MSE"><function>fann_get_MSE</function></link> instead.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0, but is deprecated in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+        <refentry id="api.fann_reset_error">
+          <refnamediv>
+            <refname>fann_reset_error</refname>
+            <refpurpose>Reset the mean square error of an ANN.</refpurpose>
+          </refnamediv>
+          <refsect1>
+            <title>Description</title>
+            <methodsynopsis>
+              <type>void</type>
+              <methodname>fann_reset_error</methodname>
+              <methodparam>
+                <type>struct fann *</type>
+                <parameter>ann</parameter>
+              </methodparam>
+            </methodsynopsis>
+            <para>
+	      This function is deprecated and will be removed in a future version. Use
+	      <link linkend="api.fann_reset_MSE"><function>fann_reset_MSE</function></link> instead.
+	    </para>
+            <para>This function appears in FANN >= 1.0.0, but is deprecated in FANN >= 1.1.0.</para>
+          </refsect1>
+        </refentry>
+      </section>
+    </section>
+  </chapter>
+  <chapter id="php">
+    <title id="php.title">PHP Extension</title>
+    <para>These functions allow you to interact with the FANN library from PHP.</para>
+    <para>This extension requires the 
+    <ulink url="http://fann.sf.net/">FANN</ulink> library, version 1.0.6 or later.</para>
+    <para>The following activation functions are supported: 
+    <itemizedlist>
+      <listitem>
+        <simpara>FANN_SIGMOID</simpara>
+      </listitem>
+      <listitem>
+        <simpara>FANN_THRESHOLD</simpara>
+      </listitem>
+      <listitem>
+        <simpara>FANN_SIGMOID_STEPWISE</simpara>
+      </listitem>
+    </itemizedlist></para>
+    <section id="php.api">
+      <title id="php.api.title">API Reference</title>
+      <refentry id="function.fann_create">
+        <refnamediv>
+          <refname>fann_create</refname>
+          <refpurpose>Creates an artificial neural network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>mixed</type>
+            <methodname>fann_create</methodname>
+            <methodparam>
+              <type>mixed</type>
+              <parameter>data</parameter>
+            </methodparam>
+            <methodparam choice="opt">
+              <type>float</type>
+              <parameter>connection_rate</parameter>
+            </methodparam>
+            <methodparam choice="opt">
+              <type>float</type>
+              <parameter>learning_rate</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_create</function> will create an artificial neural network using the data given.
+	  </para>
+          <para>
+	    If the first parameter is an array, <function>fann_create</function> will use the data and structure of the
+	    array, as well as <parameter>connection_rate</parameter> and <parameter>learning_rate</parameter>.
+	  </para>
+          <para>
+	    If <function>fann_create</function> is called with a single string argument, it will attempt to load an ANN
+	    created with <function>fann_save</function> from the file named by that string.
+	  </para>
+          <para>
+            <function>fann_create</function> will return the artificial neural network on success, or FALSE if it fails.
+	  </para>
+          <example id="example.php.fann_create.scratch">
+            <title id="example.php.fann_create.scratch.title"><function>fann_create</function> from scratch</title>
+            <programlisting role="php">
 <![CDATA[
 <?php
 $ann = fann_create(
@@ -1458,146 +2926,168 @@ $ann = fann_create(
   0.7);
 ?>
 ]]>
-       </programlisting>
-      </example>
-     </para>
-     <para>
-      <example>
-       <title><function>fann_create</function> loading from a file</title>
-       <programlisting role="php">
+            </programlisting>
+          </example>
+          <example id="example.php.fann_create.load">
+            <title id="example.php.fann_create.load.title"><function>fann_create</function> loading from a file</title>
+            <programlisting role="php">
 <![CDATA[
 <?php
 $ann = fann_create("http://www.example.com/ann.net");
-);
 ?>
 ]]>
-       </programlisting>
-      </example>
-     </para>
-     <para>
-      See also <function>fann_save</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_train">
-    <refnamediv>
-     <refname>fann_train</refname>
-     <refpurpose>Train an artificial neural network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>bool</type><methodname>fann_train</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>mixed</type><parameter>data</parameter></methodparam>
-       <methodparam><type>int</type><parameter>max_iterations</parameter></methodparam>
-       <methodparam><type>double</type><parameter>desired_error</parameter></methodparam>
-       <methodparam choice="opt"><type>int</type><parameter>iterations_between_reports</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_train</function> will train <parameter>ann</parameter> on
-      the data supplied, returning TRUE on success or FALSE on failure.
-     </para>
-     <para>
-      Resources is anrtificial neural network returned by <function>fann_create</function>.
-     </para>
-     <para>
-      <parameter>data</parameter> must be either an array of training data, or
-      the URI of a properly formatted training file.
-     </para>
-     <para>
-      <function>fann_train</function> will continue training until
-      <parameter>desired_error</parameter> is reached, or
-      <parameter>max_iterations</parameter> is exceeded.
-     </para>
-     <para>
-      If <parameter>iterations_between_reports</parameter> is set,
-      <function>fann_create</function> will output a short progress
-      report every <parameter>iterations_between_reports</parameter>.
-      Default is 0 (meaning no reports).
-     </para>
-     <para>
-      <example>
-       <title><function>fann_create</function> from training data</title>
-       <programlisting role="php">
+            </programlisting>
+          </example>
+          <para>
+	    See also <function>fann_save</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_train">
+        <refnamediv>
+          <refname>fann_train</refname>
+          <refpurpose>Train an artificial neural network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>bool</type>
+            <methodname>fann_train</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>mixed</type>
+              <parameter>data</parameter>
+            </methodparam>
+            <methodparam>
+              <type>int</type>
+              <parameter>max_iterations</parameter>
+            </methodparam>
+            <methodparam>
+              <type>double</type>
+              <parameter>desired_error</parameter>
+            </methodparam>
+            <methodparam choice="opt">
+              <type>int</type>
+              <parameter>iterations_between_reports</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_train</function> will train <parameter>ann</parameter> on the data supplied, returning TRUE
+	    on success or FALSE on failure.
+	  </para>
+          <para>
+	    <parameter>ann</parameter> is an artificial neural network resource returned by <function>fann_create</function>.
+	  </para>
+          <para>
+            <parameter>data</parameter> must be either an array of training data, or the URI of a properly formatted
+	    training file.
+	  </para>
+          <para>
+            <function>fann_train</function> will continue training until <parameter>desired_error</parameter> is
+	    reached, or <parameter>max_iterations</parameter> is exceeded.
+	  </para>
+          <para>
+	    If <parameter>iterations_between_reports</parameter> is set, <function>fann_train</function> will output a
+	    short progress report every <parameter>iterations_between_reports</parameter> iterations. The default is 0
+	    (meaning no reports).
+	  </para>
+          <example id="example.php.fann_train">
+            <title id="example.php.fann_train.title"><function>fann_train</function> with training data</title>
+            <programlisting role="php">
 <![CDATA[
 <?php
 $ann = fann_create(array(2, 4, 1), 1.0, 0.7);
 if ( fann_train($ann,
-	   array(
-		 array(
-		       array(0,0), /* Input(s) */
-		       array(0) /* Output(s) */
-		       ),
-		 array(
-		       array(0,1), /* Input(s) */
-		       array(1) /* Output(s) */
-		       ),
-		 array(
-		       array(1,0), /* Input(s) */
-		       array(1) /* Output(s) */
-		       ),
-		 array(array(1,1), /* Input(s) */
-		       array(0) /* Output(s) */
-		       )
-		 ),
-	   100000,
-	   0.00001,
-	   1000) == FALSE) {
+           array(
+                 array(
+                       array(0,0), /* Input(s) */
+                       array(0) /* Output(s) */
+                       ),
+                 array(
+                       array(0,1), /* Input(s) */
+                       array(1) /* Output(s) */
+                       ),
+                 array(
+                       array(1,0), /* Input(s) */
+                       array(1) /* Output(s) */
+                       ),
+                 array(array(1,1), /* Input(s) */
+                       array(0) /* Output(s) */
+                       )
+                 ),
+           100000,
+           0.00001,
+           1000) == FALSE) {
   exit('Could not train $ann.');
 }
 ?>
 ]]>
-       </programlisting>
-      </example>
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_save">
-    <refnamediv>
-     <refname>fann_save</refname>
-     <refpurpose>Save an artificial neural network to a file.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>bool</type><methodname>fann_save</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>string</type><parameter>filename</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_save</function> will save <parameter>ann</parameter> to
-      <parameter>filename</parameter>, returning TRUE on success or FALSE on failure.
-     </para>
-     <para>
-      See also <function>fann_create</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_run">
-    <refnamediv>
-     <refname>fann_run</refname>
-     <refpurpose>Run an artificial neural network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>mixed</type><methodname>fann_run</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>array</type><parameter>input</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_run</function> will run <parameter>input</parameter> through
-      <parameter>ann</parameter>, returning an an ouput array on success or FALSE
-      on failure.
-     </para>
-     <para>
-      <example>
-       <title><function>fann_run</function> Example</title>
-       <programlisting role="php">
+            </programlisting>
+          </example>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_save">
+        <refnamediv>
+          <refname>fann_save</refname>
+          <refpurpose>Save an artificial neural network to a file.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>bool</type>
+            <methodname>fann_save</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>string</type>
+              <parameter>filename</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_save</function> will save <parameter>ann</parameter> to <parameter>filename</parameter>,
+	    returning TRUE on success or FALSE on failure.
+	  </para>
+          <para>
+	    See also <function>fann_create</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_run">
+        <refnamediv>
+          <refname>fann_run</refname>
+          <refpurpose>Run an artificial neural network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>mixed</type>
+            <methodname>fann_run</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>array</type>
+              <parameter>input</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_run</function> will run <parameter>input</parameter> through <parameter>ann</parameter>,
+	    returning an output array on success or FALSE on failure.
+	  </para>
+          <example id="example.php.fann_run">
+            <title id="example.php.fann_run.title"><function>fann_run</function> Example</title>
+            <programlisting role="php">
 <![CDATA[
 <?php
 if ( ($ann = fann_create("http://www.example.com/ann.net")) == FALSE )
@@ -1611,676 +3101,781 @@ else
   print_r($output);
 ?>
 ]]>
-       </programlisting>
-      </example>
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_randomize_weights">
-    <refnamediv>
-     <refname>fann_randomize_weights</refname>
-     <refpurpose>Randomize the weights of the neurons in the network.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_save</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam choice="opt"><type>float</type><parameter>minimum</parameter></methodparam>
-       <methodparam choice="opt"><type>float</type><parameter>maximum</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_randomize_weights</function> will randomize the weights of all neurons in
-      <parameter>ann</parameter>, effectively resetting the network.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_MSE">
-    <refnamediv>
-     <refname>fann_get_MSE</refname>
-     <refpurpose>Get the mean squared error.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_MSE</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_MSE</function> will return the mean squared error (MSE) of
-      <parameter>ann</parameter>, or 0 if it is unavailable.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_num_input">
-    <refnamediv>
-     <refname>fann_get_num_input</refname>
-     <refpurpose>Get the number of input neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_num_input</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_num_input</function> will return the number of input neurons in
-      <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_num_output</function>, <function>fann_get_total_neurons</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_num_output">
-    <refnamediv>
-     <refname>fann_get_num_output</refname>
-     <refpurpose>Get the number of output neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_num_output</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_num_output</function> will return the number of output neurons in
-      <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_num_input</function>, <function>fann_get_total_neurons</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_total_neurons">
-    <refnamediv>
-     <refname>fann_get_total_neurons</refname>
-     <refpurpose>Get the total number of neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_total_neurons</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_total_neurons</function> will return the total number of neurons in
-      <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_num_input</function>, <function>fann_get_num_output</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_total_connections">
-    <refnamediv>
-     <refname>fann_get_total_connections</refname>
-     <refpurpose>Get the total number of connections.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_total_connections</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_total_connections</function> will return the total number of connections in
-      <parameter>ann</parameter>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_learning_rate">
-    <refnamediv>
-     <refname>fann_get_learning_rate</refname>
-     <refpurpose>Get the learning rate.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_learning_rate</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_learning_rate</function> will return the learning rate of
-      <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_set_learning_rate</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_activation_function_hidden">
-    <refnamediv>
-     <refname>fann_get_activation_function_hidden</refname>
-     <refpurpose>Get the activation function of the hidden neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_activation_function_hidden</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_activation_function_hidden</function> will return the activation function
-      for the hidden neurons in <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_set_activation_function_hidden</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_activation_function_output">
-    <refnamediv>
-     <refname>fann_get_activation_function_output</refname>
-     <refpurpose>Get the activation function of the output neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>int</type><methodname>fann_get_activation_function_output</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_activation_function_output</function> will return the activation function
-      for the output neurons in <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_set_activation_function_output</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_activation_hidden_steepness">
-    <refnamediv>
-     <refname>fann_get_activation_hidden_steepness</refname>
-     <refpurpose>Get the steepness of the activation function for the hidden neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_activation_hidden_steepness</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_activation_hidden_steepness</function> will return the steepness of the
-      activation function for the hidden neurons in <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_set_activation_hidden_steepness</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_get_activation_output_steepness">
-    <refnamediv>
-     <refname>fann_get_activation_output_steepness</refname>
-     <refpurpose>Get the steepness of the activation function for the output neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>float</type><methodname>fann_get_activation_output_steepness</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_get_activation_output_steepness</function> will return the steepness of the
-      activation function for the output neurons in <parameter>ann</parameter>.
-     </para>
-     <para>
-      See also <function>fann_set_activation_output_steepness</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_set_learning_rate">
-    <refnamediv>
-     <refname>fann_set_learning_rate</refname>
-     <refpurpose>Set the learning rate.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_set_learning_rate</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>float</type><parameter>learning_rate</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_set_learning_rate</function> sets the learning rate of
-      <parameter>ann</parameter> to <parameter>learning_rate</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_learning_rate</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_set_activation_function_hidden">
-    <refnamediv>
-     <refname>fann_set_activation_function_hidden</refname>
-     <refpurpose>Set the activation function for the hidden neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_set_activation_function_hidden</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>int</type><parameter>activation_function</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_set_activation_function_hidden</function> sets the activation function
-      for the hidden neurons to <parameter>activation_function</parameter>, which must be one
-      of the supported activation functions.
-     </para>
-     <para>
-      See also <function>fann_get_activation_function_hidden</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_set_activation_function_output">
-    <refnamediv>
-     <refname>fann_set_activation_function_output</refname>
-     <refpurpose>Set the activation function for the output neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_set_activation_function_output</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>int</type><parameter>activation_function</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_set_activation_function_output</function> sets the activation function
-      for the output neurons to <parameter>activation_function</parameter>, which must be one
-      of the supported activation functions.
-     </para>
-     <para>
-      See also <function>fann_get_activation_function_output</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_set_activation_hidden_steepness">
-    <refnamediv>
-     <refname>fann_set_activation_hidden_steepness</refname>
-     <refpurpose>Set the steepness of the activation function for the hidden neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-      <methodsynopsis>
-       <type>void</type><methodname>fann_set_activation_hidden_steepness</methodname>
-       <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-       <methodparam><type>float</type><parameter>steepness</parameter></methodparam>
-      </methodsynopsis>
-     <para>
-      <function>fann_set_activation_hidden_steepness</function> sets the steepness of the
-      activation function hidden neurons to <parameter>steepness</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_activation_hidden_steepness</function>.
-     </para>
-    </refsect1>
-   </refentry>
-
-   <refentry id="function.fann_set_activation_output_steepness">
-    <refnamediv>
-     <refname>fann_set_activation_output_steepness</refname>
-     <refpurpose>Set the steepness of the activation function for the output neurons.</refpurpose>
-    </refnamediv>
-    <refsect1>
-     <title>Description</title>
-     <methodsynopsis>
-      <type>void</type><methodname>fann_set_activation_output_steepness</methodname>
-      <methodparam><type>resource</type><parameter>ann</parameter></methodparam>
-      <methodparam><type>float</type><parameter>steepness</parameter></methodparam>
-     </methodsynopsis>
-     <para>
-      <function>fann_set_activation_output_steepness</function> sets the steepness of the
-      activation function output neurons to <parameter>steepness</parameter>.
-     </para>
-     <para>
-      See also <function>fann_get_activation_output_steepness</function>.
-     </para>
-    </refsect1>
-   </refentry>
-  </section>
- </chapter>
- <bibliography id="bibliography">
-  <title>Bibliography</title>
-
-  <biblioentry id="bib.tettamanzi_2001">
-   <abbrev id="bib.tettamanzi_2001.abbrev">[Tettamanzi and Tomassini, 2001]</abbrev>
-   <author>
-    <firstname>A.</firstname>
-    <surname>Tettamanzi</surname>
-   </author>
-   <author>
-    <firstname>M.</firstname>
-    <surname>Tomassini</surname>
-   </author>
-   <pubdate>2001</pubdate>
-   <title>Soft Computing</title>
-   <publishername>Springer-Verlag</publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.anderson_1995">
-   <abbrev id="bib.anderson_1995.abbrev">[Anderson, 1995]</abbrev>
-   <author>
-    <firstname>J.A.</firstname>
-    <surname>Anderson</surname>
-   </author>
-   <pubdate>1995</pubdate>
-   <title>An Introduction to Neural Networks</title>
-   <publishername>The MIT Press</publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.anguita_1993">
-   <abbrev id="bib.anguita_1993.abbrev">[Anguita, 1993]</abbrev>
-   <author>
-    <firstname>D.</firstname>
-    <surname>Anguita</surname>
-   </author>
-   <title>Matrix back propagation v1.1</title>
-  </biblioentry>
-
-  <biblioentry id="bib.bentley_1982">
-   <abbrev id="bib.bently_1982.abbrev">[Bentley, 1982]</abbrev>
-   <author>
-    <firstname>J.L.</firstname>
-    <surname>Bentley</surname>
-   </author>
-   <pubdate>1982</pubdate>
-   <title>Writing Efficient Programs</title>
-   <publishername>Prentice-Hall</publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.blake_1998">
-   <abbrev id="bib.blake_1998.abbrev">[Blake and Merz, 1998]</abbrev>
-   <author>
-    <firstname>C.</firstname>
-    <surname>Blake</surname>
-   </author>
-   <author>
-    <firstname>C.</firstname>
-    <surname>Merz</surname>
-   </author>
-   <pubdate>1998</pubdate>
-   <title>UCI repository of machine learning databases</title>
-   <releaseinfo><ulink url="http://www.ics.uci.edu/mlearn/MLRepository.html">http://www.ics.uci.edu/mlearn/MLRepository.html</ulink></releaseinfo>
-   <publishername></publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.darrington_2003">
-   <abbrev id="bib.darrington_2003.abbrev">[Darrington, 2003]</abbrev>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Darrington</surname>
-   </author>
-   <pubdate>2003</pubdate>
-   <title>Libann</title>
-   <releaseinfo><ulink url="http://www.nongnu.org/libann/index.html">http://www.nongnu.org/libann/index.html</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.fahlman_1988">
-   <abbrev id="bib.fahlman_1988.abbrev">[Falhman, 1988]</abbrev>
-   <author>
-    <firstname>S.E.</firstname>
-    <surname>Fahlman</surname>
-   </author>
-   <pubdate>1988</pubdate>
-   <title>Faster-learning variations on back-propagation</title>
-   <subtitle>An empirical study</subtitle>
-  </biblioentry>
-
-  <biblioentry id="bib.FSF_1999">
-   <abbrev id="bib.FSF_1999.abbrev">[LGPL]</abbrev>
-   <author>
-    <surname>Free Software Foundation</surname>
-   </author>
-   <pubdate>1999</pubdate>
-   <title>GNU Lesser General Public License</title>
-   <publishername>Free Software Foundation</publishername>
-   <releaseinfo><ulink url="http://www.fsf.org/copyleft/lesser.html">http://www.fsf.org/copyleft/lesser.html</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.hassoun_1995">
-   <abbrev id="bib.hassoun_1995.abbrev">[Hassoun, 1995]</abbrev>
-   <author>
-    <firstname>M.H.</firstname>
-    <surname>Hassoun</surname>
-   </author>
-   <pubdate>1995</pubdate>
-   <title>Fundamentals of Artificial Neural Networks</title>
-   <publishername>The MIT Press</publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.heller_2002">
-   <abbrev id="bib.heller_2002.abbrev">[Heller, 2002]</abbrev>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Heller</surname>
-   </author>
-   <pubdate>2002</pubdate>
-   <title>Jet's Neural Library</title>
-   <releaseinfo><ulink url="http://www.voltar.org/jneural/jneural_doc/">http://www.voltar.org/jneural/jneural_doc/</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.hertz_1991">
-   <abbrev id="bib.hertz_1991.abbrev">[Hertz et al., 1991]</abbrev>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Hertz</surname>
-   </author>
-   <author>
-    <firstname>A.</firstname>
-    <surname>Krogh</surname>
-   </author>
-   <author>
-    <firstname>R.G.</firstname>
-    <surname>Palmer</surname>
-   </author>
-   <pubdate>1991</pubdate>
-   <title>Introduction to The Theory of Neural Computing</title>
-   <publishername>Addison-Wesley Publishing Company</publishername>
-  </biblioentry>
-
-  <biblioentry id="bib.IDS_2000">
-   <abbrev id="bib.IDS_2000.abbrev">[IDS, 2000]</abbrev>
-   <author>
-    <surname>ID Software</surname>
-   </author>
-   <pubdate>2000</pubdate>
-   <title>Quake III Arena</title>
-   <releaseinfo><ulink url="http://www.idsoftware.com/games/quake/quake3-arena/">http://www.idsoftware.com/games/quake/quake3-arena/</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.kaelbling_1996">
-   <abbrev id="bib.kaelbling_1996.abbrev">[Kaelbling, 1996]</abbrev>
-   <author>
-    <firstname>L.P.</firstname>
-    <surname>Kaelbling</surname>
-   </author>
-   <author>
-    <firstname>M.L.</firstname>
-    <surname>Littman</surname>
-   </author>
-   <author>
-    <firstname>A.P.</firstname>
-    <surname>Moore</surname>
-   </author>
-   <pubdate>1996</pubdate>
-   <title>Reinforcement Learning</title>
-   <subtitle>A Survey</subtitle>
-   <publishername>Journal of Artificial Intelligence Research</publishername>
-   <volumenum>4</volumenum>
-   <pagenums>237-285</pagenums>
-  </biblioentry>
-
-  <biblioentry id="bib.lecun_1990">
-   <abbrev id="bib.lecun_1990.abbrev">[LeCun et al., 1990]</abbrev>
-   <author>
-    <firstname>Y.</firstname>
-    <surname>LeCun</surname>
-   </author>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Denker</surname>
-   </author>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Solla</surname>
-   </author>
-   <author>
-    <firstname>R.E.</firstname>
-    <surname>Howard</surname>
-   </author>
-   <author>
-    <firstname>L.D.</firstname>
-    <surname>Jackel</surname>
-   </author>
-   <pubdate>1990</pubdate>
-   <title>Advances in Neural Information Processing Systems II</title>
-  </biblioentry>
-
-  <biblioentry id="bib.nissen_2003">
-   <abbrev id="bib.nissen_2003.abbrev">[Nissen et al., 2003]</abbrev>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Nissen</surname>
-   </author>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Damkjær</surname>
-   </author>
-   <author>
-    <firstname>J.</firstname>
-    <surname>Hansson</surname>
-   </author>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Larsen</surname>
-   </author>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Jensen</surname>
-   </author>
-   <pubdate>2003</pubdate>
-   <title>Real-time image processing of an iPAQ based robot with fuzzy logic (fuzzy)</title>
-   <releaseinfo><ulink url="http://www.hamster.dk/~purple/robot/fuzzy/weblog/">http://www.hamster.dk/~purple/robot/fuzzy/weblog/</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.nissen_2002">
-   <abbrev id="bib.nissen_2002.abbrev">[Nissen et al., 2002]</abbrev>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Nissen</surname>
-   </author>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Larsen</surname>
-   </author>
-   <author>
-    <firstname>S.</firstname>
-    <surname>Jensen</surname>
-   </author>
-   <pubdate>2002</pubdate>
-   <title>Real-time image processing of an iPAQ based robot (iBOT)</title>
-   <releaseinfo><ulink url="http://www.hamster.dk/~purple/robot/iBOT/report.pdf">http://www.hamster.dk/~purple/robot/iBOT/report.pdf</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.OSDN_2003">
-   <abbrev id="bib.OSDN_2003.abbrev">[OSDN, 2003]</abbrev>
-   <pubdate>2003</pubdate>
-   <title>SourceForge.net</title>
-   <releaseinfo><ulink url="http://sourceforge.net/">http://sourceforge.net/</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.pendleton_1993">
-   <abbrev id="bib.pendleton_1993.abbrev">[Pendleton, 1993]</abbrev>
-   <author>
-    <firstname>R.C.</firstname>
-    <surname>Pendleton</surname>
-   </author>
-   <pubdate>1993</pubdate>
-   <title>Doing it Fast</title>
-   <releaseinfo><ulink url="http://www.gameprogrammer.com/4-fixed.html">http://www.gameprogrammer.com/4-fixed.html</ulink></releaseinfo>
-  </biblioentry>
-
-  <biblioentry id="bib.prechelt_1994">
-   <abbrev id="bib.prechelt_1994.abbrev">[Prechelt, 1994]</abbrev>
-   <author>
-    <firstname>L.</firstname>
-    <surname>Prechelt</surname>
-   </author>
-   <pubdate>1994</pubdate>
-   <title>Proben1</title>
-   <subtitle>A set of neural network benchmark problems and benchmarking rules</subtitle>
-  </biblioentry>
-
-  <biblioentry id="bib.riedmiller_1993">
-   <abbrev id="bib.riedmiller_1993.abbrev">[Riedmiller and Braun, 1993]</abbrev>
-   <author>
-    <firstname>M.</firstname>
-    <surname>Riedmiller</surname>
-   </author>
-   <author>
-    <firstname>H.</firstname>
-    <surname>Braun</surname>
-   </author>
-   <pubdate>1993</pubdate>
-   <title>A direct adaptive method for faster backpropagation learning: The RPROP algorithm</title>
-   <pagenums>586-591</pagenums>
-   <releaseinfo><ulink url="http://citeseer.nj.nec.com/riedmiller93direct.html">http://citeseer.nj.nec.com/riedmiller93direct.html</ulink></releaseinfo>
-  </biblioentry>
-
-<!-- TODO:
-
-Sarle, 2002 
-Sarle, W. S. (2002). 
-Neural network faq. 
-
-ftp://ftp.sas.com/pub/neural/FAQ2.html#A_binary. 
-
-Software, 2002 
-Software, W. (2002). 
-Ann++. 
-
-http://savannah.nongnu.org/projects/annpp/. 
-
-Tettamanzi and Tomassini, 2001 
-Tettamanzi, A. and Tomassini, M. (2001). 
-Soft Computing. 
-Springer-Verlag. 
-
-van Rossum, 2003 
-van Rossum, P. (2003). 
-Lightweight neural network. 
-
-http://lwneuralnet.sourceforge.net/. 
-
-van Waveren, 2001 
-van Waveren, J. P. (2001). 
-The quake III arena bot. 
-
-http://www.kbs.twi.tudelft.nl/Publications/MSc/2001-VanWaveren-MSc.html. 
-
-Zell, 2003 
-Zell, A. (2003). 
-Stuttgart neural network simulator. 
-
-http://www-ra.informatik.uni-tuebingen.de/SNNS/.
--->
- </bibliography>
+            </programlisting>
+          </example>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_randomize_weights">
+        <refnamediv>
+          <refname>fann_randomize_weights</refname>
+          <refpurpose>Randomize the weights of the neurons in the network.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_randomize_weights</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam choice="opt">
+              <type>float</type>
+              <parameter>minimum</parameter>
+            </methodparam>
+            <methodparam choice="opt">
+              <type>float</type>
+              <parameter>maximum</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_randomize_weights</function> will randomize the weights of all neurons in
+	    <parameter>ann</parameter>, effectively resetting the network.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
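+          <para>
+            A minimal usage sketch (the network resource <parameter>ann</parameter> is assumed
+            to have been created elsewhere):
+          </para>
+          <example>
+            <title><function>fann_randomize_weights</function> example</title>
+            <programlisting role="php">
+<![CDATA[
+<?php
+/* Throw away the current training state by re-randomizing
+ * all weights in the range [-0.7, 0.7]. */
+fann_randomize_weights($ann, -0.7, 0.7);
+?>
+]]>
+            </programlisting>
+          </example>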
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_MSE">
+        <refnamediv>
+          <refname>fann_get_MSE</refname>
+          <refpurpose>Get the mean squared error.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_MSE</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_MSE</function> will return the mean squared error (MSE) of <parameter>ann</parameter>,
+	    or 0 if it is unavailable.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
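+          <para>
+            A minimal sketch (assumes <parameter>ann</parameter> is a previously trained
+            network resource):
+          </para>
+          <example>
+            <title><function>fann_get_MSE</function> example</title>
+            <programlisting role="php">
+<![CDATA[
+<?php
+$mse = fann_get_MSE($ann);
+if ($mse == 0) {
+    echo "MSE not available (no training pass has run yet)\n";
+} else {
+    echo "Mean squared error: $mse\n";
+}
+?>
+]]>
+            </programlisting>
+          </example>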
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_num_input">
+        <refnamediv>
+          <refname>fann_get_num_input</refname>
+          <refpurpose>Get the number of input neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_num_input</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+          <function>fann_get_num_input</function> will return the number of input neurons in
+          <parameter>ann</parameter>.</para>
+          <para>
+	    See also <function>fann_get_num_output</function>, <function>fann_get_total_neurons</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_num_output">
+        <refnamediv>
+          <refname>fann_get_num_output</refname>
+          <refpurpose>Get the number of output neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_num_output</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_num_output</function> will return the number of output neurons in
+	    <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_get_num_input</function>, <function>fann_get_total_neurons</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_total_neurons">
+        <refnamediv>
+          <refname>fann_get_total_neurons</refname>
+          <refpurpose>Get the total number of neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_total_neurons</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_total_neurons</function> will return the total number of neurons in
+	    <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_get_num_input</function>, <function>fann_get_num_output</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_total_connections">
+        <refnamediv>
+          <refname>fann_get_total_connections</refname>
+          <refpurpose>Get the total number of connections.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_total_connections</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_total_connections</function> will return the total number of connections in 
+            <parameter>ann</parameter>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
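+          <para>
+            The size getters can be combined to print the topology of a network; a minimal
+            sketch, assuming <parameter>ann</parameter> is a valid network resource:
+          </para>
+          <example>
+            <title>Inspecting network topology</title>
+            <programlisting role="php">
+<![CDATA[
+<?php
+printf("inputs: %d, outputs: %d, neurons: %d, connections: %d\n",
+       fann_get_num_input($ann),
+       fann_get_num_output($ann),
+       fann_get_total_neurons($ann),
+       fann_get_total_connections($ann));
+?>
+]]>
+            </programlisting>
+          </example>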
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_learning_rate">
+        <refnamediv>
+          <refname>fann_get_learning_rate</refname>
+          <refpurpose>Get the learning rate.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_learning_rate</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_learning_rate</function> will return the learning rate of <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_set_learning_rate</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_activation_function_hidden">
+        <refnamediv>
+          <refname>fann_get_activation_function_hidden</refname>
+          <refpurpose>Get the activation function of the hidden neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_activation_function_hidden</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_activation_function_hidden</function> will return the activation function for the hidden
+	    neurons in <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_set_activation_function_hidden</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_activation_function_output">
+        <refnamediv>
+          <refname>fann_get_activation_function_output</refname>
+          <refpurpose>Get the activation function of the output neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>int</type>
+            <methodname>fann_get_activation_function_output</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_activation_function_output</function> will return the activation function for the output
+	    neurons in <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_set_activation_function_output</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_activation_hidden_steepness">
+        <refnamediv>
+          <refname>fann_get_activation_hidden_steepness</refname>
+          <refpurpose>Get the steepness of the activation function for the hidden neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_activation_hidden_steepness</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_activation_hidden_steepness</function> will return the steepness of the activation
+	    function for the hidden neurons in <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_set_activation_hidden_steepness</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_get_activation_output_steepness">
+        <refnamediv>
+          <refname>fann_get_activation_output_steepness</refname>
+          <refpurpose>Get the steepness of the activation function for the output neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>float</type>
+            <methodname>fann_get_activation_output_steepness</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_get_activation_output_steepness</function> will return the steepness of the activation
+	    function for the output neurons in <parameter>ann</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_set_activation_output_steepness</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_set_learning_rate">
+        <refnamediv>
+          <refname>fann_set_learning_rate</refname>
+          <refpurpose>Set the learning rate.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_learning_rate</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>learning_rate</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_set_learning_rate</function> sets the learning rate of
+	    <parameter>ann</parameter> to <parameter>learning_rate</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_get_learning_rate</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
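+          <para>
+            A minimal sketch (assumes <parameter>ann</parameter> is a valid network resource):
+          </para>
+          <example>
+            <title><function>fann_set_learning_rate</function> example</title>
+            <programlisting role="php">
+<![CDATA[
+<?php
+/* Use a smaller learning rate for fine-tuning. */
+fann_set_learning_rate($ann, 0.4);
+?>
+]]>
+            </programlisting>
+          </example>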
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_set_activation_function_hidden">
+        <refnamediv>
+          <refname>fann_set_activation_function_hidden</refname>
+          <refpurpose>Set the activation function for the hidden neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_function_hidden</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>int</type>
+              <parameter>activation_function</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_set_activation_function_hidden</function> sets the activation function for the hidden
+	    neurons to <parameter>activation_function</parameter>, which must be one of the supported activation
+	    functions.
+	  </para>
+          <para>
+	    See also <function>fann_get_activation_function_hidden</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
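+          <para>
+	    A minimal usage sketch, assuming <varname>$ann</varname> already holds an ANN resource;
+	    <constant>FANN_SIGMOID</constant> stands in for any of the supported activation function constants:
+	  </para>
+          <programlisting role="php">
+/* Use the sigmoid activation function in the hidden layers. */
+fann_set_activation_function_hidden($ann, FANN_SIGMOID);
+          </programlisting>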
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_set_activation_function_output">
+        <refnamediv>
+          <refname>fann_set_activation_function_output</refname>
+          <refpurpose>Set the activation function for the output neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_function_output</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>int</type>
+              <parameter>activation_function</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_set_activation_function_output</function> sets the activation function for the output
+	    neurons to <parameter>activation_function</parameter>, which must be one of the supported activation
+	    functions.
+	  </para>
+          <para>
+	    See also <function>fann_get_activation_function_output</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_set_activation_hidden_steepness">
+        <refnamediv>
+          <refname>fann_set_activation_hidden_steepness</refname>
+          <refpurpose>Set the steepness of the activation function for the hidden neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_hidden_steepness</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>steepness</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_set_activation_hidden_steepness</function> sets the steepness of the activation function
+	    for the hidden neurons to <parameter>steepness</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_get_activation_hidden_steepness</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
+        </refsect1>
+      </refentry>
+      <refentry id="function.fann_set_activation_output_steepness">
+        <refnamediv>
+          <refname>fann_set_activation_output_steepness</refname>
+          <refpurpose>Set the steepness of the activation function for the output neurons.</refpurpose>
+        </refnamediv>
+        <refsect1>
+          <title>Description</title>
+          <methodsynopsis>
+            <type>void</type>
+            <methodname>fann_set_activation_output_steepness</methodname>
+            <methodparam>
+              <type>resource</type>
+              <parameter>ann</parameter>
+            </methodparam>
+            <methodparam>
+              <type>float</type>
+              <parameter>steepness</parameter>
+            </methodparam>
+          </methodsynopsis>
+          <para>
+            <function>fann_set_activation_output_steepness</function> sets the steepness of the activation function
+	    for the output neurons to <parameter>steepness</parameter>.
+	  </para>
+          <para>
+	    See also <function>fann_get_activation_output_steepness</function>.
+	  </para>
+          <para>This function appears in FANN-PHP >= 1.1.0.</para>
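+          <para>
+	    A minimal usage sketch, assuming <varname>$ann</varname> already holds an ANN resource:
+	  </para>
+          <programlisting role="php">
+/* A steepness of 1.0 gives a steeper activation function than the default of 0.5. */
+fann_set_activation_output_steepness($ann, 1.0);
+          </programlisting>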
+        </refsect1>
+      </refentry>
+    </section>
+  </chapter>
+  <bibliography id="bibliography">
+    <title id="bibliography.title">Bibliography</title>
+    <biblioentry id="bib.anderson_1995">
+      <abbrev id="bib.anderson_1995.abbrev">Anderson, 1995</abbrev>
+      <author>
+        <firstname>J.A.</firstname>
+        <surname>Anderson</surname>
+      </author>
+      <pubdate>1995</pubdate>
+      <title id="bib.anderson_1995.title">An Introduction to Neural Networks</title>
+      <publishername>The MIT Press</publishername>
+    </biblioentry>
+    <biblioentry id="bib.anguita_1993">
+      <abbrev id="bib.anguita_1993.abbrev">Anguita, 1993</abbrev>
+      <author>
+        <firstname>D.</firstname>
+        <surname>Anguita</surname>
+      </author>
+      <title id="bib.anguita_1993.title">Matrix back propagation v1.1</title>
+    </biblioentry>
+    <biblioentry id="bib.bentley_1982">
+      <abbrev id="bib.bently_1982.abbrev">Bentley, 1982</abbrev>
+      <author>
+        <firstname>J.L.</firstname>
+        <surname>Bentley</surname>
+      </author>
+      <pubdate>1982</pubdate>
+      <title id="bib.bently_1982.title">Writing Efficient Programs</title>
+      <publishername>Prentice-Hall</publishername>
+    </biblioentry>
+    <biblioentry id="bib.blake_1998">
+      <abbrev id="bib.blake_1998.abbrev">Blake and Merz, 1998</abbrev>
+      <author>
+        <firstname>C.</firstname>
+        <surname>Blake</surname>
+      </author>
+      <author>
+        <firstname>C.</firstname>
+        <surname>Merz</surname>
+      </author>
+      <pubdate>1998</pubdate>
+      <title id="bib.blake_1998.title">UCI repository of machine learning databases</title>
+      <releaseinfo>
+        <ulink url="http://www.ics.uci.edu/mlearn/MLRepository.html">
+        http://www.ics.uci.edu/mlearn/MLRepository.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.darrington_2003">
+      <abbrev id="bib.darrington_2003.abbrev">Darrington, 2003</abbrev>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Darrington</surname>
+      </author>
+      <pubdate>2003</pubdate>
+      <title id="bib.darrington_2003.title">Libann</title>
+      <releaseinfo>
+        <ulink url="http://www.nongnu.org/libann/index.html">http://www.nongnu.org/libann/index.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.fahlman_1988">
+      <abbrev id="bib.fahlman_1988.abbrev">Falhman, 1988</abbrev>
+      <author>
+        <firstname>S.E.</firstname>
+        <surname>Fahlman</surname>
+      </author>
+      <pubdate>1988</pubdate>
+      <title id="bib.fahlman_1988.title">Faster-learning variations on back-propagation</title>
+      <subtitle>An empirical study</subtitle>
+    </biblioentry>
+    <biblioentry id="bib.FSF_1999">
+      <abbrev id="bib.FSF_1999.abbrev">LGPL</abbrev>
+      <author>
+        <surname>Free Software Foundation</surname>
+      </author>
+      <pubdate>1999</pubdate>
+      <title id="bib.FSF_1999.title">GNU Lesser General Public License</title>
+      <publishername>Free Software Foundation</publishername>
+      <releaseinfo>
+        <ulink url="http://www.fsf.org/copyleft/lesser.html">http://www.fsf.org/copyleft/lesser.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.hassoun_1995">
+      <abbrev id="bib.hassoun_1995.abbrev">Hassoun, 1995</abbrev>
+      <author>
+        <firstname>M.H.</firstname>
+        <surname>Hassoun</surname>
+      </author>
+      <pubdate>1995</pubdate>
+      <title id="bib.hassoun_1995.title">Fundamentals of Artificial Neural Networks</title>
+      <publishername>The MIT Press</publishername>
+    </biblioentry>
+    <biblioentry id="bib.heller_2002">
+      <abbrev id="bib.heller_2002.abbrev">Heller, 2002</abbrev>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Heller</surname>
+      </author>
+      <pubdate>2002</pubdate>
+      <title id="bib.heller_2002.title">Jet's Neural Library</title>
+      <releaseinfo>
+        <ulink url="http://www.voltar.org/jneural/jneural_doc/">http://www.voltar.org/jneural/jneural_doc/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.hertz_1991">
+      <abbrev id="bib.hertz_1991.abbrev">Hertz et al., 1991</abbrev>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Hertz</surname>
+      </author>
+      <author>
+        <firstname>A.</firstname>
+        <surname>Krogh</surname>
+      </author>
+      <author>
+        <firstname>R.G.</firstname>
+        <surname>Palmer</surname>
+      </author>
+      <pubdate>1991</pubdate>
+      <title id="bib.hertz_1991.title">Introduction to The Theory of Neural Computing</title>
+      <publishername>Addison-Wesley Publishing Company</publishername>
+    </biblioentry>
+    <biblioentry id="bib.IDS_2000">
+      <abbrev id="bib.IDS_2000.abbrev">IDS, 2000</abbrev>
+      <author>
+        <surname>id Software</surname>
+      </author>
+      <pubdate>2000</pubdate>
+      <title id="bib.IDS_2000.title">Quake III Arena</title>
+      <releaseinfo>
+        <ulink url="http://www.idsoftware.com/games/quake/quake3-arena/">
+        http://www.idsoftware.com/games/quake/quake3-arena/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.kaelbling_1996">
+      <abbrev id="bib.kaelbling_1996.abbrev">Kaelbling, 1996</abbrev>
+      <author>
+        <firstname>L.P.</firstname>
+        <surname>Kaelbling</surname>
+      </author>
+      <author>
+        <firstname>M.L.</firstname>
+        <surname>Littman</surname>
+      </author>
+      <author>
+        <firstname>A.P.</firstname>
+        <surname>Moore</surname>
+      </author>
+      <pubdate>1996</pubdate>
+      <title id="bib.kaelbling_1996.title">Reinforcement Learning</title>
+      <subtitle>A New Survey</subtitle>
+      <publishername>Journal of Artificial Intelligence Research</publishername>
+      <volumenum>4</volumenum>
+      <pagenums>237-285</pagenums>
+    </biblioentry>
+    <biblioentry id="bib.lecun_1990">
+      <abbrev id="bib.lecun_1990.abbrev">LeCun et al., 1990</abbrev>
+      <author>
+        <firstname>Y.</firstname>
+        <surname>LeCun</surname>
+      </author>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Denker</surname>
+      </author>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Solla</surname>
+      </author>
+      <author>
+        <firstname>R.E.</firstname>
+        <surname>Howard</surname>
+      </author>
+      <author>
+        <firstname>L.D.</firstname>
+        <surname>Jackel</surname>
+      </author>
+      <pubdate>1990</pubdate>
+      <title id="bib.lecun_1990.title">Advances in Neural Information Processing Systems II</title>
+    </biblioentry>
+    <biblioentry id="bib.nissen_2003">
+      <abbrev id="bib.nissen_2003.abbrev">Nissen et al., 2003</abbrev>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Nissen</surname>
+      </author>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Damkjær</surname>
+      </author>
+      <author>
+        <firstname>J.</firstname>
+        <surname>Hansson</surname>
+      </author>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Larsen</surname>
+      </author>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Jensen</surname>
+      </author>
+      <pubdate>2003</pubdate>
+      <title id="bib.nissen_2003.title">Real-time image processing of an ipaq based robot with fuzzy logic (fuzzy)</title>
+      <releaseinfo>
+        <ulink url="http://www.hamster.dk/~purple/robot/fuzzy/weblog/">
+        http://www.hamster.dk/~purple/robot/fuzzy/weblog/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.nissen_2002">
+      <abbrev id="bib.nissen_2002.abbrev">Nissen et al., 2002</abbrev>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Nissen</surname>
+      </author>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Larsen</surname>
+      </author>
+      <author>
+        <firstname>S.</firstname>
+        <surname>Jensen</surname>
+      </author>
+      <pubdate>2002</pubdate>
+      <title id="bib.nissen_2002.title">Real-time image processing of an iPAQ based robot (iBOT)</title>
+      <releaseinfo>
+        <ulink url="http://www.hamster.dk/~purple/robot/iBOT/report.pdf">
+        http://www.hamster.dk/~purple/robot/iBOT/report.pdf</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.OSDN_2003">
+      <abbrev id="bib.OSDN_2003.abbrev">OSDN, 2003</abbrev>
+      <pubdate>2003</pubdate>
+      <title id="bib.OSDN_2003.title">SourceForge.net</title>
+      <releaseinfo>
+        <ulink url="http://sourceforge.net/">http://sourceforge.net/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.pendleton_1993">
+      <abbrev id="bib.pendleton_1993.abbrev">Pendleton, 1993</abbrev>
+      <author>
+        <firstname>R.C.</firstname>
+        <surname>Pendleton</surname>
+      </author>
+      <pubdate>1993</pubdate>
+      <title id="bib.pendleton_1993.title">Doing it Fast</title>
+      <releaseinfo>
+        <ulink url="http://www.gameprogrammer.com/4-fixed.html">http://www.gameprogrammer.com/4-fixed.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.prechelt_1994">
+      <abbrev id="bib.prechelt_1994.abbrev">Prechelt, 1994</abbrev>
+      <author>
+        <firstname>L.</firstname>
+        <surname>Prechelt</surname>
+      </author>
+      <pubdate>1994</pubdate>
+      <title id="bib.prechelt_1994.title">Proben1</title>
+      <subtitle>A set of neural network benchmark problems and benchmarking rules</subtitle>
+    </biblioentry>
+    <biblioentry id="bib.riedmiller_1993">
+      <abbrev id="bib.riedmiller_1993.abbrev">Riedmiller and Braun, 1993</abbrev>
+      <author>
+        <firstname>M.</firstname>
+        <surname>Riedmiller</surname>
+      </author>
+      <author>
+        <firstname>H.</firstname>
+        <surname>Braun</surname>
+      </author>
+      <pubdate>1993</pubdate>
+      <title id="bib.riedmiller_1993.title">A direct adaptive method for faster backpropagation learning: The RPROP algorithm</title>
+      <pagenums>586-591</pagenums>
+      <releaseinfo>
+        <ulink url="http://citeseer.nj.nec.com/riedmiller93direct.html">
+        http://citeseer.nj.nec.com/riedmiller93direct.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.sarle_2002">
+      <abbrev id="bib.sarle_2002.abbrev">Sarle, 2002</abbrev>
+      <author>
+        <firstname>W.S.</firstname>
+        <surname>Sarle</surname>
+      </author>
+      <pubdate>2002</pubdate>
+      <title id="bib.sarle_2002.title">Neural Network FAQ</title>
+      <releaseinfo>
+        <ulink url="ftp://ftp.sas.com/pub/neural/FAQ2.html#A_binary">ftp://ftp.sas.com/pub/neural/FAQ2.html#A_binary</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.pemstein">
+      <abbrev id="bib.pemstein.abbrev">Pemstein, 2002</abbrev>
+      <author>
+        <firstname>Dan</firstname>
+        <surname>Pemstein</surname>
+      </author>
+      <pubdate>2002</pubdate>
+      <title id="bib.pemstein_2002.title">ANN++</title>
+      <releaseinfo>
+        <ulink url="http://savannah.nongnu.org/projects/annpp/">http://savannah.nongnu.org/projects/annpp/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.tettamanzi_2001">
+      <abbrev id="bib.tettamanzi_2001.abbrev">Tettamanzi and Tomassini, 2001</abbrev>
+      <author>
+        <firstname>A.</firstname>
+        <surname>Tettamanzi</surname>
+      </author>
+      <author>
+        <firstname>M.</firstname>
+        <surname>Tomassini</surname>
+      </author>
+      <pubdate>2001</pubdate>
+      <title id="bib.tettamanzi_2001.title">Soft Computing</title>
+      <publishername>Springer-Verlag</publishername>
+    </biblioentry>
+    <biblioentry id="bib.van_rossum_2003">
+      <abbrev id="bib.van_rossum_2003.abbrev">van Rossum, 2003</abbrev>
+      <author>
+        <firstname>P.</firstname>
+        <surname>van Rossum</surname>
+      </author>
+      <pubdate>2003</pubdate>
+      <title id="bib.van_rossum_2003.title">Lightweight neural network</title>
+      <releaseinfo>
+        <ulink url="http://lwneuralnet.sourceforge.net/">http://lwneuralnet.sourceforge.net/</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.van_waveren_2001">
+      <abbrev id="bib.van_waveren_2001.abbrev">van Waveren, 2001</abbrev>
+      <author>
+        <firstname>J.P.</firstname>
+        <surname>van Waveren</surname>
+      </author>
+      <pubdate>2001</pubdate>
+      <title id="bib.van_waveren_2001.title">The quake III arena bot</title>
+      <releaseinfo>
+        <ulink url="http://www.kbs.twi.tudelft.nl/Publications/MSc/2001-VanWaveren-MSc.html">
+	http://www.kbs.twi.tudelft.nl/Publications/MSc/2001-VanWaveren-MSc.html</ulink>
+      </releaseinfo>
+    </biblioentry>
+    <biblioentry id="bib.zell_2003">
+      <abbrev id="bib.zell_2003.abbrev">Zell, 2003</abbrev>
+      <author>
+        <firstname>A.</firstname>
+        <surname>Zell</surname>
+      </author>
+      <pubdate>2003</pubdate>
+      <title id="bib.zell_2003.title">Stuttgart neural network simulator</title>
+      <releaseinfo>
+        <ulink url="http://www-ra.informatik.uni-tuebingen.de/SNNS/">http://www-ra.informatik.uni-tuebingen.de/SNNS/</ulink>
+      </releaseinfo>
+    </biblioentry>
+  </bibliography>
 </book>
-
 <!-- Keep this comment at the end of the file
 Local variables:
 mode: sgml
