[python-fann2] 01/06: Imported Upstream version 1.0.7
Christian Kastner
ckk at moszumanska.debian.org
Tue Sep 29 19:31:34 UTC 2015
This is an automated email from the git hooks/post-receive script.
ckk pushed a commit to branch master
in repository python-fann2.
commit c7e5e0d684e12f697e1a40db11a8ff8dee61723b
Author: Christian Kastner <ckk at kvr.at>
Date: Tue Sep 29 12:55:54 2015 +0200
Imported Upstream version 1.0.7
---
.gitignore | 54 +
ChangeLog | 16 +
INSTALL | 17 +
LICENSE | 458 ++++++
MANIFEST.in | 3 +
README.rst | 125 ++
fann2/__init__.py | 7 +
fann2/fann2.i | 206 +++
fann2/fann_cpp_subclass.h | 580 +++++++
include/compat_time.h | 141 ++
include/config.h | 8 +
include/doublefann.h | 33 +
include/fann.h | 613 ++++++++
include/fann_activation.h | 144 ++
include/fann_cascade.h | 557 +++++++
include/fann_cpp.h | 3709 +++++++++++++++++++++++++++++++++++++++++++++
include/fann_data.h | 824 ++++++++++
include/fann_error.h | 165 ++
include/fann_internal.h | 152 ++
include/fann_io.h | 100 ++
include/fann_train.h | 1310 ++++++++++++++++
include/fixedfann.h | 33 +
include/floatfann.h | 33 +
setup.py | 105 ++
24 files changed, 9393 insertions(+)
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..51cbe85
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,54 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+bin/
+build/
+develop-eggs/
+dist/
+eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.cache
+nosetests.xml
+coverage.xml
+
+# Translations
+*.mo
+
+# Mr Developer
+.mr.developer.cfg
+.project
+.pydevproject
+
+# Rope
+.ropeproject
+
+# Django stuff:
+*.log
+*.pot
+
+# Sphinx documentation
+docs/_build/
+
diff --git a/ChangeLog b/ChangeLog
new file mode 100644
index 0000000..fb195c7
--- /dev/null
+++ b/ChangeLog
@@ -0,0 +1,16 @@
+=========
+CHANGELOG
+=========
+
+1.0.6 - 2015-03-30
+
+ * Fixed type/size checking for array of arrays (ksuszka/patch-1).
+ * Updated README.
+ * Added ChangeLog.
+
+1.0.0 - 2014-06-20
+
+ * Original python bindings included with FANN 2.1.0beta, updated to include
+ support for python 2.6-3.4.
+ * Added pypi package for these bindings.
+ * Added pkgsrc package for these bindings.
diff --git a/INSTALL b/INSTALL
new file mode 100644
index 0000000..7588a3d
--- /dev/null
+++ b/INSTALL
@@ -0,0 +1,17 @@
+INSTRUCTIONS
+
+
+PREREQUISITES
+^^^^^^^^^^^^^
+Make sure you can build and install the FANN 2.2.0 library first:
+(http://sourceforge.net/projects/fann/files/fann/2.2.0/FANN-2.2.0-Source.zip/download)
+
+You also need SWIG and the Python development files installed.
+
+
+BUILDING AND INSTALLING USING DISTUTILS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Once the FANN library is compiled and installed, run the following command:
+
+$ pip install fann2
+
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..8c177f8
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,458 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL. It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+ This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it. You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations below.
+
+ When we speak of free software, we are referring to freedom of use,
+not price. Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+ To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights. These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+ For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you. You must make sure that they, too, receive or can get the source
+code. If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it. And you must show them these terms so they know their rights.
+
+ We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+ To protect each distributor, we want to make it very clear that
+there is no warranty for the free library. Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+
+ Finally, software patents pose a constant threat to the existence of
+any free program. We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder. Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+ Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License. This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+ When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library. The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom. The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+ We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License. It also provides other free software developers Less
+of an advantage over competing non-free programs. These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries. However, the Lesser license provides advantages in certain
+special circumstances.
+
+ For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it becomes
+a de-facto standard. To achieve this, non-free programs must be
+allowed to use the library. A more frequent case is that a free
+library does the same job as widely used non-free libraries. In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+ In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+ Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+ The precise terms and conditions for copying, distribution and
+modification follow. Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+
+ GNU LESSER GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+ A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+ The "Library", below, refers to any such software library or work
+which has been distributed under these terms. A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language. (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+ "Source code" for a work means the preferred form of the work for
+making modifications to it. For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control compilation
+and installation of the library.
+
+ Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it). Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+ 1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+ You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+
+ 2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) The modified work must itself be a software library.
+
+ b) You must cause the files modified to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ c) You must cause the whole of the work to be licensed at no
+ charge to all third parties under the terms of this License.
+
+ d) If a facility in the modified Library refers to a function or a
+ table of data to be supplied by an application program that uses
+ the facility, other than as an argument passed when the facility
+ is invoked, then you must make a good faith effort to ensure that,
+ in the event an application does not supply such function or
+ table, the facility still operates, and performs whatever part of
+ its purpose remains meaningful.
+
+ (For example, a function in a library to compute square roots has
+ a purpose that is entirely well-defined independent of the
+ application. Therefore, Subsection 2d requires that any
+ application-supplied function or table used by this function must
+ be optional: if the application does not supply it, the square
+ root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library. To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License. (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.) Do not make any other change in
+these notices.
+
+ Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+ This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+ 4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+ If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+ However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+ When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library. The
+threshold for this to be true is not precisely defined by law.
+
+ If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work. (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+ Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+
+ 6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+ You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License. You must supply a copy of this License. If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License. Also, you must do one
+of these things:
+
+ a) Accompany the work with the complete corresponding
+ machine-readable source code for the Library including whatever
+ changes were used in the work (which must be distributed under
+ Sections 1 and 2 above); and, if the work is an executable linked
+ with the Library, with the complete machine-readable "work that
+ uses the Library", as object code and/or source code, so that the
+ user can modify the Library and then relink to produce a modified
+ executable containing the modified Library. (It is understood
+ that the user who changes the contents of definitions files in the
+ Library will not necessarily be able to recompile the application
+ to use the modified definitions.)
+
+ b) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (1) uses at run time a
+ copy of the library already present on the user's computer system,
+ rather than copying library functions into the executable, and (2)
+ will operate properly with a modified version of the library, if
+ the user installs one, as long as the modified version is
+ interface-compatible with the version that the work was made with.
+
+ c) Accompany the work with a written offer, valid for at
+ least three years, to give the same user the materials
+ specified in Subsection 6a, above, for a charge no more
+ than the cost of performing this distribution.
+
+ d) If distribution of the work is made by offering access to copy
+ from a designated place, offer equivalent access to copy the above
+ specified materials from the same place.
+
+ e) Verify that the user has already received a copy of these
+ materials or that you have already sent this user a copy.
+
+ For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+ It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+
+ 7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+ a) Accompany the combined library with a copy of the same work
+ based on the Library, uncombined with any other library
+ facilities. This must be distributed under the terms of the
+ Sections above.
+
+ b) Give prominent notice with the combined library of the fact
+ that part of it is a work based on the Library, and explaining
+ where to find the accompanying uncombined form of the same work.
+
+ 8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License. Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License. However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+ 9. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Library or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+ 10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+
+ 11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all. For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any
+particular circumstance, the balance of the section is intended to apply,
+and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License may add
+an explicit geographical distribution limitation excluding those countries,
+so that distribution is permitted only in or among countries not thus
+excluded. In such case, this License incorporates the limitation as if
+written in the body of this License.
+
+ 13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation. If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+
+ 14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this. Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+ NO WARRANTY
+
+ 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+ END OF TERMS AND CONDITIONS
diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..8686727
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1,3 @@
+include LICENSE
+graft include
+graft fann2
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000..a6be3f4
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,125 @@
+======
+README
+======
+
+
+fann2
+=====
+
+Python bindings for the Fast Artificial Neural Network library (FANN >= 2.2.0).
+These are the original Python bindings included with FANN 2.1.0beta, updated to
+include support for Python 2.6-3.4.
+
+
+DESCRIPTION
+===========
+
+This is a Python binding for the Fast Artificial Neural Network library (FANN >=
+2.2.0), which implements multilayer artificial neural networks with support for
+both fully-connected and sparsely-connected networks. It includes a framework
+for easy handling of training data sets. It is easy to use, versatile,
+well-documented, and fast.
+
+FANN 2.2.0 source
+-----------------
+
+- http://sourceforge.net/projects/fann/files/fann/2.2.0/FANN-2.2.0-Source.zip/download
+
+
+INSTALLATION
+============
+
+You can install fann2 from pkgsrc or from PyPI, using either pip or
+easy_install:
+
+pypi
+----
+
+
+ $ pip install fann2
+
+
+or
+
+
+ $ easy_install fann2
+
+pkgsrc
+------
+
+
+Source installation
+...................
+
+Get and install pkgsrc. See the `pkgsrc documentation
+<http://pkgsrc.org/#index4h1>`_ for platform-specific information.
+
+    cd ${PKGSRCDIR}/devel/py-fann2
+
+    bmake install
+
+
+From binaries
+.............
+
+Get and install pkgsrc. See the `pkgsrc quickstart
+<http://pkgsrc.org/#index1h1>`_ for platform-specific information.
+
+    pkgin -y install py-fann2
+
+
+USAGE
+=====
+Just
+
+
+ >>> from fann2 import libfann
+
+
+and then create libfann.neural_net and libfann.training_data objects
+
+
+ >>> ann = libfann.neural_net()
+
+ >>> train_data = libfann.training_data()
+
+
+Look at the examples at the FANN documentation and its C++ bindings for further
+reference.
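
To make the usage section above slightly more concrete, here is a hedged sketch (an editorial example, not part of the upstream README) of building a small fully-connected network with the fann2 API. It assumes fann2 and the underlying FANN C library are installed, so the import is guarded; the method names follow the FANN C++ bindings that this package wraps.

```python
# Editorial sketch: assumes the fann2 package (and the FANN C library)
# are installed. The import is guarded so the snippet still loads
# without them; method names follow FANN's C++ bindings.
try:
    from fann2 import libfann
except ImportError:  # fann2 not available in this environment
    libfann = None

def build_network(layers=(2, 3, 1)):
    """Create a fully-connected net, e.g. 2 inputs, 3 hidden, 1 output."""
    if libfann is None:
        return None
    ann = libfann.neural_net()
    ann.create_standard_array(list(layers))
    ann.set_activation_function_hidden(libfann.SIGMOID_SYMMETRIC)
    ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC)
    return ann

if __name__ == "__main__":
    ann = build_network()
    if ann is not None:
        # One forward pass on a single input pattern
        print(ann.run([1.0, -1.0]))
```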
+
+
+LICENSE
+=======
+
+As with the original python bindings, this package is distributed under the
+terms of the GNU Lesser General Public License, Version 2.1. See LICENSE for
+full terms and conditions.
+
+
+LINKS
+=====
+
+`fann2 on pypi
+<https://pypi.python.org/pypi/fann2>`_.
+
+`py-fann2 in pkgsrc
+<http://pkgsrc.se/devel/py-fann2>`_.
+
+`FANN
+<http://leenissen.dk/fann/>`_.
+
+`pkgsrc
+<http://pkgsrc.org/>`_.
+
+
+CONTACT
+=======
+
+Send us your patches and pull requests! We will release as often as changes
+are received and integrated. There is no reason to have countless branches of
+this package; consider this the official, actively maintained one.
+
+The pkgsrc package is maintained by us as well. We are active users of FANN and
+fann2. If you don't have or want a github account, send your patches for this
+package or the pkgsrc version to pkgsrc at futurelinkcorporation.com.
diff --git a/fann2/__init__.py b/fann2/__init__.py
new file mode 100644
index 0000000..c6a7d78
--- /dev/null
+++ b/fann2/__init__.py
@@ -0,0 +1,7 @@
+#
+# Fast Artificial Neural Network library for Python
+#
+from fann2 import libfann
+__all__ = [
+ 'libfann'
+]
diff --git a/fann2/fann2.i b/fann2/fann2.i
new file mode 100644
index 0000000..aba8c23
--- /dev/null
+++ b/fann2/fann2.i
@@ -0,0 +1,206 @@
+/* File : fann.i */
+%module libfann
+
+%include "typemaps.i"
+%include "stl.i"
+
+%{
+#include "doublefann.h"
+#include "fann_io.h"
+#include "fann_train.h"
+#include "fann_data.h"
+#include "fann_cascade.h"
+#include "fann_error.h"
+#include "fann_activation.h"
+#include "fann_cpp_subclass.h"
+%}
+
+%define HELPER_ARRAY_TEMPLATE( templ , T, GetFunc, SetFunc, cast)
+ %typemap(in) templ<T> * (templ<T> temp){
+ // templ<T>* type_map in
+ int i;
+ if (!PySequence_Check($input)) {
+ PyErr_SetString(PyExc_ValueError,"Expected a sequence");
+ SWIG_fail;
+ }
+ if (PySequence_Length($input) == 0) {
+ PyErr_SetString(PyExc_ValueError,"Size mismatch. Expected some elements");
+ SWIG_fail;
+ }
+ $1=&temp;
+ $1->array_len=PySequence_Length($input);
+ $1->array = (T *) malloc($1->array_len*sizeof(T));
+ for (i = 0; i < PySequence_Length($input); i++) {
+ PyObject *o = PySequence_GetItem($input,i);
+ if (PyNumber_Check(o)) {
+ $1->array[i] = (T) GetFunc(o);
+ } else {
+ PyErr_SetString(PyExc_ValueError,"Sequence elements must be numbers");
+ Py_DECREF(o);
+ SWIG_fail;
+ }
+ Py_DECREF(o);
+ }
+ }
+%typemap(freearg) templ<T>* {
+ // templ<T>* type_map freearg
+ if ($1 && $1->array && $1->can_delete)
+ {
+ free($1->array);
+ }
+}
+
+%typemap(out) templ<T>* {
+ // templ* type_map out
+ $result= PyList_New( $1->array_len );
+ for (unsigned int i = 0; i < $1->array_len; i++)
+ {
+ PyObject *o = SetFunc( (cast) $1->array[i]);
+ PyList_SetItem($result,i,o);
+ }
+ if ($1 && $1->array && $1->can_delete)
+ {
+ free($1->array);
+ }
+ if ($1) delete $1;
+
+}
+
+%typemap(argout) templ<T>* ARGOUT{
+ // templ* type_map out
+ $result= PyList_New( $1->array_len );
+ for (unsigned int i = 0; i < $1->array_len; i++)
+ {
+ PyObject *o = SetFunc( (cast) $1->array[i]);
+ PyList_SetItem($result,i,o);
+ }
+ if ($1 && $1->array && $1->can_delete)
+ {
+ free($1->array);
+ }
+ if ($1) delete $1;
+}
+
+%enddef
+
+%define HELPER_ARRAY_ARRAY_TEMPLATE(templ, T, GetFunc, SetFunc, cast)
+%typemap(in) templ< T >* ( templ<T> temp) {
+ // templ<T>* type_map
+ unsigned int i;
+ unsigned int j;
+ unsigned int dim;
+ unsigned int num;
+ if (!PySequence_Check($input)) {
+ PyErr_SetString(PyExc_ValueError,"Expected a sequence");
+ SWIG_fail;
+ }
+ if (PySequence_Length($input) == 0) {
+ PyErr_SetString(PyExc_ValueError,"Size mismatch. Expected some elements");
+ SWIG_fail;
+ }
+ $1=&temp;
+ num=PySequence_Length($input);
+ $1->array_num=num;
+
+ PyObject* o0=PySequence_GetItem($input,0);
+ if (!PySequence_Check(o0)) {
+ PyErr_SetString(PyExc_ValueError,"Expected an inner sequence");
+ Py_DECREF(o0);
+ SWIG_fail;
+ }
+ dim=PySequence_Length(o0);
+ Py_DECREF(o0);
+
+ $1->array_len=dim;
+ $1->arrays = (T **) calloc(num,sizeof(T*));
+
+ for (j = 0; j< num; j++)
+ {
+ PyObject* o1=PySequence_GetItem($input,j);
+ if (!PySequence_Check(o1)) {
+ PyErr_SetString(PyExc_ValueError,"Expected an inner sequence");
+ Py_DECREF(o1);
+ SWIG_fail;
+ }
+ if ((unsigned int)PySequence_Length(o1) != dim) {
+ PyErr_SetString(PyExc_ValueError,"Size mismatch. All items must be of the same size");
+ Py_DECREF(o1);
+ SWIG_fail;
+ }
+ $1->arrays[j] = (T*) malloc(dim*sizeof(T));
+ for (i = 0; i < dim; i++) {
+ PyObject *o = PySequence_GetItem(o1,i);
+ if (PyNumber_Check(o)) {
+ $1->arrays[j][i] = (T) GetFunc(o);
+ } else {
+ PyErr_SetString(PyExc_ValueError,"Sequence elements must be numbers");
+ Py_DECREF(o);
+ Py_DECREF(o1);
+ SWIG_fail;
+ }
+ Py_DECREF(o);
+ }
+ Py_DECREF(o1);
+ }
+}
+%typemap(freearg) templ< T >* {
+ // templ* type_map freearg
+ unsigned int i;
+ if ($1 && $1->arrays && $1->can_delete)
+ {
+ for (i=0; i < $1->array_num;++i)
+ if ($1->arrays[i])
+ free($1->arrays[i]);
+ free($1->arrays);
+ }
+}
+%typemap(out) templ<T>* {
+ // templ* type_map out
+ $result= PyList_New( $1->array_num );
+ for (unsigned int j = 0; j < $1->array_num; ++j)
+ {
+ PyObject *l= PyList_New( $1->array_len );
+ PyList_SetItem($result,j,l);
+ for (unsigned int i = 0; i < $1->array_len; i++)
+ {
+ PyObject *o = SetFunc($1->arrays[j][i] );
+ //PyObject *o = SetFunc($1->arrays[i][j] );
+ PyList_SetItem(l,i,o);
+ }
+ }
+ unsigned int i;
+ if ($1 && $1->arrays && $1->can_delete)
+ {
+ for (i=0; i < $1->array_num;++i)
+ if ($1->arrays[i])
+ free($1->arrays[i]);
+ free($1->arrays);
+ }
+ if ($1) delete $1;
+}
+%enddef
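The input typemaps above enforce a few invariants before copying a Python sequence into a C array: the argument must be a sequence, it must be non-empty, and in the nested case every row must have the same length and contain only numbers. A rough Python model of those checks (the helper name `to_2d_array` is illustrative, not part of the generated wrapper):

```python
def to_2d_array(seq):
    """Mimic the validation HELPER_ARRAY_ARRAY_TEMPLATE performs on input.

    Returns a list of equal-length float rows, or raises ValueError with
    the same messages the typemap sets before SWIG_fail.
    """
    if not isinstance(seq, (list, tuple)):
        raise ValueError("Expected a sequence")
    if len(seq) == 0:
        raise ValueError("Size mismatch. Expected some elements")
    dim = None
    out = []
    for row in seq:
        if not isinstance(row, (list, tuple)):
            raise ValueError("Expected an inner sequence")
        if dim is None:
            dim = len(row)
        elif len(row) != dim:
            raise ValueError("Size mismatch. All items must be of the same size")
        converted = []
        for x in row:
            # Corresponds to the PyNumber_Check guard in the typemap
            if not isinstance(x, (int, float)):
                raise ValueError("Sequence elements must be numbers")
            converted.append(float(x))
        out.append(converted)
    return out
```

The real typemap additionally manages reference counts (`Py_DECREF`) and mallocs the C arrays; this sketch only covers the shape and type checks.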
+
+%import "../include/doublefann.h"
+%import "../include/fann.h"
+%import "../include/fann_io.h"
+%import "../include/fann_train.h"
+%import "../include/fann_data.h"
+%import "../include/fann_cascade.h"
+%import "../include/fann_error.h"
+%import "../include/fann_activation.h"
+
+HELPER_ARRAY_TEMPLATE( FANN::helper_array, unsigned int, PyInt_AsLong , PyInt_FromLong , long );
+HELPER_ARRAY_TEMPLATE( FANN::helper_array, fann_type , PyFloat_AsDouble, PyFloat_FromDouble, double );
+
+HELPER_ARRAY_ARRAY_TEMPLATE( FANN::helper_array_array, fann_type , PyFloat_AsDouble, PyFloat_FromDouble, double );
+
+%rename(neural_net_parent) FANN::neural_net;
+%rename(neural_net) FANN::Neural_net;
+
+%rename(training_data_parent) FANN::training_data;
+%rename(training_data) FANN::Training_data;
+
+%include "../include/fann_cpp.h"
+%include "fann_cpp_subclass.h"
+
+/* ex: set ts=4: set sw=4: set cin */
diff --git a/fann2/fann_cpp_subclass.h b/fann2/fann_cpp_subclass.h
new file mode 100644
index 0000000..0222508
--- /dev/null
+++ b/fann2/fann_cpp_subclass.h
@@ -0,0 +1,580 @@
+#ifndef FANN_CPP_SUBCLASS_H_INCLUDED
+#define FANN_CPP_SUBCLASS_H_INCLUDED
+
+#include <stdarg.h>
+#include <string>
+#include <fann_cpp.h>
+
+#include <iostream>
+/* Namespace: FANN
+ The FANN namespace groups the C++ wrapper definitions */
+namespace FANN
+{
+
+ template <typename T>
+ class helper_array
+ {
+ public:
+ helper_array()
+ {
+ array=0;
+ array_len=0;
+ can_delete=true;
+ }
+ void set (T * array, unsigned int len)
+ {
+ this->array=array;
+ this->array_len=len;
+ }
+ T* array;
+ unsigned int array_len;
+ bool can_delete;
+ };
+
+ template <typename T>
+ class helper_array_array
+ {
+ public:
+ helper_array_array()
+ {
+ arrays=0;
+ array_len=0;
+ array_num=0;
+ can_delete=false;
+ }
+ void set (T ** arrays, unsigned int len, unsigned int num)
+ {
+ this->arrays=arrays;
+ this->array_len=len;
+ this->array_num=num;
+ }
+ T** arrays;
+ unsigned int array_len;
+ unsigned int array_num;
+ bool can_delete;
+ };
+
+ /* Forward declaration of class neural_net and training_data */
+ class Neural_net;
+ class Training_data;
+
+
+ /*************************************************************************/
+
+ /* Class: training_data
+
+ Encapsulation of a training data set <struct fann_train_data> and
+ associated C API functions.
+ */
+ class Training_data : public training_data
+ {
+ public:
+ /* Constructor: training_data
+
+ Default constructor creates an empty training data set.
+ Use <read_train_from_file>, <set_train_data> or <create_train_from_callback> to initialize.
+ */
+ Training_data() : training_data()
+ {
+ }
+
+ /* Constructor: training_data
+
+ Copy constructor constructs a copy of the training data.
+ Corresponds to the C API <fann_duplicate_train_data> function.
+ */
+ Training_data(const Training_data &data)
+ {
+ destroy_train();
+ if (data.train_data != NULL)
+ {
+ train_data = fann_duplicate_train_data(data.train_data);
+ }
+ }
+
+ /* Destructor: ~training_data
+
+ Provides automatic cleanup of data.
+ Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual.
+
+ See also:
+ <destroy>
+ */
+#ifdef USE_VIRTUAL_DESTRUCTOR
+ virtual
+#endif
+ ~Training_data()
+ {
+ destroy_train();
+ }
+
+
+
+ /* Grant access to the encapsulated data since many situations
+ and applications create the data from sources other than files
+ or use the training data for testing and related functions */
+
+ /* Method: get_input
+
+ Returns:
+ A pointer to the array of input training data
+
+ See also:
+ <get_output>, <set_train_data>
+ */
+ helper_array_array<fann_type>* get_input()
+ {
+ if (train_data == NULL)
+ {
+ return NULL;
+ }
+ else
+ {
+ helper_array_array<fann_type>* ret = new helper_array_array<fann_type>;
+
+ ret->arrays=train_data->input;
+ ret->array_num=train_data->num_data;
+ ret->array_len=train_data->num_input;
+ ret->can_delete=false;
+ return ret;
+ }
+ }
+
+ /* Method: get_output
+
+ Returns:
+ A pointer to the array of output training data
+
+ See also:
+ <get_input>, <set_train_data>
+ */
+
+ helper_array_array<fann_type>* get_output()
+ {
+ if (train_data == NULL)
+ {
+ return NULL;
+ }
+ else
+ {
+ helper_array_array<fann_type>* ret = new helper_array_array<fann_type>;
+
+ ret->arrays=train_data->output;
+ ret->array_num=train_data->num_data;
+ ret->array_len=train_data->num_output;
+ ret->can_delete=false;
+ return ret;
+ }
+ }
+
+
+ /* Method: set_train_data
+
+ Set the training data to the input and output data provided.
+
+ A copy of the data is made so there are no restrictions on the
+ allocation of the input/output data and the caller is responsible
+ for the deallocation of the data pointed to by input and output.
+
+ See also:
+ <get_input>, <get_output>
+ */
+
+ void set_train_data(helper_array_array< fann_type >* input,
+ helper_array_array< fann_type >* output)
+ {
+ if (input->array_num!=output->array_num)
+ {
+ std::cerr<<"Error: input and output must contain the same number of patterns!"<<std::endl;
+ return;
+ }
+ input->can_delete=true;
+ output->can_delete=true;
+
+ training_data::set_train_data(input->array_num, input->array_len, input->arrays, output->array_len, output->arrays);
+ }
+
+
+ };
+
+ /*************************************************************************/
+
+ /* Class: Neural_net
+
+ Encapsulation of a neural network <struct fann> and
+ associated C API functions.
+ */
+ class Neural_net : public neural_net
+ {
+ public:
+ /* Constructor: neural_net
+
+ Default constructor creates an empty neural net.
+ Use one of the create functions to create the neural network.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <create_standard_array>, <create_sparse_array>, <create_shortcut_array>
+ */
+ Neural_net() : neural_net()
+ {
+ }
+
+ /* Destructor: ~neural_net
+
+ Provides automatic cleanup of data.
+ Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual.
+
+ See also:
+ <destroy>
+ */
+#ifdef USE_VIRTUAL_DESTRUCTOR
+ virtual
+#endif
+ ~Neural_net()
+ {
+ destroy();
+ }
+
+
+ /* Method: create_standard_array
+
+ Just like <create_standard>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_standard>
+
+ This function appears in FANN >= 2.0.0.
+ */
+
+ bool create_standard_array( helper_array<unsigned int>* layers)
+ {
+ return neural_net::create_standard_array( layers->array_len, layers->array);
+ }
+
+ /* Method: create_sparse_array
+ Just like <create_sparse>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <create_sparse> for a description of the parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_sparse_array>
+
+ This function appears in FANN >= 2.0.0.
+ */
+
+ bool create_sparse_array(float connection_rate,
+ helper_array<unsigned int>* layers)
+ {
+ return neural_net::create_sparse_array( connection_rate, layers->array_len, layers->array);
+ }
+
+ /* Method: create_shortcut_array
+
+ Just like <create_shortcut>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <create_standard_array> for a description of the parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_shortcut_array>
+
+ This function appears in FANN >= 2.0.0.
+ */
+
+ bool create_shortcut_array( helper_array<unsigned int>* layers)
+ {
+ return neural_net::create_shortcut_array( layers->array_len, layers->array);
+ }
+
+ /* Method: run
+
+ Will run input through the neural network, returning an array of outputs, the number of which
+ is equal to the number of neurons in the output layer.
+
+ See also:
+ <test>, <fann_run>
+
+ This function appears in FANN >= 1.0.0.
+ */
+
+ helper_array<fann_type>* run(helper_array<fann_type> *input)
+ {
+ if (ann == NULL || input->array_len!=ann->num_input)
+ {
+ return NULL;
+ }
+ helper_array<fann_type>* res= new helper_array<fann_type>;
+ res->array=fann_run(ann, input->array);
+ res->array_len=ann->num_output;
+ res->can_delete=false;
+ return res;
+ }
+
+
+
+#ifndef FIXEDFANN
+ /* Method: train
+
+ Train one iteration with a set of inputs, and a set of desired outputs.
+ This training is always incremental training (see <FANN::training_algorithm_enum>),
+ since only one pattern is presented.
+
+ Parameters:
+ ann - The neural network structure
+ input - an array of inputs. This array must be exactly <fann_get_num_input> long.
+ desired_output - an array of desired outputs. This array must be exactly <fann_get_num_output> long.
+
+ See also:
+ <train_on_data>, <train_epoch>, <fann_train>
+
+ This function appears in FANN >= 1.0.0.
+ */
+
+ void train(helper_array<fann_type> *input, helper_array<fann_type> *desired_output)
+ {
+ if (ann != NULL && input->array_len==ann->num_input && desired_output->array_len==ann->num_output)
+ {
+ fann_train(ann, input->array, desired_output->array);
+ }
+ }
+
+#endif /* NOT FIXEDFANN */
+
+ /* Method: test
+
+ Test with a set of inputs, and a set of desired outputs.
+ This operation updates the mean square error, but does not
+ change the network in any way.
+
+ See also:
+ <test_data>, <train>, <fann_test>
+
+ This function appears in FANN >= 1.0.0.
+ */
+
+ helper_array<fann_type>* test(helper_array<fann_type> *input, helper_array<fann_type>* desired_output)
+ {
+ if (ann == NULL)
+ {
+ return NULL;
+ }
+ helper_array<fann_type>* res= new helper_array<fann_type>;
+ res->array=fann_test(ann, input->array, desired_output->array);
+ res->array_len=ann->num_output;
+ res->can_delete=false;
+ return res;
+ }
+
+
+ /*************************************************************************************************************/
+
+
+ /* Method: get_layer_array
+
+ Get the number of neurons in each layer in the network.
+
+ Bias is not included so the layers match the create methods.
+
+ The layers array must be preallocated to at least
+ sizeof(unsigned int) * get_num_layers() long.
+
+ See also:
+ <fann_get_layer_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+
+ void get_layer_array(helper_array<unsigned int>* ARGOUT)
+ {
+ if (ann != NULL)
+ {
+ ARGOUT->array_len = fann_get_num_layers(ann);
+ ARGOUT->array = (unsigned int*) malloc(sizeof(unsigned int)*
+ ARGOUT->array_len);
+ fann_get_layer_array(ann, ARGOUT->array);
+ }
+ }
+
+ /* Method: get_bias_array
+
+ Get the number of bias neurons in each layer in the network.
+
+ The bias array must be preallocated to at least
+ sizeof(unsigned int) * get_num_layers() long.
+
+ See also:
+ <fann_get_bias_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void get_bias_array(helper_array<unsigned int>* ARGOUT)
+ {
+ if (ann != NULL)
+ {
+ ARGOUT->array_len = fann_get_num_layers(ann);
+ ARGOUT->array = (unsigned int*) malloc(sizeof(unsigned int)*
+ ARGOUT->array_len);
+ fann_get_bias_array(ann, ARGOUT->array);
+ }
+ }
+
+ /* Method: get_connection_array
+
+ Get the connections in the network.
+
+ The connections array must be preallocated to at least
+ sizeof(struct fann_connection) * get_total_connections() long.
+
+ See also:
+ <fann_get_connection_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+
+ void get_connection_array(helper_array<connection> *ARGOUT)
+ {
+ if (ann != NULL)
+ {
+ ARGOUT->array_len = fann_get_total_connections(ann);
+ ARGOUT->array = (connection*) malloc(sizeof(connection)*
+ ARGOUT->array_len);
+ fann_get_connection_array(ann, ARGOUT->array);
+ }
+ }
+ /* Method: set_weight_array
+
+ Set connections in the network.
+
+ Only the weights can be changed; connections and weights are ignored
+ if they do not already exist in the network.
+
+ The array must have sizeof(struct fann_connection) * num_connections size.
+
+ See also:
+ <fann_set_weight_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void set_weight_array(helper_array<connection> *connections)
+ {
+ if (ann != NULL)
+ {
+ fann_set_weight_array(ann, connections->array, connections->array_len);
+ }
+ }
+
+ /*********************************************************************/
+
+#ifdef TODO
+ /* Method: get_cascade_activation_functions
+
+ The cascade activation functions array is an array of the different activation functions used by
+ the candidates.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_functions_count>, <set_cascade_activation_functions>,
+ <FANN::activation_function_enum>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ activation_function_enum * get_cascade_activation_functions()
+ {
+ enum fann_activationfunc_enum *activation_functions = NULL;
+ if (ann != NULL)
+ {
+ activation_functions = fann_get_cascade_activation_functions(ann);
+ }
+ return reinterpret_cast<activation_function_enum *>(activation_functions);
+ }
+
+ /* Method: set_cascade_activation_functions
+
+ Sets the array of cascade candidate activation functions. The array must be just as long
+ as defined by the count.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_steepnesses_count>, <get_cascade_activation_steepnesses>,
+ <fann_set_cascade_activation_functions>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_activation_functions(activation_function_enum *cascade_activation_functions,
+ unsigned int cascade_activation_functions_count)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_activation_functions(ann,
+ reinterpret_cast<enum fann_activationfunc_enum *>(cascade_activation_functions),
+ cascade_activation_functions_count);
+ }
+ }
+#endif
+ /* Method: get_cascade_activation_steepnesses
+
+ The cascade activation steepnesses array is an array of the different activation steepnesses used by
+ the candidates.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ The default activation steepnesses are {0.25, 0.50, 0.75, 1.00}.
+
+ See also:
+ <set_cascade_activation_steepnesses>, <get_cascade_activation_steepnesses_count>,
+ <fann_get_cascade_activation_steepnesses>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ helper_array<fann_type> *get_cascade_activation_steepnesses()
+ {
+ helper_array<fann_type> *activation_steepnesses = NULL;
+ if (ann != NULL)
+ {
+ activation_steepnesses = new helper_array<fann_type>;
+ activation_steepnesses->array_len = fann_get_cascade_activation_steepnesses_count(ann);
+ activation_steepnesses->array = fann_get_cascade_activation_steepnesses(ann);
+ /* The steepnesses array is owned by the network; do not free it. */
+ activation_steepnesses->can_delete = false;
+ }
+ return activation_steepnesses;
+ }
+
+ /* Method: set_cascade_activation_steepnesses
+
+ Sets the array of cascade candidate activation steepnesses. The array must be just as long
+ as defined by the count.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_steepnesses>, <get_cascade_activation_steepnesses_count>,
+ <fann_set_cascade_activation_steepnesses>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_activation_steepnesses(helper_array<fann_type> *cascade_activation_steepnesses)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_activation_steepnesses(ann,
+ cascade_activation_steepnesses->array, cascade_activation_steepnesses->array_len);
+ }
+ }
+
+
+ };
+
+ /*************************************************************************/
+};
+
+#endif /* FANN_CPP_SUBCLASS_H_INCLUDED */
diff --git a/include/compat_time.h b/include/compat_time.h
new file mode 100644
index 0000000..55a6197
--- /dev/null
+++ b/include/compat_time.h
@@ -0,0 +1,141 @@
+/*
+
+Originally timeval.h by Wu Yongwei
+
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/*
+ * timeval.h 1.0 01/12/19
+ *
+ * Defines gettimeofday, timeval, etc. for Win32
+ *
+ * By Wu Yongwei
+ *
+ */
+
+#ifndef _TIMEVAL_H
+#define _TIMEVAL_H
+
+#ifdef _WIN32
+
+/* Modified to compile as ANSI C without include of windows.h
+ If this gives problems with future Windows/MSC versions, then
+ uncomment the USE_WINDOWS_H definition to switch back. */
+/* #define USE_WINDOWS_H */
+#ifdef USE_WINDOWS_H
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+#else /* USE_WINDOWS_H */
+#ifndef _INC_WINDOWS
+#define VOID void
+#define WINAPI __stdcall
+#define OUT
+#define WINBASEAPI
+typedef long LONG;
+typedef unsigned long DWORD;
+typedef __int64 LONGLONG;
+typedef struct _FILETIME
+{
+    DWORD dwLowDateTime;
+    DWORD dwHighDateTime;
+} FILETIME, *LPFILETIME;
+typedef union _LARGE_INTEGER
+{
+    /* Removed unnamed struct,
+     * it is not ANSI C compatible */
+    /* struct {
+     *     DWORD LowPart;
+     *     LONG HighPart;
+     * }; */
+    struct
+    {
+        DWORD LowPart;
+        LONG HighPart;
+    } u;
+    LONGLONG QuadPart;
+} LARGE_INTEGER;
+
+WINBASEAPI VOID WINAPI GetSystemTimeAsFileTime(OUT LPFILETIME lpSystemTimeAsFileTime);
+
+#endif /* _INC_WINDOWS */
+#endif /* USE_WINDOWS_H */
+
+#include <time.h>
+
+#ifndef __GNUC__
+#define EPOCHFILETIME (116444736000000000i64)
+#else /* __GNUC__ */
+#define EPOCHFILETIME (116444736000000000LL)
+#endif /* __GNUC__ */
+
+struct timeval
+{
+    long tv_sec;                /* seconds */
+    long tv_usec;               /* microseconds */
+};
+
+struct timezone
+{
+    int tz_minuteswest;         /* minutes W of Greenwich */
+    int tz_dsttime;             /* type of dst correction */
+};
+
+__inline int gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+    FILETIME ft;
+    LARGE_INTEGER li;
+    __int64 t;
+    static int tzflag;
+
+    if (tv)
+    {
+        GetSystemTimeAsFileTime(&ft);
+        /* The following two lines have been modified to use the named
+         * union member. Unnamed members are not ANSI C compatible. */
+        li.u.LowPart = ft.dwLowDateTime;
+        li.u.HighPart = ft.dwHighDateTime;
+        t = li.QuadPart;        /* In 100-nanosecond intervals */
+        t -= EPOCHFILETIME;     /* Offset to the Epoch time */
+        t /= 10;                /* In microseconds */
+        tv->tv_sec = (long) (t / 1000000);
+        tv->tv_usec = (long) (t % 1000000);
+    }
+    if (tz)
+    {
+        if (!tzflag)
+        {
+            _tzset();
+            tzflag++;
+        }
+        tz->tz_minuteswest = _timezone / 60;
+        tz->tz_dsttime = _daylight;
+    }
+    return 0;
+}
+
+
+#else /* _WIN32 */
+
+#include <sys/time.h>
+
+#endif /* _WIN32 */
+
+#endif /* _TIMEVAL_H */
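The arithmetic inside the compat `gettimeofday` converts a Windows FILETIME (100-nanosecond ticks since 1601-01-01) to a Unix `timeval` by subtracting `EPOCHFILETIME` and dividing by 10. A standalone Python check of that conversion (no Windows API involved; `filetime_to_timeval` is an illustrative name):

```python
# 1970-01-01 expressed in 100-ns ticks since 1601-01-01, as in compat_time.h
EPOCHFILETIME = 116444736000000000

def filetime_to_timeval(filetime):
    """Same steps as the compat gettimeofday: offset to the Unix epoch,
    convert 100-ns ticks to microseconds, then split into sec/usec."""
    t = filetime - EPOCHFILETIME
    t //= 10                      # 100-ns intervals -> microseconds
    return t // 1000000, t % 1000000
```

The Unix epoch itself maps to `(0, 0)`, and 15,000,000 ticks (1.5 s) later maps to `(1, 500000)`.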
diff --git a/include/config.h b/include/config.h
new file mode 100644
index 0000000..f2fb1c8
--- /dev/null
+++ b/include/config.h
@@ -0,0 +1,8 @@
+/* Name of package */
+/* #undef PACKAGE */
+
+/* Version number of package */
+#define VERSION "2.2.0"
+
+/* Define for the x86_64 CPU family */
+/* #undef X86_64 */
diff --git a/include/doublefann.h b/include/doublefann.h
new file mode 100644
index 0000000..891420e
--- /dev/null
+++ b/include/doublefann.h
@@ -0,0 +1,33 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __doublefann_h__
+#define __doublefann_h__
+
+typedef double fann_type;
+
+#undef DOUBLEFANN
+#define DOUBLEFANN
+#define FANNPRINTF "%.20e"
+#define FANNSCANF "%le"
+
+#define FANN_INCLUDE
+#include "fann.h"
+
+#endif
diff --git a/include/fann.h b/include/fann.h
new file mode 100644
index 0000000..8531dd3
--- /dev/null
+++ b/include/fann.h
@@ -0,0 +1,613 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* This file defines the user interface to the fann library.
+ It is included from fixedfann.h, floatfann.h and doublefann.h and should
+ NOT be included directly. If included directly it will react as if
+ floatfann.h was included.
+*/
+
+/* Section: FANN Creation/Execution
+
+ The FANN library is designed to be very easy to use.
+ A feedforward ann can be created by a simple <fann_create_standard> function, while
+ other ANNs can be created just as easily. The ANNs can be trained by <fann_train_on_file>
+ and executed by <fann_run>.
+
+ All of this can be done without much knowledge of the internals of ANNs, although the ANNs created will
+ still be powerful and effective. If you have more knowledge about ANNs, and desire more control, almost
+ every part of the ANNs can be parameterized to create specialized and highly optimized ANNs.
+ */
+/* Group: Creation, Destruction & Execution */
+
+#ifndef FANN_INCLUDE
+/* just to allow for inclusion of fann.h in normal situations where only floats are needed */
+#ifdef FIXEDFANN
+#include "fixedfann.h"
+#else
+#include "floatfann.h"
+#endif /* FIXEDFANN */
+
+#else
+
+/* COMPAT_TIME REPLACEMENT */
+#ifndef _WIN32
+#include <sys/time.h>
+#else /* _WIN32 */
+#if !defined(_MSC_EXTENSIONS) && !defined(_INC_WINDOWS)
+extern unsigned long __stdcall GetTickCount(void);
+
+#else /* _MSC_EXTENSIONS */
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+#endif /* _MSC_EXTENSIONS */
+#endif /* _WIN32 */
+
+#ifndef __fann_h__
+#define __fann_h__
+
+#ifdef __cplusplus
+extern "C"
+{
+
+#ifndef __cplusplus
+} /* to fool automatic indentation engines */
+#endif
+#endif /* __cplusplus */
+
+#ifndef NULL
+#define NULL 0
+#endif /* NULL */
+
+/* ----- Macros used to define DLL external entrypoints ----- */
+/*
+ DLL Export, import and calling convention for Windows.
+ Only defined for Microsoft VC++ FANN_EXTERNAL indicates
+ that a function will be exported/imported from a dll
+ FANN_API ensures that the DLL calling convention
+ will be used for a function regardless of the calling convention
+ used when compiling.
+
+ For a function to be exported from a DLL its prototype and
+ declaration must be like this:
+ FANN_EXTERNAL void FANN_API function(char *argument)
+
+ The following ifdef block is a way of creating macros which
+ make exporting from a DLL simple. All files within a DLL are
+ compiled with the FANN_DLL_EXPORTS symbol defined on the
+ command line. This symbol should not be defined on any project
+ that uses this DLL. This way any other project whose source
+ files include this file see FANN_EXTERNAL functions as being imported
+ from a DLL, whereas a DLL sees symbols defined with this
+ macro as being exported which makes calls more efficient.
+ The __stdcall calling convention is used for functions in a
+ windows DLL.
+
+ The callback functions for fann_set_callback must be declared as FANN_API
+ so the DLL and the application program both use the same
+ calling convention.
+*/
+
+/*
+ The following sets the default for MSVC++ 2003 or later to use
+ the fann dll's. To use a lib or fixedfann.c, floatfann.c or doublefann.c
+ with those compilers FANN_NO_DLL has to be defined before
+ including the fann headers.
+ The default for previous MSVC compilers such as VC++ 6 is not
+ to use dll's. To use dll's FANN_USE_DLL has to be defined before
+ including the fann headers.
+*/
+#if (_MSC_VER > 1300)
+#ifndef FANN_NO_DLL
+#define FANN_USE_DLL
+#endif /* FANN_USE_LIB */
+#endif /* _MSC_VER */
+#if defined(_MSC_VER) && (defined(FANN_USE_DLL) || defined(FANN_DLL_EXPORTS))
+#ifdef FANN_DLL_EXPORTS
+#define FANN_EXTERNAL __declspec(dllexport)
+#else /* */
+#define FANN_EXTERNAL __declspec(dllimport)
+#endif /* FANN_DLL_EXPORTS*/
+#define FANN_API __stdcall
+#else /* */
+#define FANN_EXTERNAL
+#define FANN_API
+#endif /* _MSC_VER */
+/* ----- End of macros used to define DLL external entrypoints ----- */
+
+#include "fann_error.h"
+#include "fann_activation.h"
+#include "fann_data.h"
+#include "fann_internal.h"
+#include "fann_train.h"
+#include "fann_cascade.h"
+#include "fann_io.h"
+
+/* Function: fann_create_standard
+
+ Creates a standard fully connected backpropagation neural network.
+
+ There will be a bias neuron in each layer (except the output layer),
+ and this bias neuron will be connected to all neurons in the next layer.
+ When running the network, the bias neurons always emit 1.
+
+ To destroy a <struct fann> use the <fann_destroy> function.
+
+ Parameters:
+ num_layers - The total number of layers including the input and the output layer.
+ ... - Integer values determining the number of neurons in each layer starting with the
+ input layer and ending with the output layer.
+
+ Returns:
+ A pointer to the newly created <struct fann>.
+
+ Example:
+ > // Creating an ANN with 2 input neurons, 1 output neuron,
+ > // and two hidden layers with 8 and 9 neurons
+ > struct fann *ann = fann_create_standard(4, 2, 8, 9, 1);
+
+ See also:
+ <fann_create_standard_array>, <fann_create_sparse>, <fann_create_shortcut>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_standard(unsigned int num_layers, ...);
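Since every layer except the output carries one bias neuron wired to all neurons in the next layer, the connection count of a fully connected net follows directly from the layer sizes. A back-of-envelope Python sketch of that bookkeeping (the helper name is illustrative; the authoritative number comes from `fann_get_total_connections`):

```python
def standard_connection_count(layers):
    """Connections in a fully connected feedforward net where each layer
    except the output has one extra bias neuron feeding the next layer."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))

# For the documented example fann_create_standard(4, 2, 8, 9, 1):
# (2+1)*8 + (8+1)*9 + (9+1)*1 = 24 + 81 + 10 = 115 connections
```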
+
+/* Function: fann_create_standard_array
+ Just like <fann_create_standard>, but with an array of layer sizes
+ instead of individual parameters.
+
+ Example:
+ > // Creating an ANN with 2 input neurons, 1 output neuron,
+ > // and two hidden layers with 8 and 9 neurons
+ > unsigned int layers[4] = {2, 8, 9, 1};
+ > struct fann *ann = fann_create_standard_array(4, layers);
+
+ See also:
+ <fann_create_standard>, <fann_create_sparse>, <fann_create_shortcut>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_standard_array(unsigned int num_layers,
+ const unsigned int *layers);
+
+/* Function: fann_create_sparse
+
+ Creates a standard backpropagation neural network, which is not fully connected.
+
+ Parameters:
+ connection_rate - The connection rate controls how many connections there will be in the
+ network. If the connection rate is set to 1, the network will be fully
+ connected, but if it is set to 0.5 only half of the connections will be set.
+ A connection rate of 1 will yield the same result as <fann_create_standard>.
+ num_layers - The total number of layers including the input and the output layer.
+ ... - Integer values determining the number of neurons in each layer starting with the
+ input layer and ending with the output layer.
+
+ Returns:
+ A pointer to the newly created <struct fann>.
+
+ See also:
+ <fann_create_sparse_array>, <fann_create_standard>, <fann_create_shortcut>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_sparse(float connection_rate,
+ unsigned int num_layers, ...);
+
+
+/* Function: fann_create_sparse_array
+ Just like <fann_create_sparse>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <fann_create_standard_array> for a description of the parameters.
+
+ See also:
+ <fann_create_sparse>, <fann_create_standard>, <fann_create_shortcut>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_sparse_array(float connection_rate,
+ unsigned int num_layers,
+ const unsigned int *layers);
+
+/* Function: fann_create_shortcut
+
+ Creates a standard backpropagation neural network, which is not fully connected and which
+ also has shortcut connections.
+
+ Shortcut connections are connections that skip layers. A fully connected network with shortcut
+ connections is a network where all neurons are connected to all neurons in later layers,
+ including direct connections from the input layer to the output layer.
+
+ See <fann_create_standard> for a description of the parameters.
+
+ See also:
+ <fann_create_shortcut_array>, <fann_create_standard>, <fann_create_sparse>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut(unsigned int num_layers, ...);
+
+/* Function: fann_create_shortcut_array
+ Just like <fann_create_shortcut>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <fann_create_standard_array> for a description of the parameters.
+
+ See also:
+ <fann_create_shortcut>, <fann_create_standard>, <fann_create_sparse>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL struct fann *FANN_API fann_create_shortcut_array(unsigned int num_layers,
+ const unsigned int *layers);
+/* Function: fann_destroy
+ Destroys the entire network, properly freeing all the associated memory.
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_destroy(struct fann *ann);
+
+
+/* Function: fann_copy
+ Creates a copy of a fann structure.
+
+ Data pointed to by the user data pointer (<fann_set_user_data>) is not copied; only the pointer itself is copied.
+
+ This function appears in FANN >= 2.2.0.
+*/
+FANN_EXTERNAL struct fann * FANN_API fann_copy(struct fann *ann);
+
+
+/* Function: fann_run
+ Will run input through the neural network, returning an array of outputs whose length is
+ equal to the number of neurons in the output layer.
+
+ See also:
+ <fann_test>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL fann_type * FANN_API fann_run(struct fann *ann, fann_type * input);
+
+/* Function: fann_randomize_weights
+ Give each connection a random weight between *min_weight* and *max_weight*.
+
+ Initially, the weights are random values between -0.1 and 0.1.
+
+ See also:
+ <fann_init_weights>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_randomize_weights(struct fann *ann, fann_type min_weight,
+ fann_type max_weight);
+
+/* Function: fann_init_weights
+ Initialize the weights using Widrow + Nguyen's algorithm.
+
+ This function behaves similarly to <fann_randomize_weights>. It will use the algorithm developed
+ by Derrick Nguyen and Bernard Widrow to set the weights in such a way
+ as to speed up training. This technique is not always successful, and in some cases can be less
+ efficient than a purely random initialization.
+
+ The algorithm requires access to the range of the input data (i.e., the largest and smallest
+ input values), and therefore accepts a second argument, data, which is the training data that
+ will be used to train the network.
+
+ See also:
+ <fann_randomize_weights>, <fann_read_train_from_file>
+
+ This function appears in FANN >= 1.1.0.
+*/
+FANN_EXTERNAL void FANN_API fann_init_weights(struct fann *ann, struct fann_train_data *train_data);
+
+/* Function: fann_print_connections
+ Will print the connections of the ann in a compact matrix, for easy viewing of the internals
+ of the ann.
+
+ The output from fann_print_connections on a small (2 2 1) network trained on the XOR problem:
+ >Layer / Neuron 012345
+ >L 1 / N 3 BBa...
+ >L 1 / N 4 BBA...
+ >L 1 / N 5 ......
+ >L 2 / N 6 ...BBA
+ >L 2 / N 7 ......
+
+ This network has five real neurons and two bias neurons, giving a total of seven neurons,
+ numbered 0 to 6. The connections between these neurons can be seen in the matrix. "." marks a
+ place where there is no connection, while a letter indicates how strong the connection is on a
+ scale from a-z. The two real neurons in the hidden layer (neurons 3 and 4 in layer 1) have
+ connections from the three neurons in the previous layer, as is visible in the first two lines.
+ The output neuron (6) has connections from the three neurons in the hidden layer (3 - 5), as is
+ visible in the fourth line.
+
+ To simplify the matrix, output neurons are not shown as neurons that connections can come from,
+ and input and bias neurons are not shown as neurons that connections can go to.
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_print_connections(struct fann *ann);
+
+/* Group: Parameters */
+/* Function: fann_print_parameters
+
+ Prints all of the parameters and options of the ANN
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_print_parameters(struct fann *ann);
+
+
+/* Function: fann_get_num_input
+
+ Get the number of input neurons.
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_num_input(struct fann *ann);
+
+
+/* Function: fann_get_num_output
+
+ Get the number of output neurons.
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_num_output(struct fann *ann);
+
+
+/* Function: fann_get_total_neurons
+
+ Get the total number of neurons in the entire network. This number also includes the
+ bias neurons, so a 2-4-2 network has 2+4+2+2 (bias) = 10 neurons.
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_total_neurons(struct fann *ann);
+
+
+/* Function: fann_get_total_connections
+
+ Get the total number of connections in the entire network.
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_total_connections(struct fann *ann);
+
+/* Function: fann_get_network_type
+
+ Get the type of neural network it was created as.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Returns:
+ The neural network type from enum <fann_network_type_enum>
+
+ See Also:
+ <fann_network_type_enum>
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL enum fann_nettype_enum FANN_API fann_get_network_type(struct fann *ann);
+
+/* Function: fann_get_connection_rate
+
+ Get the connection rate used when the network was created
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Returns:
+ The connection rate
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL float FANN_API fann_get_connection_rate(struct fann *ann);
+
+/* Function: fann_get_num_layers
+
+ Get the number of layers in the network
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Returns:
+ The number of layers in the neural network
+
+ Example:
+ > // Obtain the number of layers in a neural network
+ > struct fann *ann = fann_create_standard(4, 2, 8, 9, 1);
+ > unsigned int num_layers = fann_get_num_layers(ann);
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_num_layers(struct fann *ann);
+
+/*Function: fann_get_layer_array
+
+ Get the number of neurons in each layer in the network.
+
+ Bias neurons are not included, so the layer sizes match the fann_create functions.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ The layers array must be preallocated to hold at least
+ sizeof(unsigned int) * fann_get_num_layers() bytes.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_get_layer_array(struct fann *ann, unsigned int *layers);
+
+/* Function: fann_get_bias_array
+
+ Get the number of bias neurons in each layer of the network.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ The bias array must be preallocated to hold at least
+ sizeof(unsigned int) * fann_get_num_layers() bytes.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_get_bias_array(struct fann *ann, unsigned int *bias);
+
+/* Function: fann_get_connection_array
+
+ Get the connections in the network.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ The connections array must be preallocated to hold at least
+ sizeof(struct fann_connection) * fann_get_total_connections() bytes.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_get_connection_array(struct fann *ann,
+ struct fann_connection *connections);
+
+/* Function: fann_set_weight_array
+
+ Set connection weights in the network.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Only the weights can be changed; a weight is ignored if its connection does not
+ already exist in the network.
+
+ The array must be of size sizeof(struct fann_connection) * num_connections.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_set_weight_array(struct fann *ann,
+ struct fann_connection *connections, unsigned int num_connections);
+
+/* Function: fann_set_weight
+
+ Set a connection in the network.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Only the weights can be changed. The connection/weight is
+ ignored if it does not already exist in the network.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_set_weight(struct fann *ann,
+ unsigned int from_neuron, unsigned int to_neuron, fann_type weight);
+
+/* Function: fann_set_user_data
+
+ Store a pointer to user defined data. The pointer can be
+ retrieved with <fann_get_user_data> for example in a
+ callback. It is the user's responsibility to allocate and
+ deallocate any data that the pointer might point to.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+ user_data - A void pointer to user defined data.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_set_user_data(struct fann *ann, void *user_data);
+
+/* Function: fann_get_user_data
+
+ Get a pointer to user defined data that was previously set
+ with <fann_set_user_data>. It is the user's responsibility to
+ allocate and deallocate any data that the pointer might point to.
+
+ Parameters:
+ ann - A previously created neural network structure of
+ type <struct fann> pointer.
+
+ Returns:
+ A void pointer to user defined data.
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void * FANN_API fann_get_user_data(struct fann *ann);
+
+#ifdef FIXEDFANN
+
+/* Function: fann_get_decimal_point
+
+ Returns the position of the decimal point in the ann.
+
+ This function is only available when the ANN is in fixed point mode.
+
+ The decimal point is described in greater detail in the tutorial <Fixed Point Usage>.
+
+ See also:
+ <Fixed Point Usage>, <fann_get_multiplier>, <fann_save_to_fixed>, <fann_save_train_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_decimal_point(struct fann *ann);
+
+
+/* Function: fann_get_multiplier
+
+ Returns the multiplier that fixed-point data is multiplied by.
+
+ This function is only available when the ANN is in fixed point mode.
+
+ The multiplier is used to convert between floating-point and fixed-point notation.
+ A floating-point number is multiplied by the multiplier in order to get the fixed-point
+ number, and vice versa.
+
+ The multiplier is described in greater detail in the tutorial <Fixed Point Usage>.
+
+ See also:
+ <Fixed Point Usage>, <fann_get_decimal_point>, <fann_save_to_fixed>, <fann_save_train_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_multiplier(struct fann *ann);
+
+#endif /* FIXEDFANN */
+
+#ifdef __cplusplus
+#ifndef __cplusplus
+/* to fool automatic indentation engines */
+{
+
+#endif
+}
+#endif /* __cplusplus */
+
+#endif /* __fann_h__ */
+
+#endif /* NOT FANN_INCLUDE */
diff --git a/include/fann_activation.h b/include/fann_activation.h
new file mode 100644
index 0000000..ae1443f
--- /dev/null
+++ b/include/fann_activation.h
@@ -0,0 +1,144 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_activation_h__
+#define __fann_activation_h__
+/* internal include file, not to be included directly
+ */
+
+/* Implementation of the activation functions
+ */
+
+/* stepwise linear functions used for some of the activation functions */
+
+/* defines used for the stepwise linear functions */
+
+#define fann_linear_func(v1, r1, v2, r2, sum) (((((r2)-(r1)) * ((sum)-(v1)))/((v2)-(v1))) + (r1))
+#define fann_stepwise(v1, v2, v3, v4, v5, v6, r1, r2, r3, r4, r5, r6, min, max, sum) (sum < v5 ? (sum < v3 ? (sum < v2 ? (sum < v1 ? min : fann_linear_func(v1, r1, v2, r2, sum)) : fann_linear_func(v2, r2, v3, r3, sum)) : (sum < v4 ? fann_linear_func(v3, r3, v4, r4, sum) : fann_linear_func(v4, r4, v5, r5, sum))) : (sum < v6 ? fann_linear_func(v5, r5, v6, r6, sum) : max))
+
+/* FANN_LINEAR */
+/* #define fann_linear(steepness, sum) fann_mult(steepness, sum) */
+#define fann_linear_derive(steepness, value) (steepness)
+
+/* FANN_SIGMOID */
+/* #define fann_sigmoid(steepness, sum) (1.0f/(1.0f + exp(-2.0f * steepness * sum))) */
+#define fann_sigmoid_real(sum) (1.0f/(1.0f + exp(-2.0f * sum)))
+#define fann_sigmoid_derive(steepness, value) (2.0f * steepness * value * (1.0f - value))
+
+/* FANN_SIGMOID_SYMMETRIC */
+/* #define fann_sigmoid_symmetric(steepness, sum) (2.0f/(1.0f + exp(-2.0f * steepness * sum)) - 1.0f) */
+#define fann_sigmoid_symmetric_real(sum) (2.0f/(1.0f + exp(-2.0f * sum)) - 1.0f)
+#define fann_sigmoid_symmetric_derive(steepness, value) (steepness * (1.0f - (value*value)))
+
+/* FANN_GAUSSIAN */
+/* #define fann_gaussian(steepness, sum) (exp(-sum * steepness * sum * steepness)) */
+#define fann_gaussian_real(sum) (exp(-sum * sum))
+#define fann_gaussian_derive(steepness, value, sum) (-2.0f * sum * value * steepness * steepness)
+
+/* FANN_GAUSSIAN_SYMMETRIC */
+/* #define fann_gaussian_symmetric(steepness, sum) ((exp(-sum * steepness * sum * steepness)*2.0)-1.0) */
+#define fann_gaussian_symmetric_real(sum) ((exp(-sum * sum)*2.0f)-1.0f)
+#define fann_gaussian_symmetric_derive(steepness, value, sum) (-2.0f * sum * (value+1.0f) * steepness * steepness)
+
+/* FANN_ELLIOT */
+/* #define fann_elliot(steepness, sum) (((sum * steepness) / 2.0f) / (1.0f + fann_abs(sum * steepness)) + 0.5f) */
+#define fann_elliot_real(sum) (((sum) / 2.0f) / (1.0f + fann_abs(sum)) + 0.5f)
+#define fann_elliot_derive(steepness, value, sum) (steepness * 1.0f / (2.0f * (1.0f + fann_abs(sum)) * (1.0f + fann_abs(sum))))
+
+/* FANN_ELLIOT_SYMMETRIC */
+/* #define fann_elliot_symmetric(steepness, sum) ((sum * steepness) / (1.0f + fann_abs(sum * steepness)))*/
+#define fann_elliot_symmetric_real(sum) ((sum) / (1.0f + fann_abs(sum)))
+#define fann_elliot_symmetric_derive(steepness, value, sum) (steepness * 1.0f / ((1.0f + fann_abs(sum)) * (1.0f + fann_abs(sum))))
+
+/* FANN_SIN_SYMMETRIC */
+#define fann_sin_symmetric_real(sum) (sin(sum))
+#define fann_sin_symmetric_derive(steepness, sum) (steepness*cos(steepness*sum))
+
+/* FANN_COS_SYMMETRIC */
+#define fann_cos_symmetric_real(sum) (cos(sum))
+#define fann_cos_symmetric_derive(steepness, sum) (steepness*-sin(steepness*sum))
+
+/* FANN_SIN */
+#define fann_sin_real(sum) (sin(sum)/2.0f+0.5f)
+#define fann_sin_derive(steepness, sum) (steepness*cos(steepness*sum)/2.0f)
+
+/* FANN_COS */
+#define fann_cos_real(sum) (cos(sum)/2.0f+0.5f)
+#define fann_cos_derive(steepness, sum) (steepness*-sin(steepness*sum)/2.0f)
+
+#define fann_activation_switch(activation_function, value, result) \
+switch(activation_function) \
+{ \
+ case FANN_LINEAR: \
+ result = (fann_type)value; \
+ break; \
+ case FANN_LINEAR_PIECE: \
+ result = (fann_type)((value < 0) ? 0 : (value > 1) ? 1 : value); \
+ break; \
+ case FANN_LINEAR_PIECE_SYMMETRIC: \
+ result = (fann_type)((value < -1) ? -1 : (value > 1) ? 1 : value); \
+ break; \
+ case FANN_SIGMOID: \
+ result = (fann_type)fann_sigmoid_real(value); \
+ break; \
+ case FANN_SIGMOID_SYMMETRIC: \
+ result = (fann_type)fann_sigmoid_symmetric_real(value); \
+ break; \
+ case FANN_SIGMOID_SYMMETRIC_STEPWISE: \
+ result = (fann_type)fann_stepwise(-2.64665293693542480469e+00, -1.47221934795379638672e+00, -5.49306154251098632812e-01, 5.49306154251098632812e-01, 1.47221934795379638672e+00, 2.64665293693542480469e+00, -9.90000009536743164062e-01, -8.99999976158142089844e-01, -5.00000000000000000000e-01, 5.00000000000000000000e-01, 8.99999976158142089844e-01, 9.90000009536743164062e-01, -1, 1, value); \
+ break; \
+ case FANN_SIGMOID_STEPWISE: \
+ result = (fann_type)fann_stepwise(-2.64665246009826660156e+00, -1.47221946716308593750e+00, -5.49306154251098632812e-01, 5.49306154251098632812e-01, 1.47221934795379638672e+00, 2.64665293693542480469e+00, 4.99999988824129104614e-03, 5.00000007450580596924e-02, 2.50000000000000000000e-01, 7.50000000000000000000e-01, 9.49999988079071044922e-01, 9.95000004768371582031e-01, 0, 1, value); \
+ break; \
+ case FANN_THRESHOLD: \
+ result = (fann_type)((value < 0) ? 0 : 1); \
+ break; \
+ case FANN_THRESHOLD_SYMMETRIC: \
+ result = (fann_type)((value < 0) ? -1 : 1); \
+ break; \
+ case FANN_GAUSSIAN: \
+ result = (fann_type)fann_gaussian_real(value); \
+ break; \
+ case FANN_GAUSSIAN_SYMMETRIC: \
+ result = (fann_type)fann_gaussian_symmetric_real(value); \
+ break; \
+ case FANN_ELLIOT: \
+ result = (fann_type)fann_elliot_real(value); \
+ break; \
+ case FANN_ELLIOT_SYMMETRIC: \
+ result = (fann_type)fann_elliot_symmetric_real(value); \
+ break; \
+ case FANN_SIN_SYMMETRIC: \
+ result = (fann_type)fann_sin_symmetric_real(value); \
+ break; \
+ case FANN_COS_SYMMETRIC: \
+ result = (fann_type)fann_cos_symmetric_real(value); \
+ break; \
+ case FANN_SIN: \
+ result = (fann_type)fann_sin_real(value); \
+ break; \
+ case FANN_COS: \
+ result = (fann_type)fann_cos_real(value); \
+ break; \
+ case FANN_GAUSSIAN_STEPWISE: \
+ result = 0; \
+ break; \
+}
+
+#endif
diff --git a/include/fann_cascade.h b/include/fann_cascade.h
new file mode 100644
index 0000000..dd89822
--- /dev/null
+++ b/include/fann_cascade.h
@@ -0,0 +1,557 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_cascade_h__
+#define __fann_cascade_h__
+
+/* Section: FANN Cascade Training
+ Cascade training differs from ordinary training in the sense that it starts with an empty neural network
+ and then adds neurons one by one while it trains the neural network. The main benefit of this approach
+ is that you do not have to guess the number of hidden layers and neurons prior to training, and cascade
+ training has also proved better at solving some problems.
+
+ The basic idea of cascade training is that a number of candidate neurons are trained separately from the
+ real network, and the most promising of these candidate neurons is then inserted into the neural network.
+ The output connections are then trained, and new candidate neurons are prepared. The candidate neurons are
+ created as shortcut-connected neurons in a new hidden layer, which means that the final neural network
+ will consist of a number of hidden layers with one shortcut-connected neuron in each.
+*/
+
+/* Group: Cascade Training */
+
+/* Function: fann_cascadetrain_on_data
+
+ Trains on an entire dataset for a period of time, using the Cascade2 training algorithm.
+ This algorithm adds neurons to the neural network while training, which means that it
+ needs to start with an ANN without any hidden layers. The neural network should also use
+ shortcut connections, so <fann_create_shortcut> should be used to create the ANN like this:
+ >struct fann *ann = fann_create_shortcut(2, fann_num_input_train_data(train_data), fann_num_output_train_data(train_data));
+
+ This training uses the parameters set with the fann_set_cascade_... functions, but it also uses
+ another training algorithm as its internal training algorithm. This algorithm can be set to either
+ FANN_TRAIN_RPROP or FANN_TRAIN_QUICKPROP by <fann_set_training_algorithm>, and the parameters
+ set for these training algorithms will also affect the cascade training.
+
+ Parameters:
+ ann - The neural network
+ data - The data, which should be used during training
+ max_neurons - The maximum number of neurons to be added to the neural network
+ neurons_between_reports - The number of neurons between printing a status report to stdout.
+ A value of zero means no reports should be printed.
+ desired_error - The desired <fann_get_MSE> or <fann_get_bit_fail>, depending on which stop function
+ is chosen by <fann_set_train_stop_function>.
+
+ Instead of printing out reports every neurons_between_reports neurons, a callback
+ function can be called (see <fann_set_callback>).
+
+ See also:
+ <fann_train_on_data>, <fann_cascadetrain_on_file>, <Parameters>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_cascadetrain_on_data(struct fann *ann,
+ struct fann_train_data *data,
+ unsigned int max_neurons,
+ unsigned int neurons_between_reports,
+ float desired_error);
+
+/* Function: fann_cascadetrain_on_file
+
+ Does the same as <fann_cascadetrain_on_data>, but reads the training data directly from a file.
+
+ See also:
+ <fann_cascadetrain_on_data>
+
+ This function appears in FANN >= 2.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_cascadetrain_on_file(struct fann *ann, const char *filename,
+ unsigned int max_neurons,
+ unsigned int neurons_between_reports,
+ float desired_error);
+
+/* Group: Parameters */
+
+/* Function: fann_get_cascade_output_change_fraction
+
+ The cascade output change fraction is a number between 0 and 1 determining how large a fraction
+ the <fann_get_MSE> value should change within <fann_get_cascade_output_stagnation_epochs> during
+ training of the output connections, in order for the training not to stagnate. If the training
+ stagnates, the training of the output connections will be ended and new candidates will be prepared.
+
+ This means:
+ If the MSE does not change by a fraction of <fann_get_cascade_output_change_fraction> during a
+ period of <fann_get_cascade_output_stagnation_epochs>, the training of the output connections
+ is stopped because the training has stagnated.
+
+ If the cascade output change fraction is low, the output connections will be trained more and if the
+ fraction is high they will be trained less.
+
+ The default cascade output change fraction is 0.01, which is equivalent to a 1% change in MSE.
+
+ See also:
+ <fann_set_cascade_output_change_fraction>, <fann_get_MSE>, <fann_get_cascade_output_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_cascade_output_change_fraction(struct fann *ann);
+
+
+/* Function: fann_set_cascade_output_change_fraction
+
+ Sets the cascade output change fraction.
+
+ See also:
+ <fann_get_cascade_output_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_output_change_fraction(struct fann *ann,
+ float cascade_output_change_fraction);
+
+/* Function: fann_get_cascade_output_stagnation_epochs
+
+ The number of cascade output stagnation epochs determines the number of epochs training is allowed to
+ continue without changing the MSE by a fraction of <fann_get_cascade_output_change_fraction>.
+
+ See more info about this parameter in <fann_get_cascade_output_change_fraction>.
+
+ The default number of cascade output stagnation epochs is 12.
+
+ See also:
+ <fann_set_cascade_output_stagnation_epochs>, <fann_get_cascade_output_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_output_stagnation_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_output_stagnation_epochs
+
+ Sets the number of cascade output stagnation epochs.
+
+ See also:
+ <fann_get_cascade_output_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_output_stagnation_epochs(struct fann *ann,
+ unsigned int cascade_output_stagnation_epochs);
+
+
+/* Function: fann_get_cascade_candidate_change_fraction
+
+ The cascade candidate change fraction is a number between 0 and 1 determining how large a fraction
+ the <fann_get_MSE> value should change within <fann_get_cascade_candidate_stagnation_epochs> during
+ training of the candidate neurons, in order for the training not to stagnate. If the training
+ stagnates, the training of the candidate neurons will be ended and the best candidate will be selected.
+
+ This means:
+ If the MSE does not change by a fraction of <fann_get_cascade_candidate_change_fraction> during a
+ period of <fann_get_cascade_candidate_stagnation_epochs>, the training of the candidate neurons
+ is stopped because the training has stagnated.
+
+ If the cascade candidate change fraction is low, the candidate neurons will be trained more and if the
+ fraction is high they will be trained less.
+
+ The default cascade candidate change fraction is 0.01, which is equivalent to a 1% change in MSE.
+
+ See also:
+ <fann_set_cascade_candidate_change_fraction>, <fann_get_MSE>, <fann_get_cascade_candidate_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_cascade_candidate_change_fraction(struct fann *ann);
+
+
+/* Function: fann_set_cascade_candidate_change_fraction
+
+ Sets the cascade candidate change fraction.
+
+ See also:
+ <fann_get_cascade_candidate_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_candidate_change_fraction(struct fann *ann,
+ float cascade_candidate_change_fraction);
+
+/* Function: fann_get_cascade_candidate_stagnation_epochs
+
+ The number of cascade candidate stagnation epochs determines the number of epochs training is allowed to
+ continue without changing the MSE by a fraction of <fann_get_cascade_candidate_change_fraction>.
+
+ See more info about this parameter in <fann_get_cascade_candidate_change_fraction>.
+
+ The default number of cascade candidate stagnation epochs is 12.
+
+ See also:
+ <fann_set_cascade_candidate_stagnation_epochs>, <fann_get_cascade_candidate_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_candidate_stagnation_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_candidate_stagnation_epochs
+
+ Sets the number of cascade candidate stagnation epochs.
+
+ See also:
+ <fann_get_cascade_candidate_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_candidate_stagnation_epochs(struct fann *ann,
+ unsigned int cascade_candidate_stagnation_epochs);
+
+
+/* Function: fann_get_cascade_weight_multiplier
+
+ The weight multiplier is a parameter which is used to multiply the weights from the candidate neuron
+ before adding the neuron to the neural network. This parameter is usually between 0 and 1, and is used
+ to make the training a bit less aggressive.
+
+ The default weight multiplier is 0.4.
+
+ See also:
+ <fann_set_cascade_weight_multiplier>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL fann_type FANN_API fann_get_cascade_weight_multiplier(struct fann *ann);
+
+
+/* Function: fann_set_cascade_weight_multiplier
+
+ Sets the weight multiplier.
+
+ See also:
+ <fann_get_cascade_weight_multiplier>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_weight_multiplier(struct fann *ann,
+ fann_type cascade_weight_multiplier);
+
+
+/* Function: fann_get_cascade_candidate_limit
+
+ The candidate limit is a limit on how much the candidate neuron may be trained.
+ It limits the ratio between the MSE and the candidate score.
+
+ Set this to a lower value to avoid overfitting, and to a higher value if overfitting is
+ not a problem.
+
+ The default candidate limit is 1000.0.
+
+ See also:
+ <fann_set_cascade_candidate_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL fann_type FANN_API fann_get_cascade_candidate_limit(struct fann *ann);
+
+
+/* Function: fann_set_cascade_candidate_limit
+
+ Sets the candidate limit.
+
+ See also:
+ <fann_get_cascade_candidate_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_candidate_limit(struct fann *ann,
+ fann_type cascade_candidate_limit);
+
+
+/* Function: fann_get_cascade_max_out_epochs
+
+ The maximum out epochs determines the maximum number of epochs the output connections
+ may be trained after adding a new candidate neuron.
+
+ The default max out epochs is 150.
+
+ See also:
+ <fann_set_cascade_max_out_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_max_out_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_max_out_epochs
+
+ Sets the maximum out epochs.
+
+ See also:
+ <fann_get_cascade_max_out_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_max_out_epochs(struct fann *ann,
+ unsigned int cascade_max_out_epochs);
+
+
+/* Function: fann_get_cascade_min_out_epochs
+
+ The minimum out epochs determines the minimum number of epochs the output connections
+ must be trained after adding a new candidate neuron.
+
+ The default min out epochs is 50.
+
+ See also:
+ <fann_set_cascade_min_out_epochs>
+
+ This function appears in FANN >= 2.2.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_min_out_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_min_out_epochs
+
+ Sets the minimum out epochs.
+
+ See also:
+ <fann_get_cascade_min_out_epochs>
+
+ This function appears in FANN >= 2.2.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_min_out_epochs(struct fann *ann,
+ unsigned int cascade_min_out_epochs);
+
+/* Function: fann_get_cascade_max_cand_epochs
+
+ The maximum candidate epochs determines the maximum number of epochs the input
+ connections to the candidates may be trained before adding a new candidate neuron.
+
+ The default max candidate epochs is 150.
+
+ See also:
+ <fann_set_cascade_max_cand_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_max_cand_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_max_cand_epochs
+
+ Sets the max candidate epochs.
+
+ See also:
+ <fann_get_cascade_max_cand_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_max_cand_epochs(struct fann *ann,
+ unsigned int cascade_max_cand_epochs);
+
+
+/* Function: fann_get_cascade_min_cand_epochs
+
+	The minimum candidate epochs determines the minimum number of epochs the input
+	connections to the candidates must be trained before adding a new candidate neuron.
+
+ The default min candidate epochs is 50
+
+ See also:
+ <fann_set_cascade_min_cand_epochs>
+
+ This function appears in FANN >= 2.2.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_min_cand_epochs(struct fann *ann);
+
+
+/* Function: fann_set_cascade_min_cand_epochs
+
+ Sets the min candidate epochs.
+
+ See also:
+ <fann_get_cascade_min_cand_epochs>
+
+ This function appears in FANN >= 2.2.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_min_cand_epochs(struct fann *ann,
+ unsigned int cascade_min_cand_epochs);
+
+/* Function: fann_get_cascade_num_candidates
+
+ The number of candidates used during training (calculated by multiplying <fann_get_cascade_activation_functions_count>,
+ <fann_get_cascade_activation_steepnesses_count> and <fann_get_cascade_num_candidate_groups>).
+
+	The actual candidates are defined by the <fann_get_cascade_activation_functions> and
+ <fann_get_cascade_activation_steepnesses> arrays. These arrays define the activation functions
+ and activation steepnesses used for the candidate neurons. If there are 2 activation functions
+ in the activation function array and 3 steepnesses in the steepness array, then there will be
+ 2x3=6 different candidates which will be trained. These 6 different candidates can be copied into
+ several candidate groups, where the only difference between these groups is the initial weights.
+ If the number of groups is set to 2, then the number of candidate neurons will be 2x3x2=12. The
+ number of candidate groups is defined by <fann_set_cascade_num_candidate_groups>.
+
+ The default number of candidates is 6x4x2 = 48
+
+ See also:
+ <fann_get_cascade_activation_functions>, <fann_get_cascade_activation_functions_count>,
+ <fann_get_cascade_activation_steepnesses>, <fann_get_cascade_activation_steepnesses_count>,
+ <fann_get_cascade_num_candidate_groups>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_num_candidates(struct fann *ann);
+
+/* Function: fann_get_cascade_activation_functions_count
+
+ The number of activation functions in the <fann_get_cascade_activation_functions> array.
+
+ The default number of activation functions is 6.
+
+ See also:
+ <fann_get_cascade_activation_functions>, <fann_set_cascade_activation_functions>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_activation_functions_count(struct fann *ann);
+
+
+/* Function: fann_get_cascade_activation_functions
+
+ The cascade activation functions array is an array of the different activation functions used by
+ the candidates.
+
+ See <fann_get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+	The default activation functions are {FANN_SIGMOID, FANN_SIGMOID_SYMMETRIC, FANN_GAUSSIAN, FANN_GAUSSIAN_SYMMETRIC, FANN_ELLIOT, FANN_ELLIOT_SYMMETRIC}.
+
+ See also:
+ <fann_get_cascade_activation_functions_count>, <fann_set_cascade_activation_functions>,
+ <fann_activationfunc_enum>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL enum fann_activationfunc_enum * FANN_API fann_get_cascade_activation_functions(
+ struct fann *ann);
+
+
+/* Function: fann_set_cascade_activation_functions
+
+ Sets the array of cascade candidate activation functions. The array must be just as long
+ as defined by the count.
+
+ See <fann_get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+		<fann_get_cascade_activation_functions_count>, <fann_get_cascade_activation_functions>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_activation_functions(struct fann *ann,
+ enum fann_activationfunc_enum *
+ cascade_activation_functions,
+ unsigned int
+ cascade_activation_functions_count);
+
+
+/* Function: fann_get_cascade_activation_steepnesses_count
+
+	The number of activation steepnesses in the <fann_get_cascade_activation_steepnesses> array.
+
+ The default number of activation steepnesses is 4.
+
+ See also:
+		<fann_get_cascade_activation_steepnesses>, <fann_set_cascade_activation_steepnesses>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_activation_steepnesses_count(struct fann *ann);
+
+
+/* Function: fann_get_cascade_activation_steepnesses
+
+	The cascade activation steepnesses array is an array of the different activation steepnesses used by
+ the candidates.
+
+ See <fann_get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+	The default activation steepnesses are {0.25, 0.50, 0.75, 1.00}.
+
+ See also:
+ <fann_set_cascade_activation_steepnesses>, <fann_get_cascade_activation_steepnesses_count>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL fann_type * FANN_API fann_get_cascade_activation_steepnesses(struct fann *ann);
+
+
+/* Function: fann_set_cascade_activation_steepnesses
+
+ Sets the array of cascade candidate activation steepnesses. The array must be just as long
+ as defined by the count.
+
+ See <fann_get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <fann_get_cascade_activation_steepnesses>, <fann_get_cascade_activation_steepnesses_count>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_activation_steepnesses(struct fann *ann,
+ fann_type *
+ cascade_activation_steepnesses,
+ unsigned int
+ cascade_activation_steepnesses_count);
+
+/* Function: fann_get_cascade_num_candidate_groups
+
+ The number of candidate groups is the number of groups of identical candidates which will be used
+ during training.
+
+ This number can be used to have more candidates without having to define new parameters for the candidates.
+
+ See <fann_get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this parameter.
+
+ The default number of candidate groups is 2
+
+ See also:
+ <fann_set_cascade_num_candidate_groups>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_get_cascade_num_candidate_groups(struct fann *ann);
+
+
+/* Function: fann_set_cascade_num_candidate_groups
+
+ Sets the number of candidate groups.
+
+ See also:
+ <fann_get_cascade_num_candidate_groups>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_cascade_num_candidate_groups(struct fann *ann,
+ unsigned int cascade_num_candidate_groups);
+
+
+#endif
diff --git a/include/fann_cpp.h b/include/fann_cpp.h
new file mode 100644
index 0000000..643e1a3
--- /dev/null
+++ b/include/fann_cpp.h
@@ -0,0 +1,3709 @@
+#ifndef FANN_CPP_H_INCLUDED
+#define FANN_CPP_H_INCLUDED
+
+/*
+ *
+ * Fast Artificial Neural Network (fann) C++ Wrapper
+ * Copyright (C) 2004-2006 created by freegoldbar (at) yahoo dot com
+ *
+ * This wrapper is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This wrapper is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/*
+ * Title: FANN Wrapper for C++
+ *
+ * Overview:
+ *
+ * The Fann Wrapper for C++ provides two classes: <neural_net>
+ * and <training_data>. To use the wrapper include
+ * doublefann.h, floatfann.h or fixedfann.h before the
+ * fann_cpp.h header file. To get started see xor_sample.cpp
+ * in the examples directory. The license is LGPL. Copyright (C)
+ * 2004-2006 created by <freegoldbar at yahoo.com>.
+ *
+ * Note: Notes and differences from C API
+ *
+ * - The Fann Wrapper for C++ is a minimal wrapper without use of
+ * templates or exception handling for efficient use in any environment.
+ * Benefits include stricter type checking, simpler memory
+ *      management and possibly code completion in a program editor.
+ * - Method names are the same as the function names in the C
+ * API except the fann_ prefix has been removed. Enums in the
+ * namespace are similarly defined without the FANN_ prefix.
+ * - The arguments to the methods are the same as the C API
+ * except that the struct fann *ann/struct fann_train_data *data
+ * arguments are encapsulated so they are not present in the
+ * method signatures or are translated into class references.
+ * - The various create methods return a boolean set to true to
+ * indicate that the neural network was created, false otherwise.
+ * The same goes for the read_train_from_file method.
+ *    - The neural network and training data are automatically cleaned
+ * up in the destructors and create/read methods.
+ * - To make the destructors virtual define USE_VIRTUAL_DESTRUCTOR
+ * before including the header file.
+ * - Additional methods are available on the training_data class to
+ * give access to the underlying training data. They are get_input,
+ * get_output and set_train_data. Finally fann_duplicate_train_data
+ * has been replaced by a copy constructor.
+ *
+ * Note: Changes
+ *
+ * Version 2.2.0:
+ * - General update to fann C library 2.2.0 with support for new functionality
+ *
+ * Version 2.1.0:
+ * - General update to fann C library 2.1.0 with support for new functionality
+ * - Due to changes in the C API the C++ API is not fully backward compatible:
+ * The create methods have changed names and parameters.
+ * The training callback function has different parameters and a set_callback.
+ * Some <training_data> methods have updated names.
+ * Get activation function and steepness is available for neurons, not layers.
+ * - Extensions are now part of fann so there is no fann_extensions.h
+ *
+ * Version 1.2.0:
+ * - Changed char pointers to const std::string references
+ * - Added const_casts where the C API required it
+ * - Initialized enums from the C enums instead of numeric constants
+ *    - Added a method set_train_data that copies and allocates training
+ *      data in a way that is compatible with the way the C API deallocates
+ *      the data, thus making it possible to change training data.
+ * - The get_rprop_increase_factor method did not return its value
+ *
+ * Version 1.0.0:
+ * - Initial version
+ *
+ */
+
+#include <stdarg.h>
+#include <string>
+
+/* Namespace: FANN
+ The FANN namespace groups the C++ wrapper definitions */
+namespace FANN
+{
+ /* Enum: error_function_enum
+ Error function used during training.
+
+ ERRORFUNC_LINEAR - Standard linear error function.
+ ERRORFUNC_TANH - Tanh error function, usually better
+        but can require a lower learning rate. This error function aggressively targets outputs that
+        differ much from the desired values, while mostly ignoring outputs that differ only a little.
+        This error function is not recommended for cascade training and incremental training.
+
+ See also:
+ <neural_net::set_train_error_function>, <neural_net::get_train_error_function>
+ */
+ enum error_function_enum {
+ ERRORFUNC_LINEAR = FANN_ERRORFUNC_LINEAR,
+ ERRORFUNC_TANH
+ };
+
+ /* Enum: stop_function_enum
+ Stop criteria used during training.
+
+        STOPFUNC_MSE - Stop criterion is the Mean Square Error (MSE) value.
+        STOPFUNC_BIT - Stop criterion is the number of bits that fail. The number of bits means the
+        number of output neurons which differ more than the bit fail limit
+ (see <neural_net::get_bit_fail_limit>, <neural_net::set_bit_fail_limit>).
+ The bits are counted in all of the training data, so this number can be higher than
+ the number of training data.
+
+ See also:
+ <neural_net::set_train_stop_function>, <neural_net::get_train_stop_function>
+ */
+ enum stop_function_enum
+ {
+ STOPFUNC_MSE = FANN_STOPFUNC_MSE,
+ STOPFUNC_BIT
+ };
+
+ /* Enum: training_algorithm_enum
+ The Training algorithms used when training on <training_data> with functions like
+        <neural_net::train_on_data> or <neural_net::train_on_file>. Incremental training
+        alters the weights after each input pattern it is presented with, while batch training
+        alters the weights only once, after all the patterns have been presented.
+
+ TRAIN_INCREMENTAL - Standard backpropagation algorithm, where the weights are
+ updated after each training pattern. This means that the weights are updated many
+        times during a single epoch. For this reason some problems will train very fast with
+        this algorithm, while other more advanced problems will not train very well.
+        TRAIN_BATCH - Standard backpropagation algorithm, where the weights are updated after
+        calculating the mean square error for the whole training set. This means that the weights
+        are only updated once during an epoch. For this reason some problems will train slower with
+        this algorithm. But since the mean square error is calculated more correctly than in
+        incremental training, some problems will reach a better solution with this algorithm.
+ TRAIN_RPROP - A more advanced batch training algorithm which achieves good results
+ for many problems. The RPROP training algorithm is adaptive, and does therefore not
+ use the learning_rate. Some other parameters can however be set to change the way the
+ RPROP algorithm works, but it is only recommended for users with insight in how the RPROP
+ training algorithm works. The RPROP training algorithm is described by
+ [Riedmiller and Braun, 1993], but the actual learning algorithm used here is the
+        iRPROP- training algorithm, which is described by [Igel and Husken, 2000] and
+        is a variant of the standard RPROP training algorithm.
+ TRAIN_QUICKPROP - A more advanced batch training algorithm which achieves good results
+ for many problems. The quickprop training algorithm uses the learning_rate parameter
+ along with other more advanced parameters, but it is only recommended to change these
+ advanced parameters, for users with insight in how the quickprop training algorithm works.
+ The quickprop training algorithm is described by [Fahlman, 1988].
+
+ See also:
+ <neural_net::set_training_algorithm>, <neural_net::get_training_algorithm>
+ */
+ enum training_algorithm_enum {
+ TRAIN_INCREMENTAL = FANN_TRAIN_INCREMENTAL,
+ TRAIN_BATCH,
+ TRAIN_RPROP,
+ TRAIN_QUICKPROP,
+ TRAIN_SARPROP
+ };
+
+ /* Enum: activation_function_enum
+
+ The activation functions used for the neurons during training. The activation functions
+ can either be defined for a group of neurons by <neural_net::set_activation_function_hidden>
+ and <neural_net::set_activation_function_output> or it can be defined for a single neuron by
+ <neural_net::set_activation_function>.
+
+ The steepness of an activation function is defined in the same way by
+ <neural_net::set_activation_steepness_hidden>, <neural_net::set_activation_steepness_output>
+ and <neural_net::set_activation_steepness>.
+
+        The functions are described with equations where:
+        * x is the input to the activation function,
+        * y is the output,
+        * s is the steepness and
+        * d is the derivative.
+
+ FANN_LINEAR - Linear activation function.
+ * span: -inf < y < inf
+ * y = x*s, d = 1*s
+ * Can NOT be used in fixed point.
+
+ FANN_THRESHOLD - Threshold activation function.
+ * x < 0 -> y = 0, x >= 0 -> y = 1
+ * Can NOT be used during training.
+
+        FANN_THRESHOLD_SYMMETRIC - Symmetric threshold activation function.
+        * x < 0 -> y = -1, x >= 0 -> y = 1
+ * Can NOT be used during training.
+
+ FANN_SIGMOID - Sigmoid activation function.
+ * One of the most used activation functions.
+ * span: 0 < y < 1
+ * y = 1/(1 + exp(-2*s*x))
+ * d = 2*s*y*(1 - y)
+
+ FANN_SIGMOID_STEPWISE - Stepwise linear approximation to sigmoid.
+ * Faster than sigmoid but a bit less precise.
+
+ FANN_SIGMOID_SYMMETRIC - Symmetric sigmoid activation function, aka. tanh.
+ * One of the most used activation functions.
+ * span: -1 < y < 1
+ * y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1
+ * d = s*(1-(y*y))
+
+        FANN_SIGMOID_SYMMETRIC_STEPWISE - Stepwise linear approximation to symmetric sigmoid.
+ * Faster than symmetric sigmoid but a bit less precise.
+
+ FANN_GAUSSIAN - Gaussian activation function.
+ * 0 when x = -inf, 1 when x = 0 and 0 when x = inf
+ * span: 0 < y < 1
+ * y = exp(-x*s*x*s)
+ * d = -2*x*s*y*s
+
+ FANN_GAUSSIAN_SYMMETRIC - Symmetric gaussian activation function.
+ * -1 when x = -inf, 1 when x = 0 and 0 when x = inf
+ * span: -1 < y < 1
+ * y = exp(-x*s*x*s)*2-1
+ * d = -2*x*s*(y+1)*s
+
+ FANN_ELLIOT - Fast (sigmoid like) activation function defined by David Elliott
+ * span: 0 < y < 1
+ * y = ((x*s) / 2) / (1 + |x*s|) + 0.5
+ * d = s*1/(2*(1+|x*s|)*(1+|x*s|))
+
+ FANN_ELLIOT_SYMMETRIC - Fast (symmetric sigmoid like) activation function defined by David Elliott
+ * span: -1 < y < 1
+ * y = (x*s) / (1 + |x*s|)
+ * d = s*1/((1+|x*s|)*(1+|x*s|))
+
+ FANN_LINEAR_PIECE - Bounded linear activation function.
+ * span: 0 < y < 1
+ * y = x*s, d = 1*s
+
+ FANN_LINEAR_PIECE_SYMMETRIC - Bounded Linear activation function.
+ * span: -1 < y < 1
+ * y = x*s, d = 1*s
+
+        FANN_SIN_SYMMETRIC - Periodic sine activation function.
+ * span: -1 <= y <= 1
+ * y = sin(x*s)
+ * d = s*cos(x*s)
+
+        FANN_COS_SYMMETRIC - Periodic cosine activation function.
+ * span: -1 <= y <= 1
+ * y = cos(x*s)
+ * d = s*-sin(x*s)
+
+ See also:
+ <neural_net::set_activation_function_hidden>,
+ <neural_net::set_activation_function_output>
+ */
+ enum activation_function_enum {
+ LINEAR = FANN_LINEAR,
+ THRESHOLD,
+ THRESHOLD_SYMMETRIC,
+ SIGMOID,
+ SIGMOID_STEPWISE,
+ SIGMOID_SYMMETRIC,
+ SIGMOID_SYMMETRIC_STEPWISE,
+ GAUSSIAN,
+ GAUSSIAN_SYMMETRIC,
+ GAUSSIAN_STEPWISE,
+ ELLIOT,
+ ELLIOT_SYMMETRIC,
+ LINEAR_PIECE,
+ LINEAR_PIECE_SYMMETRIC,
+ SIN_SYMMETRIC,
+ COS_SYMMETRIC
+ };
+
+ /* Enum: network_type_enum
+
+ Definition of network types used by <neural_net::get_network_type>
+
+ LAYER - Each layer only has connections to the next layer
+ SHORTCUT - Each layer has connections to all following layers
+
+ See Also:
+ <neural_net::get_network_type>, <fann_get_network_type>
+
+ This enumeration appears in FANN >= 2.1.0
+ */
+ enum network_type_enum
+ {
+ LAYER = FANN_NETTYPE_LAYER,
+ SHORTCUT
+ };
+
+ /* Type: connection
+
+ Describes a connection between two neurons and its weight
+
+ from_neuron - Unique number used to identify source neuron
+ to_neuron - Unique number used to identify destination neuron
+ weight - The numerical value of the weight
+
+ See Also:
+ <neural_net::get_connection_array>, <neural_net::set_weight_array>
+
+ This structure appears in FANN >= 2.1.0
+ */
+ typedef struct fann_connection connection;
+
+ /* Forward declaration of class neural_net and training_data */
+ class neural_net;
+ class training_data;
+
+ /* Type: callback_type
+ This callback function can be called during training when using <neural_net::train_on_data>,
+ <neural_net::train_on_file> or <neural_net::cascadetrain_on_data>.
+
+ >typedef int (*callback_type) (neural_net &net, training_data &train,
+ > unsigned int max_epochs, unsigned int epochs_between_reports,
+ > float desired_error, unsigned int epochs, void *user_data);
+
+        The callback can be set by using <neural_net::set_callback> and is very useful for doing custom
+        things during training. It is recommended to use this function when implementing custom
+        training procedures, or when visualizing the training in a GUI etc. The parameters which the
+        callback function takes are the parameters given to <neural_net::train_on_data>, plus an epochs
+        parameter which tells how many epochs the training has taken so far.
+
+ The callback function should return an integer, if the callback function returns -1, the training
+ will terminate.
+
+ Example of a callback function that prints information to cout:
+ >int print_callback(FANN::neural_net &net, FANN::training_data &train,
+ > unsigned int max_epochs, unsigned int epochs_between_reports,
+ > float desired_error, unsigned int epochs, void *user_data)
+ >{
+ > cout << "Epochs " << setw(8) << epochs << ". "
+ > << "Current Error: " << left << net.get_MSE() << right << endl;
+ > return 0;
+ >}
+
+ See also:
+ <neural_net::set_callback>, <fann_callback_type>
+ */
+ typedef int (*callback_type) (neural_net &net, training_data &train,
+ unsigned int max_epochs, unsigned int epochs_between_reports,
+ float desired_error, unsigned int epochs, void *user_data);
+
+ /*************************************************************************/
+
+ /* Class: training_data
+
+ Encapsulation of a training data set <struct fann_train_data> and
+ associated C API functions.
+ */
+ class training_data
+ {
+ public:
+ /* Constructor: training_data
+
+            Default constructor creates an empty training data object.
+ Use <read_train_from_file>, <set_train_data> or <create_train_from_callback> to initialize.
+ */
+ training_data() : train_data(NULL)
+ {
+ }
+
+ /* Constructor: training_data
+
+ Copy constructor constructs a copy of the training data.
+ Corresponds to the C API <fann_duplicate_train_data> function.
+ */
+        training_data(const training_data &data) : train_data(NULL)
+        {
+            if (data.train_data != NULL)
+            {
+                train_data = fann_duplicate_train_data(data.train_data);
+            }
+        }
+
+ /* Destructor: ~training_data
+
+ Provides automatic cleanup of data.
+ Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual.
+
+ See also:
+            <destroy_train>
+ */
+#ifdef USE_VIRTUAL_DESTRUCTOR
+ virtual
+#endif
+ ~training_data()
+ {
+ destroy_train();
+ }
+
+        /* Method: destroy_train
+
+ Destructs the training data. Called automatically by the destructor.
+
+ See also:
+ <~training_data>
+ */
+ void destroy_train()
+ {
+ if (train_data != NULL)
+ {
+ fann_destroy_train(train_data);
+ train_data = NULL;
+ }
+ }
+
+ /* Method: read_train_from_file
+ Reads a file that stores training data.
+
+ The file must be formatted like:
+ >num_train_data num_input num_output
+            >inputdata separated by space
+            >outputdata separated by space
+            >
+            >.
+            >.
+            >.
+            >
+            >inputdata separated by space
+            >outputdata separated by space
+
+ See also:
+ <neural_net::train_on_data>, <save_train>, <fann_read_train_from_file>
+
+ This function appears in FANN >= 1.0.0
+ */
+ bool read_train_from_file(const std::string &filename)
+ {
+ destroy_train();
+ train_data = fann_read_train_from_file(filename.c_str());
+ return (train_data != NULL);
+ }
+
+ /* Method: save_train
+
+ Save the training structure to a file, with the format as specified in <read_train_from_file>
+
+ Return:
+ The function returns true on success and false on failure.
+
+ See also:
+ <read_train_from_file>, <save_train_to_fixed>, <fann_save_train>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ bool save_train(const std::string &filename)
+ {
+ if (train_data == NULL)
+ {
+ return false;
+ }
+ if (fann_save_train(train_data, filename.c_str()) == -1)
+ {
+ return false;
+ }
+ return true;
+ }
+
+ /* Method: save_train_to_fixed
+
+ Saves the training structure to a fixed point data file.
+
+            This function is very useful for testing the quality of a fixed point network.
+
+ Return:
+ The function returns true on success and false on failure.
+
+ See also:
+ <save_train>, <fann_save_train_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ bool save_train_to_fixed(const std::string &filename, unsigned int decimal_point)
+ {
+ if (train_data == NULL)
+ {
+ return false;
+ }
+ if (fann_save_train_to_fixed(train_data, filename.c_str(), decimal_point) == -1)
+ {
+ return false;
+ }
+ return true;
+ }
+
+ /* Method: shuffle_train_data
+
+ Shuffles training data, randomizing the order.
+            This is recommended for incremental training, while it has no influence during batch training.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void shuffle_train_data()
+ {
+ if (train_data != NULL)
+ {
+ fann_shuffle_train_data(train_data);
+ }
+ }
+
+ /* Method: merge_train_data
+
+ Merges the data into the data contained in the <training_data>.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void merge_train_data(const training_data &data)
+ {
+ fann_train_data *new_data = fann_merge_train_data(train_data, data.train_data);
+ if (new_data != NULL)
+ {
+ destroy_train();
+ train_data = new_data;
+ }
+ }
+
+ /* Method: length_train_data
+
+ Returns the number of training patterns in the <training_data>.
+
+ See also:
+ <num_input_train_data>, <num_output_train_data>, <fann_length_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int length_train_data()
+ {
+ if (train_data == NULL)
+ {
+ return 0;
+ }
+ else
+ {
+ return fann_length_train_data(train_data);
+ }
+ }
+
+ /* Method: num_input_train_data
+
+ Returns the number of inputs in each of the training patterns in the <training_data>.
+
+ See also:
+ <num_output_train_data>, <length_train_data>, <fann_num_input_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int num_input_train_data()
+ {
+ if (train_data == NULL)
+ {
+ return 0;
+ }
+ else
+ {
+ return fann_num_input_train_data(train_data);
+ }
+ }
+
+ /* Method: num_output_train_data
+
+ Returns the number of outputs in each of the training patterns in the <struct fann_train_data>.
+
+ See also:
+ <num_input_train_data>, <length_train_data>, <fann_num_output_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int num_output_train_data()
+ {
+ if (train_data == NULL)
+ {
+ return 0;
+ }
+ else
+ {
+ return fann_num_output_train_data(train_data);
+ }
+ }
+
+        /* Grant access to the encapsulated data, since many applications
+           create the data from sources other than files or use the
+           training data for testing and related functions */
+
+ /* Method: get_input
+
+ Returns:
+ A pointer to the array of input training data
+
+ See also:
+ <get_output>, <set_train_data>
+ */
+ fann_type **get_input()
+ {
+ if (train_data == NULL)
+ {
+ return NULL;
+ }
+ else
+ {
+ return train_data->input;
+ }
+ }
+
+ /* Method: get_output
+
+ Returns:
+ A pointer to the array of output training data
+
+ See also:
+ <get_input>, <set_train_data>
+ */
+ fann_type **get_output()
+ {
+ if (train_data == NULL)
+ {
+ return NULL;
+ }
+ else
+ {
+ return train_data->output;
+ }
+ }
+
+ /* Method: set_train_data
+
+ Set the training data to the input and output data provided.
+
+ A copy of the data is made so there are no restrictions on the
+ allocation of the input/output data and the caller is responsible
+ for the deallocation of the data pointed to by input and output.
+
+ Parameters:
+ num_data - The number of training data
+ num_input - The number of inputs per training data
+                num_output - The number of outputs per training data
+ input - The set of inputs (a pointer to an array of pointers to arrays of floating point data)
+ output - The set of desired outputs (a pointer to an array of pointers to arrays of floating point data)
+
+ See also:
+ <get_input>, <get_output>
+ */
+ void set_train_data(unsigned int num_data,
+ unsigned int num_input, fann_type **input,
+ unsigned int num_output, fann_type **output)
+ {
+ // Uses the allocation method used in fann
+ struct fann_train_data *data =
+ (struct fann_train_data *)malloc(sizeof(struct fann_train_data));
+ data->input = (fann_type **)calloc(num_data, sizeof(fann_type *));
+ data->output = (fann_type **)calloc(num_data, sizeof(fann_type *));
+
+ data->num_data = num_data;
+ data->num_input = num_input;
+ data->num_output = num_output;
+
+ fann_type *data_input = (fann_type *)calloc(num_input*num_data, sizeof(fann_type));
+ fann_type *data_output = (fann_type *)calloc(num_output*num_data, sizeof(fann_type));
+
+ for (unsigned int i = 0; i < num_data; ++i)
+ {
+ data->input[i] = data_input;
+ data_input += num_input;
+ for (unsigned int j = 0; j < num_input; ++j)
+ {
+ data->input[i][j] = input[i][j];
+ }
+ data->output[i] = data_output;
+ data_output += num_output;
+ for (unsigned int j = 0; j < num_output; ++j)
+ {
+ data->output[i][j] = output[i][j];
+ }
+ }
+ set_train_data(data);
+ }
+
+private:
+ /* Set the training data to the struct fann_training_data pointer.
+ The struct has to be allocated with malloc to be compatible
+ with fann_destroy. */
+ void set_train_data(struct fann_train_data *data)
+ {
+ destroy_train();
+ train_data = data;
+ }
+
+public:
+ /*********************************************************************/
+
+ /* Method: create_train_from_callback
+ Creates the training data struct from a user supplied function.
+ As the training data are numerable (data 1, data 2...), the user must write
+ a function that receives the number of the training data set (input,output)
+ and returns the set.
+
+ Parameters:
+ num_data - The number of training data
+ num_input - The number of inputs per training data
+                num_output - The number of outputs per training data
+                user_function - The user supplied function
+
+ Parameters for the user function:
+ num - The number of the training data set
+ num_input - The number of inputs per training data
+                num_output - The number of outputs per training data
+ input - The set of inputs
+ output - The set of desired outputs
+
+ See also:
+ <training_data::read_train_from_file>, <neural_net::train_on_data>,
+ <fann_create_train_from_callback>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void create_train_from_callback(unsigned int num_data,
+ unsigned int num_input,
+ unsigned int num_output,
+ void (FANN_API *user_function)( unsigned int,
+ unsigned int,
+ unsigned int,
+ fann_type * ,
+ fann_type * ))
+ {
+ destroy_train();
+ train_data = fann_create_train_from_callback(num_data, num_input, num_output, user_function);
+ }
+
+ /* Method: scale_input_train_data
+
+ Scales the inputs in the training data to the specified range.
+
+ See also:
+ <scale_output_train_data>, <scale_train_data>, <fann_scale_input_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void scale_input_train_data(fann_type new_min, fann_type new_max)
+ {
+ if (train_data != NULL)
+ {
+ fann_scale_input_train_data(train_data, new_min, new_max);
+ }
+ }
+
+ /* Method: scale_output_train_data
+
+ Scales the outputs in the training data to the specified range.
+
+ See also:
+ <scale_input_train_data>, <scale_train_data>, <fann_scale_output_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void scale_output_train_data(fann_type new_min, fann_type new_max)
+ {
+ if (train_data != NULL)
+ {
+ fann_scale_output_train_data(train_data, new_min, new_max);
+ }
+ }
+
+ /* Method: scale_train_data
+
+ Scales the inputs and outputs in the training data to the specified range.
+
+ See also:
+ <scale_output_train_data>, <scale_input_train_data>, <fann_scale_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void scale_train_data(fann_type new_min, fann_type new_max)
+ {
+ if (train_data != NULL)
+ {
+ fann_scale_train_data(train_data, new_min, new_max);
+ }
+ }
+
+ /* Method: subset_train_data
+
+ Changes the training data to a subset, starting at position *pos*
+ and *length* elements forward. Use the copy constructor to work
+ on a new copy of the training data.
+
+ >FANN::training_data full_data_set;
+ >full_data_set.read_train_from_file("somefile.train");
+ >FANN::training_data *small_data_set = new FANN::training_data(full_data_set);
+ >small_data_set->subset_train_data(0, 2); // Only use first two
+ >// Use small_data_set ...
+ >delete small_data_set;
+
+ See also:
+ <fann_subset_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void subset_train_data(unsigned int pos, unsigned int length)
+ {
+ if (train_data != NULL)
+ {
+ struct fann_train_data *temp = fann_subset_train_data(train_data, pos, length);
+ destroy_train();
+ train_data = temp;
+ }
+ }
+
+ /*********************************************************************/
+
+ protected:
+ /* The neural_net class has direct access to the training data */
+ friend class neural_net;
+
+ /* Pointer to the encapsulated training data */
+ struct fann_train_data* train_data;
+ };
+
+ /*************************************************************************/
+
+ /* Class: neural_net
+
+ Encapsulation of a neural network <struct fann> and
+ associated C API functions.
+ */
+ class neural_net
+ {
+ public:
+ /* Constructor: neural_net
+
+ Default constructor creates an empty neural net.
+ Use one of the create functions to create the neural network.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <create_standard_array>, <create_sparse_array>, <create_shortcut_array>
+ */
+ neural_net() : ann(NULL)
+ {
+ }
+
+ /* Constructor: neural_net
+
+ Creates a copy of the other neural_net.
+
+ See also:
+ <copy_from_struct_fann>
+ */
+ neural_net(const neural_net& other)
+ {
+ copy_from_struct_fann(other.ann);
+ }
+
+ /* Constructor: neural_net
+
+ Creates a copy of the other neural_net.
+
+ See also:
+ <copy_from_struct_fann>
+ */
+ neural_net(struct fann* other)
+ {
+ copy_from_struct_fann(other);
+ }
+
+ /* Method: copy_from_struct_fann
+
+ Set the internal fann struct to a copy of other
+ */
+ void copy_from_struct_fann(struct fann* other)
+ {
+ destroy();
+ if (other != NULL)
+ ann=fann_copy(other);
+ }
+
+ /* Destructor: ~neural_net
+
+ Provides automatic cleanup of data.
+ Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual.
+
+ See also:
+ <destroy>
+ */
+#ifdef USE_VIRTUAL_DESTRUCTOR
+ virtual
+#endif
+ ~neural_net()
+ {
+ destroy();
+ }
+
+ /* Method: destroy
+
+ Destructs the entire network. Called automatically by the destructor.
+
+ See also:
+ <~neural_net>
+ */
+ void destroy()
+ {
+ if (ann != NULL)
+ {
+ user_context *user_data = static_cast<user_context *>(fann_get_user_data(ann));
+ if (user_data != NULL)
+ delete user_data;
+
+ fann_destroy(ann);
+ ann = NULL;
+ }
+ }
+
+ /* Method: create_standard
+
+ Creates a standard fully connected backpropagation neural network.
+
+ There will be a bias neuron in each layer (except the output layer),
+ and this bias neuron will be connected to all neurons in the next layer.
+ When running the network, the bias neurons always emit 1.
+
+ Parameters:
+ num_layers - The total number of layers including the input and the output layer.
+ ... - Integer values determining the number of neurons in each layer starting with the
+ input layer and ending with the output layer.
+
+ Returns:
+ Boolean true if the network was created, false otherwise.
+
+ Example:
+ >const unsigned int num_layers = 3;
+ >const unsigned int num_input = 2;
+ >const unsigned int num_hidden = 3;
+ >const unsigned int num_output = 1;
+ >
+ >FANN::neural_net net;
+ >net.create_standard(num_layers, num_input, num_hidden, num_output);
+
+ See also:
+ <create_standard_array>, <create_sparse>, <create_shortcut>,
+ <fann_create_standard_array>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_standard(unsigned int num_layers, ...)
+ {
+ va_list layers;
+ unsigned int arr[num_layers];
+
+ va_start(layers, num_layers);
+ for (unsigned int ii = 0; ii < num_layers; ii++)
+ arr[ii] = va_arg(layers, unsigned int);
+ bool status = create_standard_array(num_layers, arr);
+ va_end(layers);
+ return status;
+ }
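[Editor's example] The va_list gathering used here recurs verbatim in create_sparse and create_shortcut; it can be isolated as a small standalone sketch (collect_layers is a hypothetical helper, not part of FANN):

```cpp
#include <cstdarg>
#include <vector>

// Gathers a variadic list of layer sizes into a contiguous array, mirroring
// the loop create_standard() runs before delegating to create_standard_array().
// collect_layers is an illustrative name only.
std::vector<unsigned int> collect_layers(unsigned int num_layers, ...)
{
    std::vector<unsigned int> arr(num_layers);
    va_list layers;
    va_start(layers, num_layers);
    for (unsigned int i = 0; i < num_layers; i++)
        arr[i] = va_arg(layers, unsigned int);
    va_end(layers);
    return arr;
}
```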
+
+ /* Method: create_standard_array
+
+ Just like <create_standard>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_standard>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_standard_array(unsigned int num_layers, const unsigned int * layers)
+ {
+ destroy();
+ ann = fann_create_standard_array(num_layers, layers);
+ return (ann != NULL);
+ }
+
+ /* Method: create_sparse
+
+ Creates a standard backpropagation neural network, which is not fully connected.
+
+ Parameters:
+ connection_rate - The connection rate controls how many connections there will be in the
+ network. If the connection rate is set to 1, the network will be fully
+ connected, but if it is set to 0.5 only half of the connections will be set.
+ A connection rate of 1 will yield the same result as <fann_create_standard>.
+ num_layers - The total number of layers including the input and the output layer.
+ ... - Integer values determining the number of neurons in each layer starting with the
+ input layer and ending with the output layer.
+
+ Returns:
+ Boolean true if the network was created, false otherwise.
+
+ See also:
+ <create_standard>, <create_sparse_array>, <create_shortcut>,
+ <fann_create_sparse>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_sparse(float connection_rate, unsigned int num_layers, ...)
+ {
+ va_list layers;
+ unsigned int arr[num_layers];
+
+ va_start(layers, num_layers);
+ for (unsigned int ii = 0; ii < num_layers; ii++)
+ arr[ii] = va_arg(layers, unsigned int);
+ bool status = create_sparse_array(connection_rate, num_layers, arr);
+ va_end(layers);
+ return status;
+ }
+
+ /* Method: create_sparse_array
+ Just like <create_sparse>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <create_sparse> for a description of the parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_sparse_array>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_sparse_array(float connection_rate,
+ unsigned int num_layers, const unsigned int * layers)
+ {
+ destroy();
+ ann = fann_create_sparse_array(connection_rate, num_layers, layers);
+ return (ann != NULL);
+ }
+
+ /* Method: create_shortcut
+
+ Creates a standard backpropagation neural network, which is not fully connected and which
+ also has shortcut connections.
+
+ Shortcut connections are connections that skip layers. A fully connected network with shortcut
+ connections is a network where all neurons are connected to all neurons in later layers,
+ including direct connections from the input layer to the output layer.
+
+ See <create_standard> for a description of the parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut_array>,
+ <fann_create_shortcut>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_shortcut(unsigned int num_layers, ...)
+ {
+ va_list layers;
+ unsigned int arr[num_layers];
+
+ va_start(layers, num_layers);
+ for (unsigned int ii = 0; ii < num_layers; ii++)
+ arr[ii] = va_arg(layers, unsigned int);
+ bool status = create_shortcut_array(num_layers, arr);
+ va_end(layers);
+ return status;
+ }
+
+ /* Method: create_shortcut_array
+
+ Just like <create_shortcut>, but with an array of layer sizes
+ instead of individual parameters.
+
+ See <create_standard_array> for a description of the parameters.
+
+ See also:
+ <create_standard>, <create_sparse>, <create_shortcut>,
+ <fann_create_shortcut_array>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ bool create_shortcut_array(unsigned int num_layers,
+ const unsigned int * layers)
+ {
+ destroy();
+ ann = fann_create_shortcut_array(num_layers, layers);
+ return (ann != NULL);
+ }
+
+ /* Method: run
+
+ Will run input through the neural network, returning an array of outputs, the number of which
+ is equal to the number of neurons in the output layer.
+
+ See also:
+ <test>, <fann_run>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ fann_type* run(fann_type *input)
+ {
+ if (ann == NULL)
+ {
+ return NULL;
+ }
+ return fann_run(ann, input);
+ }
+
+ /* Method: randomize_weights
+
+ Give each connection a random weight between *min_weight* and *max_weight*.
+
+ From the beginning the weights are random between -0.1 and 0.1.
+
+ See also:
+ <init_weights>, <fann_randomize_weights>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void randomize_weights(fann_type min_weight, fann_type max_weight)
+ {
+ if (ann != NULL)
+ {
+ fann_randomize_weights(ann, min_weight, max_weight);
+ }
+ }
+
+ /* Method: init_weights
+
+ Initialize the weights using Widrow + Nguyen's algorithm.
+
+ This function behaves similarly to fann_randomize_weights. It will use the algorithm developed
+ by Derrick Nguyen and Bernard Widrow to set the weights in such a way
+ as to speed up training. This technique is not always successful, and in some cases can be less
+ efficient than a purely random initialization.
+
+ The algorithm requires access to the range of the input data (i.e., largest and smallest input),
+ and therefore accepts a second argument, data, which is the training data that will be used to
+ train the network.
+
+ See also:
+ <randomize_weights>, <training_data::read_train_from_file>,
+ <fann_init_weights>
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void init_weights(const training_data &data)
+ {
+ if ((ann != NULL) && (data.train_data != NULL))
+ {
+ fann_init_weights(ann, data.train_data);
+ }
+ }
+
+ /* Method: print_connections
+
+ Will print the connections of the ann in a compact matrix, for easy viewing of the internals
+ of the ann.
+
+ The output from fann_print_connections on a small (2 2 1) network trained on the xor problem:
+ >Layer / Neuron 012345
+ >L 1 / N 3 BBa...
+ >L 1 / N 4 BBA...
+ >L 1 / N 5 ......
+ >L 2 / N 6 ...BBA
+ >L 2 / N 7 ......
+
+ This network has five real neurons and two bias neurons. This gives a total of seven neurons
+ named from 0 to 6. The connections between these neurons can be seen in the matrix. "." is a
+ place where there is no connection, while a character tells how strong the connection is on a
+ scale from a-z. The two real neurons in the hidden layer (neurons 3 and 4 in layer 1) have
+ connections from the three neurons in the previous layer, as is visible in the first two lines.
+ The output neuron (6) has connections from the three neurons in the hidden layer (3 - 5), as is
+ visible in the fourth line.
+
+ To simplify the matrix, output neurons are not shown as neurons that connections can come from,
+ and input and bias neurons are not shown as neurons that connections can go to.
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void print_connections()
+ {
+ if (ann != NULL)
+ {
+ fann_print_connections(ann);
+ }
+ }
+
+ /* Method: create_from_file
+
+ Constructs a backpropagation neural network from a configuration file,
+ which has been saved by <save>.
+
+ See also:
+ <save>, <save_to_fixed>, <fann_create_from_file>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ bool create_from_file(const std::string &configuration_file)
+ {
+ destroy();
+ ann = fann_create_from_file(configuration_file.c_str());
+ return (ann != NULL);
+ }
+
+ /* Method: save
+
+ Save the entire network to a configuration file.
+
+ The configuration file contains all information about the neural network and enables
+ <create_from_file> to create an exact copy of the neural network and all of the
+ parameters associated with the neural network.
+
+ These two parameters (<set_callback>, <set_error_log>) are *NOT* saved
+ to the file because they cannot safely be ported to a different location. Also, temporary
+ parameters generated during training, like <get_MSE>, are not saved.
+
+ Return:
+ The function returns true on success and false on failure.
+
+ See also:
+ <create_from_file>, <save_to_fixed>, <fann_save>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ bool save(const std::string &configuration_file)
+ {
+ if (ann == NULL)
+ {
+ return false;
+ }
+ if (fann_save(ann, configuration_file.c_str()) == -1)
+ {
+ return false;
+ }
+ return true;
+ }
+
+ /* Method: save_to_fixed
+
+ Saves the entire network to a configuration file.
+ But it is saved in fixed point format no matter which
+ format it is currently in.
+
+ This is useful for training a network in floating point,
+ and then later executing it in fixed point.
+
+ The function returns the bit position of the fix point, which
+ can be used to find out how accurate the fixed point network will be.
+ A high value indicates high precision, and a low value indicates low
+ precision.
+
+ A negative value indicates very low precision, and a very
+ strong possibility for overflow.
+ (The actual fix point will be set to 0, since a negative
+ fix point does not make sense.)
+
+ Generally, a fix point lower than 6 is bad, and should be avoided.
+ The best way to avoid this is to have fewer connections to each neuron,
+ or just fewer neurons in each layer.
+
+ The fixed point use of this network is only intended for use on machines that
+ have no floating point processor, like an iPAQ. On normal computers the floating
+ point version is actually faster.
+
+ See also:
+ <create_from_file>, <save>, <fann_save_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ int save_to_fixed(const std::string &configuration_file)
+ {
+ int fixpoint = 0;
+ if (ann != NULL)
+ {
+ fixpoint = fann_save_to_fixed(ann, configuration_file.c_str());
+ }
+ return fixpoint;
+ }
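[Editor's example] The bit position returned above maps directly to a resolution: with the fractional part holding decimal_point bits, weights are stored as integers scaled by 2^decimal_point, so the smallest representable step is 2^-decimal_point. A sketch of that relation (fixed_point_step is an illustrative name; the threshold of 6 mentioned above corresponds to a step of 1/64):

```cpp
#include <cmath>

// Resolution implied by a fixed-point position: weights scaled by
// 2^decimal_point can only change in steps of 2^-decimal_point. A position
// below 6 therefore means steps coarser than 1/64, which the documentation
// above flags as poor precision.
double fixed_point_step(int decimal_point)
{
    return std::ldexp(1.0, -decimal_point); // exact 2^-decimal_point
}
```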
+
+#ifndef FIXEDFANN
+ /* Method: train
+
+ Train one iteration with a set of inputs, and a set of desired outputs.
+ This training is always incremental training (see <FANN::training_algorithm_enum>),
+ since only one pattern is presented.
+
+ Parameters:
+ ann - The neural network structure
+ input - an array of inputs. This array must be exactly <fann_get_num_input> long.
+ desired_output - an array of desired outputs. This array must be exactly <fann_get_num_output> long.
+
+ See also:
+ <train_on_data>, <train_epoch>, <fann_train>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void train(fann_type *input, fann_type *desired_output)
+ {
+ if (ann != NULL)
+ {
+ fann_train(ann, input, desired_output);
+ }
+ }
+
+ /* Method: train_epoch
+ Train one epoch with a set of training data.
+
+ Train one epoch with the training data stored in data. One epoch is where all of
+ the training data is considered exactly once.
+
+ This function returns the MSE as it is calculated either before or during
+ the actual training. This is not the actual MSE after the training epoch, but since
+ calculating it would require going through the entire training set once more, this value is
+ more than adequate to use during training.
+
+ The training algorithm used by this function is chosen by the <fann_set_training_algorithm>
+ function.
+
+ See also:
+ <train_on_data>, <test_data>, <fann_train_epoch>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float train_epoch(const training_data &data)
+ {
+ float mse = 0.0f;
+ if ((ann != NULL) && (data.train_data != NULL))
+ {
+ mse = fann_train_epoch(ann, data.train_data);
+ }
+ return mse;
+ }
+
+ /* Method: train_on_data
+
+ Trains on an entire dataset for a period of time.
+
+ This training uses the training algorithm chosen by <set_training_algorithm>,
+ and the parameters set for these training algorithms.
+
+ Parameters:
+ ann - The neural network
+ data - The data, which should be used during training
+ max_epochs - The maximum number of epochs the training should continue
+ epochs_between_reports - The number of epochs between printing a status report to stdout.
+ A value of zero means no reports should be printed.
+ desired_error - The desired <get_MSE> or <get_bit_fail>, depending on which stop function
+ is chosen by <set_train_stop_function>.
+
+ Instead of printing out reports every epochs_between_reports, a callback function can be called
+ (see <set_callback>).
+
+ See also:
+ <train_on_file>, <train_epoch>, <fann_train_on_data>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void train_on_data(const training_data &data, unsigned int max_epochs,
+ unsigned int epochs_between_reports, float desired_error)
+ {
+ if ((ann != NULL) && (data.train_data != NULL))
+ {
+ fann_train_on_data(ann, data.train_data, max_epochs,
+ epochs_between_reports, desired_error);
+ }
+ }
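[Editor's example] The control flow described above (epoch loop, periodic reporting, stop when the desired error is reached) can be skeletonized independently of FANN. Here train_one_epoch and report are caller-supplied stand-ins for fann_train_epoch and the status printout, and the alternative bit-fail stop criterion is omitted:

```cpp
// Skeleton of the loop train_on_data() drives: train epochs until the error
// target is reached or max_epochs is exhausted, reporting periodically.
// TrainFn/ReportFn are caller-supplied stand-ins, not FANN types.
template <typename TrainFn, typename ReportFn>
unsigned int training_loop(TrainFn train_one_epoch, ReportFn report,
                           unsigned int max_epochs,
                           unsigned int epochs_between_reports,
                           float desired_error)
{
    for (unsigned int epoch = 1; epoch <= max_epochs; epoch++)
    {
        const float mse = train_one_epoch();
        if (epochs_between_reports != 0 && epoch % epochs_between_reports == 0)
            report(epoch, mse);
        if (mse <= desired_error)
            return epoch; // early stop: desired error reached
    }
    return max_epochs;
}
```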
+
+ /* Method: train_on_file
+
+ Does the same as <train_on_data>, but reads the training data directly from a file.
+
+ See also:
+ <train_on_data>, <fann_train_on_file>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void train_on_file(const std::string &filename, unsigned int max_epochs,
+ unsigned int epochs_between_reports, float desired_error)
+ {
+ if (ann != NULL)
+ {
+ fann_train_on_file(ann, filename.c_str(),
+ max_epochs, epochs_between_reports, desired_error);
+ }
+ }
+#endif /* NOT FIXEDFANN */
+
+ /* Method: test
+
+ Test with a set of inputs, and a set of desired outputs.
+ This operation updates the mean square error, but does not
+ change the network in any way.
+
+ See also:
+ <test_data>, <train>, <fann_test>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ fann_type * test(fann_type *input, fann_type *desired_output)
+ {
+ fann_type * output = NULL;
+ if (ann != NULL)
+ {
+ output = fann_test(ann, input, desired_output);
+ }
+ return output;
+ }
+
+ /* Method: test_data
+
+ Tests a set of training data and calculates the MSE for the training data.
+
+ This function updates the MSE and the bit fail values.
+
+ See also:
+ <test>, <get_MSE>, <get_bit_fail>, <fann_test_data>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float test_data(const training_data &data)
+ {
+ float mse = 0.0f;
+ if ((ann != NULL) && (data.train_data != NULL))
+ {
+ mse = fann_test_data(ann, data.train_data);
+ }
+ return mse;
+ }
+
+ /* Method: get_MSE
+ Reads the mean square error from the network.
+
+ Reads the mean square error from the network. This value is calculated during
+ training or testing, and can therefore sometimes be a bit off if the weights
+ have been changed since the last calculation of the value.
+
+ See also:
+ <test_data>, <fann_get_MSE>
+
+ This function appears in FANN >= 1.1.0.
+ */
+ float get_MSE()
+ {
+ float mse = 0.0f;
+ if (ann != NULL)
+ {
+ mse = fann_get_MSE(ann);
+ }
+ return mse;
+ }
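[Editor's example] The quantity get_MSE reports is the familiar mean of squared differences between network output and desired output. A simplified standalone sketch over one flattened array (FANN accumulates this across all patterns and output neurons; mean_square_error is an illustrative name):

```cpp
#include <cstddef>
#include <vector>

// Simplified sketch of the mean square error that get_MSE() reports:
// the average of squared differences between outputs and desired targets.
double mean_square_error(const std::vector<double> &output,
                         const std::vector<double> &target)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < output.size(); i++)
    {
        const double diff = output[i] - target[i];
        sum += diff * diff;
    }
    return output.empty() ? 0.0 : sum / static_cast<double>(output.size());
}
```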
+
+ /* Method: reset_MSE
+
+ Resets the mean square error from the network.
+
+ This function also resets the number of bits that fail.
+
+ See also:
+ <get_MSE>, <get_bit_fail_limit>, <fann_reset_MSE>
+
+ This function appears in FANN >= 1.1.0
+ */
+ void reset_MSE()
+ {
+ if (ann != NULL)
+ {
+ fann_reset_MSE(ann);
+ }
+ }
+
+ /* Method: set_callback
+
+ Sets the callback function for use during training. The user_data is passed to
+ the callback. It can point to arbitrary data that the callback might require and
+ can be NULL if it is not used.
+
+ See <FANN::callback_type> for more information about the callback function.
+
+ The default callback function simply prints out some status information.
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_callback(callback_type callback, void *user_data)
+ {
+ if (ann != NULL)
+ {
+ // Allocated data is also deleted in the destroy method called by the destructor
+ user_context *user_instance = static_cast<user_context *>(fann_get_user_data(ann));
+ if (user_instance != NULL)
+ delete user_instance;
+
+ user_instance = new user_context();
+ user_instance->user_callback = callback;
+ user_instance->user_data = user_data;
+ user_instance->net = this;
+ fann_set_user_data(ann, user_instance);
+
+ if (callback != NULL)
+ fann_set_callback(ann, &FANN::neural_net::internal_callback);
+ else
+ fann_set_callback(ann, NULL);
+ }
+ }
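[Editor's example] The ownership pattern in set_callback (a heap-allocated context bundling the user's callback and opaque data, later freed in destroy, with a single static trampoline unpacking it) can be shown in miniature. Context and hook here are illustrative names, not FANN API:

```cpp
// Miniature of the trampoline pattern used by set_callback(): a context
// object bundles the user's callback with opaque user data, and one
// fixed-signature hook forwards to it. Context/hook are illustrative names.
struct Context
{
    int (*user_callback)(unsigned int epoch, void *user_data);
    void *user_data;
};

int hook(const Context &ctx, unsigned int epoch)
{
    return ctx.user_callback(epoch, ctx.user_data);
}
```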
+
+ /* Method: print_parameters
+
+ Prints all of the parameters and options of the neural network
+
+ See also:
+ <fann_print_parameters>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void print_parameters()
+ {
+ if (ann != NULL)
+ {
+ fann_print_parameters(ann);
+ }
+ }
+
+ /* Method: get_training_algorithm
+
+ Return the training algorithm as described by <FANN::training_algorithm_enum>.
+ This training algorithm is used by <train_on_data> and associated functions.
+
+ Note that this algorithm is also used during <cascadetrain_on_data>, although only
+ FANN::TRAIN_RPROP and FANN::TRAIN_QUICKPROP are allowed during cascade training.
+
+ The default training algorithm is FANN::TRAIN_RPROP.
+
+ See also:
+ <set_training_algorithm>, <FANN::training_algorithm_enum>,
+ <fann_get_training_algorithm>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ training_algorithm_enum get_training_algorithm()
+ {
+ fann_train_enum training_algorithm = FANN_TRAIN_INCREMENTAL;
+ if (ann != NULL)
+ {
+ training_algorithm = fann_get_training_algorithm(ann);
+ }
+ return static_cast<training_algorithm_enum>(training_algorithm);
+ }
+
+ /* Method: set_training_algorithm
+
+ Set the training algorithm.
+
+ More info available in <get_training_algorithm>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void set_training_algorithm(training_algorithm_enum training_algorithm)
+ {
+ if (ann != NULL)
+ {
+ fann_set_training_algorithm(ann,
+ static_cast<fann_train_enum>(training_algorithm));
+ }
+ }
+
+ /* Method: get_learning_rate
+
+ Return the learning rate.
+
+ The learning rate is used to determine how aggressive training should be for some of the
+ training algorithms (FANN::TRAIN_INCREMENTAL, FANN::TRAIN_BATCH, FANN::TRAIN_QUICKPROP).
+ Note, however, that it is not used by FANN::TRAIN_RPROP.
+
+ The default learning rate is 0.7.
+
+ See also:
+ <set_learning_rate>, <set_training_algorithm>,
+ <fann_get_learning_rate>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ float get_learning_rate()
+ {
+ float learning_rate = 0.0f;
+ if (ann != NULL)
+ {
+ learning_rate = fann_get_learning_rate(ann);
+ }
+ return learning_rate;
+ }
+
+ /* Method: set_learning_rate
+
+ Set the learning rate.
+
+ More info available in <get_learning_rate>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void set_learning_rate(float learning_rate)
+ {
+ if (ann != NULL)
+ {
+ fann_set_learning_rate(ann, learning_rate);
+ }
+ }
+
+ /*************************************************************************************************************/
+
+ /* Method: get_activation_function
+
+ Get the activation function for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to get activation functions for the neurons in the input layer.
+
+ Information about the individual activation functions is available at <FANN::activation_function_enum>.
+
+ Returns:
+ The activation function for the neuron or -1 if the neuron is not defined in the neural network.
+
+ See also:
+ <set_activation_function_layer>, <set_activation_function_hidden>,
+ <set_activation_function_output>, <set_activation_steepness>,
+ <set_activation_function>, <fann_get_activation_function>
+
+ This function appears in FANN >= 2.1.0
+ */
+ activation_function_enum get_activation_function(int layer, int neuron)
+ {
+ unsigned int activation_function = 0;
+ if (ann != NULL)
+ {
+ activation_function = fann_get_activation_function(ann, layer, neuron);
+ }
+ return static_cast<activation_function_enum>(activation_function);
+ }
+
+ /* Method: set_activation_function
+
+ Set the activation function for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation functions for the neurons in the input layer.
+
+ When choosing an activation function it is important to note that the activation
+ functions have different ranges. FANN::SIGMOID is e.g. in the 0 - 1 range, while
+ FANN::SIGMOID_SYMMETRIC is in the -1 - 1 range, and FANN::LINEAR is unbounded.
+
+ Information about the individual activation functions is available at <FANN::activation_function_enum>.
+
+ The default activation function is FANN::SIGMOID_STEPWISE.
+
+ See also:
+ <set_activation_function_layer>, <set_activation_function_hidden>,
+ <set_activation_function_output>, <set_activation_steepness>,
+ <get_activation_function>, <fann_set_activation_function>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_activation_function(activation_function_enum activation_function, int layer, int neuron)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_function(ann,
+ static_cast<fann_activationfunc_enum>(activation_function), layer, neuron);
+ }
+ }
+
+ /* Method: set_activation_function_layer
+
+ Set the activation function for all the neurons in the layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation functions for the neurons in the input layer.
+
+ See also:
+ <set_activation_function>, <set_activation_function_hidden>,
+ <set_activation_function_output>, <set_activation_steepness_layer>,
+ <fann_set_activation_function_layer>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_activation_function_layer(activation_function_enum activation_function, int layer)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_function_layer(ann,
+ static_cast<fann_activationfunc_enum>(activation_function), layer);
+ }
+ }
+
+ /* Method: set_activation_function_hidden
+
+ Set the activation function for all of the hidden layers.
+
+ See also:
+ <set_activation_function>, <set_activation_function_layer>,
+ <set_activation_function_output>, <set_activation_steepness_hidden>,
+ <fann_set_activation_function_hidden>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void set_activation_function_hidden(activation_function_enum activation_function)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_function_hidden(ann,
+ static_cast<fann_activationfunc_enum>(activation_function));
+ }
+ }
+
+ /* Method: set_activation_function_output
+
+ Set the activation function for the output layer.
+
+ See also:
+ <set_activation_function>, <set_activation_function_layer>,
+ <set_activation_function_hidden>, <set_activation_steepness_output>,
+ <fann_set_activation_function_output>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ void set_activation_function_output(activation_function_enum activation_function)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_function_output(ann,
+ static_cast<fann_activationfunc_enum>(activation_function));
+ }
+ }
+
+ /* Method: get_activation_steepness
+
+ Get the activation steepness for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to get activation steepness for the neurons in the input layer.
+
+ The steepness of an activation function says something about how fast the activation function
+ goes from the minimum to the maximum. A high value for the steepness will also
+ give more aggressive training.
+
+ When training neural networks where the output values should be at the extremes (usually 0 and 1,
+ depending on the activation function), a steep activation function can be used (e.g. 1.0).
+
+ The default activation steepness is 0.5.
+
+ Returns:
+ The activation steepness for the neuron or -1 if the neuron is not defined in the neural network.
+
+ See also:
+ <set_activation_steepness_layer>, <set_activation_steepness_hidden>,
+ <set_activation_steepness_output>, <set_activation_function>,
+ <set_activation_steepness>, <fann_get_activation_steepness>
+
+ This function appears in FANN >= 2.1.0
+ */
+ fann_type get_activation_steepness(int layer, int neuron)
+ {
+ fann_type activation_steepness = 0;
+ if (ann != NULL)
+ {
+ activation_steepness = fann_get_activation_steepness(ann, layer, neuron);
+ }
+ return activation_steepness;
+ }
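[Editor's example] To see concretely what the steepness parameter does, here is the sigmoid activation in the form FANN documents it, 1 / (1 + e^(-2 * steepness * sum)); the library's stepwise variants approximate this curve. A sketch, not the library's exact code:

```cpp
#include <cmath>

// Sigmoid with an explicit steepness parameter: a larger steepness drives
// the output toward its 0/1 extremes faster, which is why steep functions
// suit targets at the extremes, as the documentation above notes.
double sigmoid(double steepness, double sum)
{
    return 1.0 / (1.0 + std::exp(-2.0 * steepness * sum));
}
```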
+
+ /* Method: set_activation_steepness
+
+ Set the activation steepness for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation steepness for the neurons in the input layer.
+
+ The steepness of an activation function says something about how fast the activation function
+ goes from the minimum to the maximum. A high value for the steepness will also
+ give more aggressive training.
+
+ When training neural networks where the output values should be at the extremes (usually 0 and 1,
+ depending on the activation function), a steep activation function can be used (e.g. 1.0).
+
+ The default activation steepness is 0.5.
+
+ See also:
+ <set_activation_steepness_layer>, <set_activation_steepness_hidden>,
+ <set_activation_steepness_output>, <set_activation_function>,
+ <get_activation_steepness>, <fann_set_activation_steepness>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_activation_steepness(fann_type steepness, int layer, int neuron)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_steepness(ann, steepness, layer, neuron);
+ }
+ }
+
+ /* Method: set_activation_steepness_layer
+
+ Set the activation steepness for all of the neurons in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation steepness for the neurons in the input layer.
+
+ See also:
+ <set_activation_steepness>, <set_activation_steepness_hidden>,
+ <set_activation_steepness_output>, <set_activation_function_layer>,
+ <fann_set_activation_steepness_layer>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_activation_steepness_layer(fann_type steepness, int layer)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_steepness_layer(ann, steepness, layer);
+ }
+ }
+
+ /* Method: set_activation_steepness_hidden
+
+ Set the steepness of the activation functions in all of the hidden layers.
+
+ See also:
+ <set_activation_steepness>, <set_activation_steepness_layer>,
+ <set_activation_steepness_output>, <set_activation_function_hidden>,
+ <fann_set_activation_steepness_hidden>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_activation_steepness_hidden(fann_type steepness)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_steepness_hidden(ann, steepness);
+ }
+ }
+
+ /* Method: set_activation_steepness_output
+
+ Set the steepness of the activation functions in the output layer.
+
+ See also:
+ <set_activation_steepness>, <set_activation_steepness_layer>,
+ <set_activation_steepness_hidden>, <set_activation_function_output>,
+ <fann_set_activation_steepness_output>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_activation_steepness_output(fann_type steepness)
+ {
+ if (ann != NULL)
+ {
+ fann_set_activation_steepness_output(ann, steepness);
+ }
+ }
+
+ /*************************************************************************************************************/
+
+ /* Method: get_train_error_function
+
+ Returns the error function used during training.
+
+ The error function is described further in <FANN::error_function_enum>.
+
+ The default error function is FANN::ERRORFUNC_TANH.
+
+ See also:
+ <set_train_error_function>, <fann_get_train_error_function>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ error_function_enum get_train_error_function()
+ {
+ fann_errorfunc_enum train_error_function = FANN_ERRORFUNC_LINEAR;
+ if (ann != NULL)
+ {
+ train_error_function = fann_get_train_error_function(ann);
+ }
+ return static_cast<error_function_enum>(train_error_function);
+ }
+
+ /* Method: set_train_error_function
+
+ Set the error function used during training.
+
+ The error function is described further in <FANN::error_function_enum>.
+
+ See also:
+ <get_train_error_function>, <fann_set_train_error_function>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_train_error_function(error_function_enum train_error_function)
+ {
+ if (ann != NULL)
+ {
+ fann_set_train_error_function(ann,
+ static_cast<fann_errorfunc_enum>(train_error_function));
+ }
+ }
+
+ /* Method: get_quickprop_decay
+
+ The decay is a small negative number which is the factor by which the weights
+ should become smaller in each iteration during quickprop training. This is used
+ to make sure that the weights do not become too high during training.
+
+ The default decay is -0.0001.
+
+ See also:
+ <set_quickprop_decay>, <fann_get_quickprop_decay>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_quickprop_decay()
+ {
+ float quickprop_decay = 0.0f;
+ if (ann != NULL)
+ {
+ quickprop_decay = fann_get_quickprop_decay(ann);
+ }
+ return quickprop_decay;
+ }
+
+ /* Method: set_quickprop_decay
+
+ Sets the quickprop decay factor.
+
+ See also:
+ <get_quickprop_decay>, <fann_set_quickprop_decay>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_quickprop_decay(float quickprop_decay)
+ {
+ if (ann != NULL)
+ {
+ fann_set_quickprop_decay(ann, quickprop_decay);
+ }
+ }
+
+ /* Method: get_quickprop_mu
+
+ The mu factor is used to increase and decrease the step-size during quickprop training.
+ The mu factor should always be above 1, since it would otherwise decrease the step-size
+ when it was supposed to increase it.
+
+ The default mu factor is 1.75.
+
+ See also:
+ <set_quickprop_mu>, <fann_get_quickprop_mu>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_quickprop_mu()
+ {
+ float quickprop_mu = 0.0f;
+ if (ann != NULL)
+ {
+ quickprop_mu = fann_get_quickprop_mu(ann);
+ }
+ return quickprop_mu;
+ }
+
+ /* Method: set_quickprop_mu
+
+ Sets the quickprop mu factor.
+
+ See also:
+ <get_quickprop_mu>, <fann_set_quickprop_mu>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_quickprop_mu(float quickprop_mu)
+ {
+ if (ann != NULL)
+ {
+ fann_set_quickprop_mu(ann, quickprop_mu);
+ }
+ }
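A literal sketch of these two quickprop parameters as the comments above describe them (FANN's internal quickprop formulas are more involved; the helper names here are ours, for illustration only):

```python
def apply_quickprop_decay(weight, decay=-0.0001):
    """Weight decay as described above: the (negative) decay is the factor
    by which a weight is made smaller in each iteration."""
    return weight + decay * weight

def limit_quickprop_step(step, prev_step, mu=1.75):
    """The mu factor caps step growth: a step is never allowed to be more
    than mu times the previous step."""
    return min(step, mu * prev_step)

w = apply_quickprop_decay(1.0)            # shrinks slightly toward zero
capped = limit_quickprop_step(5.0, 2.0)   # capped at 1.75 * 2.0 = 3.5
```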
+
+ /* Method: get_rprop_increase_factor
+
+ The increase factor is a value larger than 1, which is used to
+ increase the step-size during RPROP training.
+
+ The default increase factor is 1.2.
+
+ See also:
+ <set_rprop_increase_factor>, <fann_get_rprop_increase_factor>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_rprop_increase_factor()
+ {
+ float factor = 0.0f;
+ if (ann != NULL)
+ {
+ factor = fann_get_rprop_increase_factor(ann);
+ }
+ return factor;
+ }
+
+ /* Method: set_rprop_increase_factor
+
+ The increase factor used during RPROP training.
+
+ See also:
+ <get_rprop_increase_factor>, <fann_set_rprop_increase_factor>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_rprop_increase_factor(float rprop_increase_factor)
+ {
+ if (ann != NULL)
+ {
+ fann_set_rprop_increase_factor(ann, rprop_increase_factor);
+ }
+ }
+
+ /* Method: get_rprop_decrease_factor
+
+ The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.
+
+ The default decrease factor is 0.5.
+
+ See also:
+ <set_rprop_decrease_factor>, <fann_get_rprop_decrease_factor>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_rprop_decrease_factor()
+ {
+ float factor = 0.0f;
+ if (ann != NULL)
+ {
+ factor = fann_get_rprop_decrease_factor(ann);
+ }
+ return factor;
+ }
+
+ /* Method: set_rprop_decrease_factor
+
+ The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.
+
+ See also:
+ <get_rprop_decrease_factor>, <fann_set_rprop_decrease_factor>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_rprop_decrease_factor(float rprop_decrease_factor)
+ {
+ if (ann != NULL)
+ {
+ fann_set_rprop_decrease_factor(ann, rprop_decrease_factor);
+ }
+ }
+
+ /* Method: get_rprop_delta_zero
+
+ The initial step-size is a small positive number determining the size of the initial step used during RPROP training.
+
+ The default delta zero is 0.1.
+
+ See also:
+ <set_rprop_delta_zero>, <fann_get_rprop_delta_zero>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ float get_rprop_delta_zero()
+ {
+ float delta = 0.0f;
+ if (ann != NULL)
+ {
+ delta = fann_get_rprop_delta_zero(ann);
+ }
+ return delta;
+ }
+
+ /* Method: set_rprop_delta_zero
+
+ The initial step-size is a small positive number determining the size of the initial step used during RPROP training.
+
+ See also:
+ <get_rprop_delta_zero>, <fann_set_rprop_delta_zero>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void set_rprop_delta_zero(float rprop_delta_zero)
+ {
+ if (ann != NULL)
+ {
+ fann_set_rprop_delta_zero(ann, rprop_delta_zero);
+ }
+ }
+
+ /* Method: get_rprop_delta_min
+
+ The minimum step-size is a small positive number determining how small the step-size may become.
+
+ The default delta min is 0.0.
+
+ See also:
+ <set_rprop_delta_min>, <fann_get_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_rprop_delta_min()
+ {
+ float delta = 0.0f;
+ if (ann != NULL)
+ {
+ delta = fann_get_rprop_delta_min(ann);
+ }
+ return delta;
+ }
+
+ /* Method: set_rprop_delta_min
+
+ The minimum step-size is a small positive number determining how small the step-size may become.
+
+ See also:
+ <get_rprop_delta_min>, <fann_set_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_rprop_delta_min(float rprop_delta_min)
+ {
+ if (ann != NULL)
+ {
+ fann_set_rprop_delta_min(ann, rprop_delta_min);
+ }
+ }
+
+ /* Method: get_rprop_delta_max
+
+ The maximum step-size is a positive number determining how large the step-size may grow.
+
+ The default delta max is 50.0.
+
+ See also:
+ <set_rprop_delta_max>, <get_rprop_delta_min>, <fann_get_rprop_delta_max>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ float get_rprop_delta_max()
+ {
+ float delta = 0.0f;
+ if (ann != NULL)
+ {
+ delta = fann_get_rprop_delta_max(ann);
+ }
+ return delta;
+ }
+
+ /* Method: set_rprop_delta_max
+
+ The maximum step-size is a positive number determining how large the step-size may grow.
+
+ See also:
+ <get_rprop_delta_max>, <get_rprop_delta_min>, <fann_set_rprop_delta_max>
+
+ This function appears in FANN >= 1.2.0.
+ */
+ void set_rprop_delta_max(float rprop_delta_max)
+ {
+ if (ann != NULL)
+ {
+ fann_set_rprop_delta_max(ann, rprop_delta_max);
+ }
+ }
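Taken together, the RPROP parameters above drive a simple step-size adaptation rule: grow the step while the gradient keeps its sign, shrink it on a sign flip, and keep it inside [delta_min, delta_max]. A minimal sketch using FANN's defaults (an illustration of the rule, not FANN's actual implementation; the function name is ours):

```python
def rprop_step_size(step, grad_sign, prev_grad_sign,
                    increase_factor=1.2, decrease_factor=0.5,
                    delta_min=0.0, delta_max=50.0):
    """One RPROP-style step-size update using FANN's default parameters."""
    if grad_sign * prev_grad_sign > 0:    # gradient kept its sign: speed up
        step *= increase_factor
    elif grad_sign * prev_grad_sign < 0:  # sign flip: overshot, slow down
        step *= decrease_factor
    # keep the step inside the configured bounds
    return min(max(step, delta_min), delta_max)

step = rprop_step_size(0.1, +1, +1)    # grows by 1.2 (delta_zero is 0.1)
step = rprop_step_size(step, -1, +1)   # sign flip: halved by the 0.5 factor
step = rprop_step_size(100.0, +1, +1)  # clamped to delta_max = 50.0
```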
+
+ /* Method: get_sarprop_weight_decay_shift
+
+ The sarprop weight decay shift.
+
+ The default weight decay shift is -6.644.
+
+ See also:
+ <set_sarprop_weight_decay_shift>, <fann_get_sarprop_weight_decay_shift>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ float get_sarprop_weight_decay_shift()
+ {
+ float res = 0.0f;
+ if (ann != NULL)
+ {
+ res = fann_get_sarprop_weight_decay_shift(ann);
+ }
+ return res;
+ }
+
+ /* Method: set_sarprop_weight_decay_shift
+
+ Set the sarprop weight decay shift.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <get_sarprop_weight_decay_shift>, <fann_set_sarprop_weight_decay_shift>
+ */
+ void set_sarprop_weight_decay_shift(float sarprop_weight_decay_shift)
+ {
+ if (ann != NULL)
+ {
+ fann_set_sarprop_weight_decay_shift(ann, sarprop_weight_decay_shift);
+ }
+ }
+
+ /* Method: get_sarprop_step_error_threshold_factor
+
+ The sarprop step error threshold factor.
+
+ The default step error threshold factor is 0.1.
+
+ See also:
+ <set_sarprop_step_error_threshold_factor>, <fann_get_sarprop_step_error_threshold_factor>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ float get_sarprop_step_error_threshold_factor()
+ {
+ float res = 0.0f;
+ if (ann != NULL)
+ {
+ res = fann_get_sarprop_step_error_threshold_factor(ann);
+ }
+ return res;
+ }
+
+ /* Method: set_sarprop_step_error_threshold_factor
+
+ Set the sarprop step error threshold factor.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <get_sarprop_step_error_threshold_factor>, <fann_set_sarprop_step_error_threshold_factor>
+ */
+ void set_sarprop_step_error_threshold_factor(float sarprop_step_error_threshold_factor)
+ {
+ if (ann != NULL)
+ {
+ fann_set_sarprop_step_error_threshold_factor(ann, sarprop_step_error_threshold_factor);
+ }
+ }
+
+ /* Method: get_sarprop_step_error_shift
+
+ The sarprop step error shift.
+
+ The default step error shift is 1.385.
+
+ See also:
+ <set_sarprop_step_error_shift>, <fann_get_sarprop_step_error_shift>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ float get_sarprop_step_error_shift()
+ {
+ float res = 0.0f;
+ if (ann != NULL)
+ {
+ res = fann_get_sarprop_step_error_shift(ann);
+ }
+ return res;
+ }
+
+ /* Method: set_sarprop_step_error_shift
+
+ Set the sarprop step error shift.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <get_sarprop_step_error_shift>, <fann_set_sarprop_step_error_shift>
+ */
+ void set_sarprop_step_error_shift(float sarprop_step_error_shift)
+ {
+ if (ann != NULL)
+ {
+ fann_set_sarprop_step_error_shift(ann, sarprop_step_error_shift);
+ }
+ }
+
+ /* Method: get_sarprop_temperature
+
+ The sarprop temperature.
+
+ The default temperature is 0.015.
+
+ See also:
+ <set_sarprop_temperature>, <fann_get_sarprop_temperature>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ float get_sarprop_temperature()
+ {
+ float res = 0.0f;
+ if (ann != NULL)
+ {
+ res = fann_get_sarprop_temperature(ann);
+ }
+ return res;
+ }
+
+ /* Method: set_sarprop_temperature
+
+ Set the sarprop temperature.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <get_sarprop_temperature>, <fann_set_sarprop_temperature>
+ */
+ void set_sarprop_temperature(float sarprop_temperature)
+ {
+ if (ann != NULL)
+ {
+ fann_set_sarprop_temperature(ann, sarprop_temperature);
+ }
+ }
+
+
+ /* Method: get_num_input
+
+ Get the number of input neurons.
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_num_input()
+ {
+ unsigned int num_input = 0;
+ if (ann != NULL)
+ {
+ num_input = fann_get_num_input(ann);
+ }
+ return num_input;
+ }
+
+ /* Method: get_num_output
+
+ Get the number of output neurons.
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_num_output()
+ {
+ unsigned int num_output = 0;
+ if (ann != NULL)
+ {
+ num_output = fann_get_num_output(ann);
+ }
+ return num_output;
+ }
+
+ /* Method: get_total_neurons
+
+ Get the total number of neurons in the entire network. This number also includes the
+ bias neurons, so a 2-4-2 network has 2+4+2 + 2 (bias) = 10 neurons.
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_total_neurons()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_total_neurons(ann);
+ }
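The bias arithmetic in the comment above can be stated directly: every layer except the output layer carries one bias neuron. A tiny illustrative helper (not part of the wrapper API):

```python
def total_neurons(layers):
    """Total neurons including bias: one bias neuron per layer
    except the output layer."""
    return sum(layers) + (len(layers) - 1)

assert total_neurons([2, 4, 2]) == 10  # the 2-4-2 example above
```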
+
+ /* Method: get_total_connections
+
+ Get the total number of connections in the entire network.
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_total_connections()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_total_connections(ann);
+ }
+
+#ifdef FIXEDFANN
+ /* Method: get_decimal_point
+
+ Returns the position of the decimal point in the ann.
+
+ This function is only available when the ANN is in fixed point mode.
+
+ The decimal point is described in greater detail in the tutorial <Fixed Point Usage>.
+
+ See also:
+ <Fixed Point Usage>, <get_multiplier>, <save_to_fixed>,
+ <training_data::save_train_to_fixed>, <fann_get_decimal_point>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_decimal_point()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_decimal_point(ann);
+ }
+
+ /* Method: get_multiplier
+
+ Returns the multiplier that fix point data is multiplied with.
+
+ This function is only available when the ANN is in fixed point mode.
+
+ The multiplier is used to convert between floating point and fixed point notation.
+ A floating point number is multiplied by the multiplier in order to get the fixed point
+ number, and vice versa.
+
+ The multiplier is described in greater detail in the tutorial <Fixed Point Usage>.
+
+ See also:
+ <Fixed Point Usage>, <get_decimal_point>, <save_to_fixed>,
+ <training_data::save_train_to_fixed>, <fann_get_multiplier>
+
+ This function appears in FANN >= 1.0.0.
+ */
+ unsigned int get_multiplier()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_multiplier(ann);
+ }
+#endif /* FIXEDFANN */
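In FANN the multiplier is a power of two (1 << decimal_point), so the float/fixed conversion described above is just a scale and a round. A hedged sketch in Python, assuming that relationship (the helper names are ours):

```python
def to_fixed(value, decimal_point):
    """Multiply by the multiplier (2**decimal_point) and round."""
    return int(round(value * (1 << decimal_point)))

def to_float(fixed, decimal_point):
    """Divide by the multiplier to get back to floating point."""
    return fixed / (1 << decimal_point)

# with a decimal point of 6 the multiplier is 64
assert to_fixed(0.5, 6) == 32
assert to_float(32, 6) == 0.5
```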
+
+ /*********************************************************************/
+
+ /* Method: get_network_type
+
+ Get the type of neural network it was created as.
+
+ Returns:
+ The neural network type from enum <FANN::network_type_enum>
+
+ See Also:
+ <fann_get_network_type>
+
+ This function appears in FANN >= 2.1.0
+ */
+ network_type_enum get_network_type()
+ {
+ fann_nettype_enum network_type = FANN_NETTYPE_LAYER;
+ if (ann != NULL)
+ {
+ network_type = fann_get_network_type(ann);
+ }
+ return static_cast<network_type_enum>(network_type);
+ }
+
+ /* Method: get_connection_rate
+
+ Get the connection rate used when the network was created
+
+ Returns:
+ The connection rate
+
+ See also:
+ <fann_get_connection_rate>
+
+ This function appears in FANN >= 2.1.0
+ */
+ float get_connection_rate()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_connection_rate(ann);
+ }
+
+ /* Method: get_num_layers
+
+ Get the number of layers in the network
+
+ Returns:
+ The number of layers in the neural network
+
+ See also:
+ <fann_get_num_layers>
+
+ This function appears in FANN >= 2.1.0
+ */
+ unsigned int get_num_layers()
+ {
+ if (ann == NULL)
+ {
+ return 0;
+ }
+ return fann_get_num_layers(ann);
+ }
+
+ /* Method: get_layer_array
+
+ Get the number of neurons in each layer in the network.
+
+ Bias is not included so the layers match the create methods.
+
+ The layers array must be preallocated to at least
+ sizeof(unsigned int) * get_num_layers() long.
+
+ See also:
+ <fann_get_layer_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void get_layer_array(unsigned int *layers)
+ {
+ if (ann != NULL)
+ {
+ fann_get_layer_array(ann, layers);
+ }
+ }
+
+ /* Method: get_bias_array
+
+ Get the number of bias neurons in each layer in the network.
+
+ The bias array must be preallocated to at least
+ sizeof(unsigned int) * get_num_layers() long.
+
+ See also:
+ <fann_get_bias_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void get_bias_array(unsigned int *bias)
+ {
+ if (ann != NULL)
+ {
+ fann_get_bias_array(ann, bias);
+ }
+ }
+
+ /* Method: get_connection_array
+
+ Get the connections in the network.
+
+ The connections array must be preallocated to at least
+ sizeof(struct fann_connection) * get_total_connections() long.
+
+ See also:
+ <fann_get_connection_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void get_connection_array(connection *connections)
+ {
+ if (ann != NULL)
+ {
+ fann_get_connection_array(ann, connections);
+ }
+ }
+
+ /* Method: set_weight_array
+
+ Set connections in the network.
+
+ Only the weights can be changed; connections and weights are ignored
+ if they do not already exist in the network.
+
+ The array must have sizeof(struct fann_connection) * num_connections size.
+
+ See also:
+ <fann_set_weight_array>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void set_weight_array(connection *connections, unsigned int num_connections)
+ {
+ if (ann != NULL)
+ {
+ fann_set_weight_array(ann, connections, num_connections);
+ }
+ }
+
+ /* Method: set_weight
+
+ Set a connection in the network.
+
+ Only the weights can be changed. The connection/weight is
+ ignored if it does not already exist in the network.
+
+ See also:
+ <fann_set_weight>
+
+ This function appears in FANN >= 2.1.0
+ */
+ void set_weight(unsigned int from_neuron, unsigned int to_neuron, fann_type weight)
+ {
+ if (ann != NULL)
+ {
+ fann_set_weight(ann, from_neuron, to_neuron, weight);
+ }
+ }
+
+ /*********************************************************************/
+
+ /* Method: get_learning_momentum
+
+ Get the learning momentum.
+
+ The learning momentum can be used to speed up FANN::TRAIN_INCREMENTAL training.
+ Too high a momentum will, however, not benefit training. Setting momentum to 0 is
+ the same as not using the momentum parameter. The recommended value for this parameter
+ is between 0.0 and 1.0.
+
+ The default momentum is 0.
+
+ See also:
+ <set_learning_momentum>, <set_training_algorithm>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ float get_learning_momentum()
+ {
+ float learning_momentum = 0.0f;
+ if (ann != NULL)
+ {
+ learning_momentum = fann_get_learning_momentum(ann);
+ }
+ return learning_momentum;
+ }
+
+ /* Method: set_learning_momentum
+
+ Set the learning momentum.
+
+ More info available in <get_learning_momentum>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_learning_momentum(float learning_momentum)
+ {
+ if (ann != NULL)
+ {
+ fann_set_learning_momentum(ann, learning_momentum);
+ }
+ }
+
+ /* Method: get_train_stop_function
+
+ Returns the stop function used during training.
+
+ The stop function is described further in <FANN::stop_function_enum>
+
+ The default stop function is FANN::STOPFUNC_MSE
+
+ See also:
+ <set_train_stop_function>, <get_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ stop_function_enum get_train_stop_function()
+ {
+ enum fann_stopfunc_enum stopfunc = FANN_STOPFUNC_MSE;
+ if (ann != NULL)
+ {
+ stopfunc = fann_get_train_stop_function(ann);
+ }
+ return static_cast<stop_function_enum>(stopfunc);
+ }
+
+ /* Method: set_train_stop_function
+
+ Set the stop function used during training.
+
+ The stop function is described further in <FANN::stop_function_enum>
+
+ See also:
+ <get_train_stop_function>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_train_stop_function(stop_function_enum train_stop_function)
+ {
+ if (ann != NULL)
+ {
+ fann_set_train_stop_function(ann,
+ static_cast<enum fann_stopfunc_enum>(train_stop_function));
+ }
+ }
+
+ /* Method: get_bit_fail_limit
+
+ Returns the bit fail limit used during training.
+
+ The bit fail limit is used during training when the <FANN::stop_function_enum> is set to FANN_STOPFUNC_BIT.
+
+ The limit is the maximum accepted difference between the desired output and the actual output during
+ training. Each output that diverges by more than this limit is counted as an error bit.
+ This difference is divided by two when dealing with symmetric activation functions,
+ so that symmetric and non-symmetric activation functions can use the same limit.
+
+ The default bit fail limit is 0.35.
+
+ See also:
+ <set_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ fann_type get_bit_fail_limit()
+ {
+ fann_type bit_fail_limit = 0.0f;
+
+ if (ann != NULL)
+ {
+ bit_fail_limit = fann_get_bit_fail_limit(ann);
+ }
+ return bit_fail_limit;
+ }
+
+ /* Method: set_bit_fail_limit
+
+ Set the bit fail limit used during training.
+
+ See also:
+ <get_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_bit_fail_limit(fann_type bit_fail_limit)
+ {
+ if (ann != NULL)
+ {
+ fann_set_bit_fail_limit(ann, bit_fail_limit);
+ }
+ }
+
+ /* Method: get_bit_fail
+
+ The number of fail bits, i.e. the number of output neurons which differ by more
+ than the bit fail limit (see <get_bit_fail_limit>, <set_bit_fail_limit>).
+ The bits are counted across all of the training data, so this number can be higher than
+ the number of training samples.
+
+ This value is reset by <reset_MSE> and updated by all the same functions which also
+ update the MSE value (e.g. <test_data>, <train_epoch>)
+
+ See also:
+ <FANN::stop_function_enum>, <get_MSE>
+
+ This function appears in FANN >= 2.0.0
+ */
+ unsigned int get_bit_fail()
+ {
+ unsigned int bit_fail = 0;
+ if (ann != NULL)
+ {
+ bit_fail = fann_get_bit_fail(ann);
+ }
+ return bit_fail;
+ }
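The bit fail count described above can be sketched as a plain loop over outputs and targets, including the halved difference for symmetric activation functions (an illustration of the counting rule, not the wrapper's code):

```python
def count_bit_fails(outputs, targets, bit_fail_limit=0.35, symmetric=False):
    """Count output values diverging from their target by more than the
    limit; the difference is halved for symmetric activation functions."""
    fails = 0
    for out, want in zip(outputs, targets):
        diff = abs(out - want)
        if symmetric:
            diff /= 2
        if diff > bit_fail_limit:
            fails += 1
    return fails

# only the third output diverges by more than the default 0.35 limit
assert count_bit_fails([0.9, 0.1, 0.5], [1.0, 0.0, 1.0]) == 1
```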
+
+ /*********************************************************************/
+
+ /* Method: cascadetrain_on_data
+
+ Trains on an entire dataset, for a period of time using the Cascade2 training algorithm.
+ This algorithm adds neurons to the neural network while training, which means that it
+ needs to start with an ANN without any hidden layers. The neural network should also use
+ shortcut connections, so <create_shortcut> should be used to create the ANN like this:
+ >net.create_shortcut(2, train_data.num_input_train_data(), train_data.num_output_train_data());
+
+ This training uses the parameters set with the set_cascade_... functions, but it also uses another
+ training algorithm as its internal training algorithm. This algorithm can be set to either
+ FANN::TRAIN_RPROP or FANN::TRAIN_QUICKPROP by <set_training_algorithm>, and the parameters
+ set for these training algorithms will also affect the cascade training.
+
+ Parameters:
+ data - The data, which should be used during training
+ max_neuron - The maximum number of neurons to be added to neural network
+ neurons_between_reports - The number of neurons between printing a status report to stdout.
+ A value of zero means no reports should be printed.
+ desired_error - The desired <fann_get_MSE> or <fann_get_bit_fail>, depending on which stop function
+ is chosen by <fann_set_train_stop_function>.
+
+ Instead of printing a report every neurons_between_reports neurons, a callback function can be called
+ (see <set_callback>).
+
+ See also:
+ <train_on_data>, <cascadetrain_on_file>, <fann_cascadetrain_on_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void cascadetrain_on_data(const training_data &data, unsigned int max_neurons,
+ unsigned int neurons_between_reports, float desired_error)
+ {
+ if ((ann != NULL) && (data.train_data != NULL))
+ {
+ fann_cascadetrain_on_data(ann, data.train_data, max_neurons,
+ neurons_between_reports, desired_error);
+ }
+ }
+
+ /* Method: cascadetrain_on_file
+
+ Does the same as <cascadetrain_on_data>, but reads the training data directly from a file.
+
+ See also:
+ <fann_cascadetrain_on_data>, <fann_cascadetrain_on_file>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void cascadetrain_on_file(const std::string &filename, unsigned int max_neurons,
+ unsigned int neurons_between_reports, float desired_error)
+ {
+ if (ann != NULL)
+ {
+ fann_cascadetrain_on_file(ann, filename.c_str(),
+ max_neurons, neurons_between_reports, desired_error);
+ }
+ }
+
+ /* Method: get_cascade_output_change_fraction
+
+ The cascade output change fraction is a number between 0 and 1 determining how large a fraction
+ the <get_MSE> value should change within <get_cascade_output_stagnation_epochs> during
+ training of the output connections, in order for the training not to stagnate. If the training
+ stagnates, the training of the output connections will be ended and new candidates will be prepared.
+
+ This means:
+ If the MSE does not change by a fraction of <get_cascade_output_change_fraction> during a
+ period of <get_cascade_output_stagnation_epochs>, the training of the output connections
+ is stopped because the training has stagnated.
+
+ If the cascade output change fraction is low, the output connections will be trained more and if the
+ fraction is high they will be trained less.
+
+ The default cascade output change fraction is 0.01, which is equivalent to a 1% change in MSE.
+
+ See also:
+ <set_cascade_output_change_fraction>, <get_MSE>,
+ <get_cascade_output_stagnation_epochs>, <fann_get_cascade_output_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ float get_cascade_output_change_fraction()
+ {
+ float change_fraction = 0.0f;
+ if (ann != NULL)
+ {
+ change_fraction = fann_get_cascade_output_change_fraction(ann);
+ }
+ return change_fraction;
+ }
+
+ /* Method: set_cascade_output_change_fraction
+
+ Sets the cascade output change fraction.
+
+ See also:
+ <get_cascade_output_change_fraction>, <fann_set_cascade_output_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_output_change_fraction(float cascade_output_change_fraction)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_output_change_fraction(ann, cascade_output_change_fraction);
+ }
+ }
+
+ /* Method: get_cascade_output_stagnation_epochs
+
+ The number of cascade output stagnation epochs determines the number of epochs training is allowed to
+ continue without changing the MSE by a fraction of <get_cascade_output_change_fraction>.
+
+ See more info about this parameter in <get_cascade_output_change_fraction>.
+
+ The default number of cascade output stagnation epochs is 12.
+
+ See also:
+ <set_cascade_output_stagnation_epochs>, <get_cascade_output_change_fraction>,
+ <fann_get_cascade_output_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_output_stagnation_epochs()
+ {
+ unsigned int stagnation_epochs = 0;
+ if (ann != NULL)
+ {
+ stagnation_epochs = fann_get_cascade_output_stagnation_epochs(ann);
+ }
+ return stagnation_epochs;
+ }
+
+ /* Method: set_cascade_output_stagnation_epochs
+
+ Sets the number of cascade output stagnation epochs.
+
+ See also:
+ <get_cascade_output_stagnation_epochs>, <fann_set_cascade_output_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_output_stagnation_epochs(unsigned int cascade_output_stagnation_epochs)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_output_stagnation_epochs(ann, cascade_output_stagnation_epochs);
+ }
+ }
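The interplay of the change fraction and the stagnation epochs can be sketched as a simple check over an MSE history (one plausible reading of the rule described above; FANN's exact bookkeeping may differ, and the helper name is ours):

```python
def output_training_stagnated(mse_history, change_fraction=0.01,
                              stagnation_epochs=12):
    """True if the MSE changed by less than change_fraction of its old
    value over the last stagnation_epochs epochs."""
    if len(mse_history) <= stagnation_epochs:
        return False
    old = mse_history[-1 - stagnation_epochs]
    new = mse_history[-1]
    return abs(new - old) < change_fraction * old

# thirteen epochs of flat MSE: training of the outputs would be stopped
assert output_training_stagnated([0.1] * 13)
```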
+
+ /* Method: get_cascade_candidate_change_fraction
+
+ The cascade candidate change fraction is a number between 0 and 1 determining how large a fraction
+ the <get_MSE> value should change within <get_cascade_candidate_stagnation_epochs> during
+ training of the candidate neurons, in order for the training not to stagnate. If the training
+ stagnates, the training of the candidate neurons will be ended and the best candidate will be selected.
+
+ This means:
+ If the MSE does not change by a fraction of <get_cascade_candidate_change_fraction> during a
+ period of <get_cascade_candidate_stagnation_epochs>, the training of the candidate neurons
+ is stopped because the training has stagnated.
+
+ If the cascade candidate change fraction is low, the candidate neurons will be trained more and if the
+ fraction is high they will be trained less.
+
+ The default cascade candidate change fraction is 0.01, which is equivalent to a 1% change in MSE.
+
+ See also:
+ <set_cascade_candidate_change_fraction>, <get_MSE>,
+ <get_cascade_candidate_stagnation_epochs>, <fann_get_cascade_candidate_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ float get_cascade_candidate_change_fraction()
+ {
+ float change_fraction = 0.0f;
+ if (ann != NULL)
+ {
+ change_fraction = fann_get_cascade_candidate_change_fraction(ann);
+ }
+ return change_fraction;
+ }
+
+ /* Method: set_cascade_candidate_change_fraction
+
+ Sets the cascade candidate change fraction.
+
+ See also:
+ <get_cascade_candidate_change_fraction>,
+ <fann_set_cascade_candidate_change_fraction>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_candidate_change_fraction(float cascade_candidate_change_fraction)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_candidate_change_fraction(ann, cascade_candidate_change_fraction);
+ }
+ }
+
+ /* Method: get_cascade_candidate_stagnation_epochs
+
+ The number of cascade candidate stagnation epochs determines the number of epochs training is allowed to
+ continue without changing the MSE by a fraction of <get_cascade_candidate_change_fraction>.
+
+ See more info about this parameter in <get_cascade_candidate_change_fraction>.
+
+ The default number of cascade candidate stagnation epochs is 12.
+
+ See also:
+ <set_cascade_candidate_stagnation_epochs>, <get_cascade_candidate_change_fraction>,
+ <fann_get_cascade_candidate_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_candidate_stagnation_epochs()
+ {
+ unsigned int stagnation_epochs = 0;
+ if (ann != NULL)
+ {
+ stagnation_epochs = fann_get_cascade_candidate_stagnation_epochs(ann);
+ }
+ return stagnation_epochs;
+ }
+
+ /* Method: set_cascade_candidate_stagnation_epochs
+
+ Sets the number of cascade candidate stagnation epochs.
+
+ See also:
+ <get_cascade_candidate_stagnation_epochs>,
+ <fann_set_cascade_candidate_stagnation_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_candidate_stagnation_epochs(unsigned int cascade_candidate_stagnation_epochs)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_candidate_stagnation_epochs(ann, cascade_candidate_stagnation_epochs);
+ }
+ }
+
+ /* Method: get_cascade_weight_multiplier
+
+ The weight multiplier is a parameter which is used to multiply the weights from the candidate neuron
+ before adding the neuron to the neural network. This parameter is usually between 0 and 1, and is used
+ to make the training a bit less aggressive.
+
+ The default weight multiplier is 0.4
+
+ See also:
+ <set_cascade_weight_multiplier>, <fann_get_cascade_weight_multiplier>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ fann_type get_cascade_weight_multiplier()
+ {
+ fann_type weight_multiplier = 0;
+ if (ann != NULL)
+ {
+ weight_multiplier = fann_get_cascade_weight_multiplier(ann);
+ }
+ return weight_multiplier;
+ }
+
+ /* Method: set_cascade_weight_multiplier
+
+ Sets the weight multiplier.
+
+ See also:
+ <get_cascade_weight_multiplier>, <fann_set_cascade_weight_multiplier>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_weight_multiplier(fann_type cascade_weight_multiplier)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_weight_multiplier(ann, cascade_weight_multiplier);
+ }
+ }
+
+ /* Method: get_cascade_candidate_limit
+
+ The candidate limit is a limit for how much the candidate neuron may be trained.
+ It limits the ratio between the MSE and the candidate score.
+
+ Set this to a lower value to avoid overfitting and to a higher if overfitting is
+ not a problem.
+
+ The default candidate limit is 1000.0
+
+ See also:
+ <set_cascade_candidate_limit>, <fann_get_cascade_candidate_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ fann_type get_cascade_candidate_limit()
+ {
+ fann_type candidate_limit = 0;
+ if (ann != NULL)
+ {
+ candidate_limit = fann_get_cascade_candidate_limit(ann);
+ }
+ return candidate_limit;
+ }
+
+ /* Method: set_cascade_candidate_limit
+
+ Sets the candidate limit.
+
+ See also:
+ <get_cascade_candidate_limit>, <fann_set_cascade_candidate_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_candidate_limit(fann_type cascade_candidate_limit)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_candidate_limit(ann, cascade_candidate_limit);
+ }
+ }
+
+ /* Method: get_cascade_max_out_epochs
+
+ The maximum out epochs determines the maximum number of epochs the output connections
+ may be trained after adding a new candidate neuron.
+
+ The default max out epochs is 150
+
+ See also:
+ <set_cascade_max_out_epochs>, <fann_get_cascade_max_out_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_max_out_epochs()
+ {
+ unsigned int max_out_epochs = 0;
+ if (ann != NULL)
+ {
+ max_out_epochs = fann_get_cascade_max_out_epochs(ann);
+ }
+ return max_out_epochs;
+ }
+
+ /* Method: set_cascade_max_out_epochs
+
+ Sets the maximum out epochs.
+
+ See also:
+ <get_cascade_max_out_epochs>, <fann_set_cascade_max_out_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_max_out_epochs(unsigned int cascade_max_out_epochs)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_max_out_epochs(ann, cascade_max_out_epochs);
+ }
+ }
+
+ /* Method: get_cascade_max_cand_epochs
+
+ The maximum candidate epochs determines the maximum number of epochs the input
+ connections to the candidates may be trained before adding a new candidate neuron.
+
+ The default max candidate epochs is 150
+
+ See also:
+ <set_cascade_max_cand_epochs>, <fann_get_cascade_max_cand_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_max_cand_epochs()
+ {
+ unsigned int max_cand_epochs = 0;
+ if (ann != NULL)
+ {
+ max_cand_epochs = fann_get_cascade_max_cand_epochs(ann);
+ }
+ return max_cand_epochs;
+ }
+
+ /* Method: set_cascade_max_cand_epochs
+
+ Sets the max candidate epochs.
+
+ See also:
+ <get_cascade_max_cand_epochs>, <fann_set_cascade_max_cand_epochs>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_max_cand_epochs(unsigned int cascade_max_cand_epochs)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_max_cand_epochs(ann, cascade_max_cand_epochs);
+ }
+ }
+
+ /* Method: get_cascade_num_candidates
+
+ The number of candidates used during training (calculated by multiplying <get_cascade_activation_functions_count>,
+ <get_cascade_activation_steepnesses_count> and <get_cascade_num_candidate_groups>).
+
+ The actual candidates are defined by the <get_cascade_activation_functions> and
+ <get_cascade_activation_steepnesses> arrays. These arrays define the activation functions
+ and activation steepnesses used for the candidate neurons. If there are 2 activation functions
+ in the activation function array and 3 steepnesses in the steepness array, then there will be
+ 2x3=6 different candidates which will be trained. These 6 different candidates can be copied into
+ several candidate groups, where the only difference between these groups is the initial weights.
+ If the number of groups is set to 2, then the number of candidate neurons will be 2x3x2=12. The
+ number of candidate groups is defined by <set_cascade_num_candidate_groups>.
+
+ The default number of candidates is 6x4x2 = 48
+
+ See also:
+ <get_cascade_activation_functions>, <get_cascade_activation_functions_count>,
+ <get_cascade_activation_steepnesses>, <get_cascade_activation_steepnesses_count>,
+ <get_cascade_num_candidate_groups>, <fann_get_cascade_num_candidates>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_num_candidates()
+ {
+ unsigned int num_candidates = 0;
+ if (ann != NULL)
+ {
+ num_candidates = fann_get_cascade_num_candidates(ann);
+ }
+ return num_candidates;
+ }
+
+ /* Method: get_cascade_activation_functions_count
+
+ The number of activation functions in the <get_cascade_activation_functions> array.
+
+ The default number of activation functions is 6.
+
+ See also:
+ <get_cascade_activation_functions>, <set_cascade_activation_functions>,
+ <fann_get_cascade_activation_functions_count>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_activation_functions_count()
+ {
+ unsigned int activation_functions_count = 0;
+ if (ann != NULL)
+ {
+ activation_functions_count = fann_get_cascade_activation_functions_count(ann);
+ }
+ return activation_functions_count;
+ }
+
+ /* Method: get_cascade_activation_functions
+
+ The cascade activation functions array is an array of the different activation functions used by
+ the candidates.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_functions_count>, <set_cascade_activation_functions>,
+ <FANN::activation_function_enum>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ activation_function_enum * get_cascade_activation_functions()
+ {
+ enum fann_activationfunc_enum *activation_functions = NULL;
+ if (ann != NULL)
+ {
+ activation_functions = fann_get_cascade_activation_functions(ann);
+ }
+ return reinterpret_cast<activation_function_enum *>(activation_functions);
+ }
+
+ /* Method: set_cascade_activation_functions
+
+ Sets the array of cascade candidate activation functions. The array must be just as long
+ as defined by the count.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_functions_count>, <get_cascade_activation_functions>,
+ <fann_set_cascade_activation_functions>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_activation_functions(activation_function_enum *cascade_activation_functions,
+ unsigned int cascade_activation_functions_count)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_activation_functions(ann,
+ reinterpret_cast<enum fann_activationfunc_enum *>(cascade_activation_functions),
+ cascade_activation_functions_count);
+ }
+ }
+
+ /* Method: get_cascade_activation_steepnesses_count
+
+ The number of activation steepnesses in the <get_cascade_activation_steepnesses> array.
+
+ The default number of activation steepnesses is 4.
+
+ See also:
+ <get_cascade_activation_steepnesses>, <set_cascade_activation_steepnesses>,
+ <fann_get_cascade_activation_steepnesses_count>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_activation_steepnesses_count()
+ {
+ unsigned int activation_steepness_count = 0;
+ if (ann != NULL)
+ {
+ activation_steepness_count = fann_get_cascade_activation_steepnesses_count(ann);
+ }
+ return activation_steepness_count;
+ }
+
+ /* Method: get_cascade_activation_steepnesses
+
+ The cascade activation steepnesses array is an array of the different activation steepnesses used by
+ the candidates.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ The default activation steepnesses are {0.25, 0.50, 0.75, 1.00}.
+
+ See also:
+ <set_cascade_activation_steepnesses>, <get_cascade_activation_steepnesses_count>,
+ <fann_get_cascade_activation_steepnesses>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ fann_type *get_cascade_activation_steepnesses()
+ {
+ fann_type *activation_steepnesses = NULL;
+ if (ann != NULL)
+ {
+ activation_steepnesses = fann_get_cascade_activation_steepnesses(ann);
+ }
+ return activation_steepnesses;
+ }
+
+ /* Method: set_cascade_activation_steepnesses
+
+ Sets the array of cascade candidate activation steepnesses. The array must be just as long
+ as defined by the count.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this array.
+
+ See also:
+ <get_cascade_activation_steepnesses>, <get_cascade_activation_steepnesses_count>,
+ <fann_set_cascade_activation_steepnesses>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_activation_steepnesses(fann_type *cascade_activation_steepnesses,
+ unsigned int cascade_activation_steepnesses_count)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_activation_steepnesses(ann,
+ cascade_activation_steepnesses, cascade_activation_steepnesses_count);
+ }
+ }
+
+ /* Method: get_cascade_num_candidate_groups
+
+ The number of candidate groups is the number of groups of identical candidates which will be used
+ during training.
+
+ This number can be used to have more candidates without having to define new parameters for the candidates.
+
+ See <get_cascade_num_candidates> for a description of which candidate neurons will be
+ generated by this parameter.
+
+ The default number of candidate groups is 2.
+
+ See also:
+ <set_cascade_num_candidate_groups>, <fann_get_cascade_num_candidate_groups>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ unsigned int get_cascade_num_candidate_groups()
+ {
+ unsigned int num_candidate_groups = 0;
+ if (ann != NULL)
+ {
+ num_candidate_groups = fann_get_cascade_num_candidate_groups(ann);
+ }
+ return num_candidate_groups;
+ }
+
+ /* Method: set_cascade_num_candidate_groups
+
+ Sets the number of candidate groups.
+
+ See also:
+ <get_cascade_num_candidate_groups>, <fann_set_cascade_num_candidate_groups>
+
+ This function appears in FANN >= 2.0.0.
+ */
+ void set_cascade_num_candidate_groups(unsigned int cascade_num_candidate_groups)
+ {
+ if (ann != NULL)
+ {
+ fann_set_cascade_num_candidate_groups(ann, cascade_num_candidate_groups);
+ }
+ }
+
+ /*********************************************************************/
+
+#ifndef FIXEDFANN
+ /* Method: scale_train
+
+ Scale input and output data based on previously calculated parameters.
+
+ See also:
+ <descale_train>, <fann_scale_train>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void scale_train(training_data &data)
+ {
+ if (ann != NULL)
+ {
+ fann_scale_train(ann, data.train_data);
+ }
+ }
+
+ /* Method: descale_train
+
+ Descale input and output data based on previously calculated parameters.
+
+ See also:
+ <scale_train>, <fann_descale_train>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void descale_train(training_data &data)
+ {
+ if (ann != NULL)
+ {
+ fann_descale_train(ann, data.train_data);
+ }
+ }
+
+ /* Method: set_input_scaling_params
+
+ Calculate scaling parameters for future use based on training data.
+
+ See also:
+ <set_output_scaling_params>, <fann_set_input_scaling_params>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ bool set_input_scaling_params(const training_data &data, float new_input_min, float new_input_max)
+ {
+ bool status = false;
+ if (ann != NULL)
+ {
+ status = (fann_set_input_scaling_params(ann, data.train_data, new_input_min, new_input_max) != -1);
+ }
+ return status;
+ }
+
+ /* Method: set_output_scaling_params
+
+ Calculate scaling parameters for future use based on training data.
+
+ See also:
+ <set_input_scaling_params>, <fann_set_output_scaling_params>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ bool set_output_scaling_params(const training_data &data, float new_output_min, float new_output_max)
+ {
+ bool status = false;
+ if (ann != NULL)
+ {
+ status = (fann_set_output_scaling_params(ann, data.train_data, new_output_min, new_output_max) != -1);
+ }
+ return status;
+ }
+
+ /* Method: set_scaling_params
+
+ Calculate scaling parameters for future use based on training data.
+
+ See also:
+ <clear_scaling_params>, <fann_set_scaling_params>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ bool set_scaling_params(const training_data &data,
+ float new_input_min, float new_input_max, float new_output_min, float new_output_max)
+ {
+ bool status = false;
+ if (ann != NULL)
+ {
+ status = (fann_set_scaling_params(ann, data.train_data,
+ new_input_min, new_input_max, new_output_min, new_output_max) != -1);
+ }
+ return status;
+ }
+
+ /* Method: clear_scaling_params
+
+ Clears scaling parameters.
+
+ See also:
+ <set_scaling_params>, <fann_clear_scaling_params>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ bool clear_scaling_params()
+ {
+ bool status = false;
+ if (ann != NULL)
+ {
+ status = (fann_clear_scaling_params(ann) != -1);
+ }
+ return status;
+ }
+
+ /* Method: scale_input
+
+ Scales the data in the input vector before feeding it to the network, based on previously calculated parameters.
+
+ See also:
+ <descale_input>, <scale_output>, <fann_scale_input>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void scale_input(fann_type *input_vector)
+ {
+ if (ann != NULL)
+ {
+ fann_scale_input(ann, input_vector );
+ }
+ }
+
+ /* Method: scale_output
+
+ Scales the data in the output vector before feeding it to the network, based on previously calculated parameters.
+
+ See also:
+ <descale_output>, <scale_input>, <fann_scale_output>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void scale_output(fann_type *output_vector)
+ {
+ if (ann != NULL)
+ {
+ fann_scale_output(ann, output_vector );
+ }
+ }
+
+ /* Method: descale_input
+
+ Descales the data in the input vector after getting it from the network, based on previously calculated parameters.
+
+ See also:
+ <scale_input>, <descale_output>, <fann_descale_input>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void descale_input(fann_type *input_vector)
+ {
+ if (ann != NULL)
+ {
+ fann_descale_input(ann, input_vector );
+ }
+ }
+
+ /* Method: descale_output
+
+ Descales the data in the output vector after getting it from the network, based on previously calculated parameters.
+
+ See also:
+ <scale_output>, <descale_input>, <fann_descale_output>
+
+ This function appears in FANN >= 2.1.0.
+ */
+ void descale_output(fann_type *output_vector)
+ {
+ if (ann != NULL)
+ {
+ fann_descale_output(ann, output_vector );
+ }
+ }
+
+#endif /* FIXEDFANN */
+
+ /*********************************************************************/
+
+ /* Method: set_error_log
+
+ Change where errors are logged to.
+
+ If log_file is NULL, no errors will be printed.
+
+ If the neural_net is empty, i.e. ann is NULL, the default log will be set.
+ The default log is the log used when creating a neural_net.
+ This default log will also be the default for all new structs
+ that are created.
+
+ The default behavior is to log them to stderr.
+
+ See also:
+ <struct fann_error>, <fann_set_error_log>
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void set_error_log(FILE *log_file)
+ {
+ fann_set_error_log(reinterpret_cast<struct fann_error *>(ann), log_file);
+ }
+
+ /* Method: get_errno
+
+ Returns the last error number.
+
+ See also:
+ <fann_errno_enum>, <fann_reset_errno>, <fann_get_errno>
+
+ This function appears in FANN >= 1.1.0.
+ */
+ unsigned int get_errno()
+ {
+ return fann_get_errno(reinterpret_cast<struct fann_error *>(ann));
+ }
+
+ /* Method: reset_errno
+
+ Resets the last error number.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void reset_errno()
+ {
+ fann_reset_errno(reinterpret_cast<struct fann_error *>(ann));
+ }
+
+ /* Method: reset_errstr
+
+ Resets the last error string.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void reset_errstr()
+ {
+ fann_reset_errstr(reinterpret_cast<struct fann_error *>(ann));
+ }
+
+ /* Method: get_errstr
+
+ Returns the last error string.
+
+ This function calls <fann_reset_errno> and <fann_reset_errstr>.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ std::string get_errstr()
+ {
+ return std::string(fann_get_errstr(reinterpret_cast<struct fann_error *>(ann)));
+ }
+
+ /* Method: print_error
+
+ Prints the last error to stderr.
+
+ This function appears in FANN >= 1.1.0.
+ */
+ void print_error()
+ {
+ fann_print_error(reinterpret_cast<struct fann_error *>(ann));
+ }
+
+ /*********************************************************************/
+
+ private:
+ // Structure used by set_callback to hold information about a user callback
+ typedef struct user_context_type
+ {
+ callback_type user_callback; // Pointer to user callback function
+ void *user_data; // Arbitrary data pointer passed to the callback
+ neural_net *net; // This pointer for the neural network
+ } user_context;
+
+ // Internal callback used to convert from pointers to class references
+ static int FANN_API internal_callback(struct fann *ann, struct fann_train_data *train,
+ unsigned int max_epochs, unsigned int epochs_between_reports, float desired_error, unsigned int epochs)
+ {
+ user_context *user_data = static_cast<user_context *>(fann_get_user_data(ann));
+ if (user_data != NULL)
+ {
+ FANN::training_data data;
+ data.train_data = train;
+
+ int result = (*user_data->user_callback)(*user_data->net,
+ data, max_epochs, epochs_between_reports, desired_error, epochs, user_data);
+
+ data.train_data = NULL; // Prevent automatic cleanup
+ return result;
+ }
+ else
+ {
+ return -1; // This should not occur except if out of memory
+ }
+ }
+ protected:
+ // Pointer to the encapsulated fann neural net structure
+ struct fann *ann;
+ };
+
+ /*************************************************************************/
+}
+
+#endif /* FANN_CPP_H_INCLUDED */
diff --git a/include/fann_data.h b/include/fann_data.h
new file mode 100644
index 0000000..b8e90fd
--- /dev/null
+++ b/include/fann_data.h
@@ -0,0 +1,824 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_data_h__
+#define __fann_data_h__
+
+#include <stdio.h>
+
+/* Section: FANN Datatypes
+
+ The two main datatypes used in the fann library are <struct fann>,
+ which represents an artificial neural network, and <struct fann_train_data>,
+ which represents training data.
+ */
+
+
+/* Type: fann_type
+ fann_type is the type used for the weights, inputs and outputs of the neural network.
+
+ fann_type is defined as:
+ float - if you include fann.h or floatfann.h
+ double - if you include doublefann.h
+ int - if you include fixedfann.h (please be aware that fixed point usage is
+ only to be used during execution, and not during training).
+*/
+
+/* Enum: fann_train_enum
+ The training algorithms used when training on <struct fann_train_data> with functions like
+ <fann_train_on_data> or <fann_train_on_file>. Incremental training alters the weights
+ after each input pattern is presented, while batch training alters the weights only once,
+ after all the patterns have been presented.
+
+ FANN_TRAIN_INCREMENTAL - Standard backpropagation algorithm, where the weights are
+ updated after each training pattern. This means that the weights are updated many
+ times during a single epoch. For this reason some problems will train very fast with
+ this algorithm, while other more advanced problems will not train very well.
+ FANN_TRAIN_BATCH - Standard backpropagation algorithm, where the weights are updated after
+ calculating the mean square error for the whole training set. This means that the weights
+ are only updated once during an epoch. For this reason some problems will train more slowly with
+ this algorithm. But since the mean square error is calculated more correctly than in
+ incremental training, some problems will reach better solutions with this algorithm.
+ FANN_TRAIN_RPROP - A more advanced batch training algorithm which achieves good results
+ for many problems. The RPROP training algorithm is adaptive and therefore does not
+ use the learning_rate. Some other parameters can however be set to change the way the
+ RPROP algorithm works, but changing them is only recommended for users with insight into
+ how the RPROP training algorithm works. The RPROP training algorithm is described by
+ [Riedmiller and Braun, 1993], but the actual learning algorithm used here is the
+ iRPROP- training algorithm described by [Igel and Husken, 2000], which
+ is a variant of the standard RPROP training algorithm.
+ FANN_TRAIN_QUICKPROP - A more advanced batch training algorithm which achieves good results
+ for many problems. The quickprop training algorithm uses the learning_rate parameter
+ along with other more advanced parameters; changing these advanced parameters is only
+ recommended for users with insight into how the quickprop training algorithm works.
+ The quickprop training algorithm is described by [Fahlman, 1988].
+
+ See also:
+ <fann_set_training_algorithm>, <fann_get_training_algorithm>
+*/
+enum fann_train_enum
+{
+ FANN_TRAIN_INCREMENTAL = 0,
+ FANN_TRAIN_BATCH,
+ FANN_TRAIN_RPROP,
+ FANN_TRAIN_QUICKPROP,
+ FANN_TRAIN_SARPROP
+};
+
+/* Constant: FANN_TRAIN_NAMES
+
+ Constant array consisting of the names for the training algorithms, so that the name of a
+ training algorithm can be received by:
+ (code)
+ char *name = FANN_TRAIN_NAMES[train_function];
+ (end)
+
+ See Also:
+ <fann_train_enum>
+*/
+static char const *const FANN_TRAIN_NAMES[] = {
+ "FANN_TRAIN_INCREMENTAL",
+ "FANN_TRAIN_BATCH",
+ "FANN_TRAIN_RPROP",
+ "FANN_TRAIN_QUICKPROP",
+ "FANN_TRAIN_SARPROP"
+};
+
+/* Enums: fann_activationfunc_enum
+
+ The activation functions used for the neurons during training. The activation functions
+ can either be defined for a group of neurons by <fann_set_activation_function_hidden> and
+ <fann_set_activation_function_output>, or for a single neuron by <fann_set_activation_function>.
+
+ The steepness of an activation function is defined in the same way by
+ <fann_set_activation_steepness_hidden>, <fann_set_activation_steepness_output> and <fann_set_activation_steepness>.
+
+ The functions are described with functions where:
+ * x is the input to the activation function,
+ * y is the output,
+ * s is the steepness and
+ * d is the derivative.
+
+ FANN_LINEAR - Linear activation function.
+ * span: -inf < y < inf
+ * y = x*s, d = 1*s
+ * Can NOT be used in fixed point.
+
+ FANN_THRESHOLD - Threshold activation function.
+ * x < 0 -> y = 0, x >= 0 -> y = 1
+ * Can NOT be used during training.
+
+ FANN_THRESHOLD_SYMMETRIC - Symmetric threshold activation function.
+ * x < 0 -> y = -1, x >= 0 -> y = 1
+ * Can NOT be used during training.
+
+ FANN_SIGMOID - Sigmoid activation function.
+ * One of the most used activation functions.
+ * span: 0 < y < 1
+ * y = 1/(1 + exp(-2*s*x))
+ * d = 2*s*y*(1 - y)
+
+ FANN_SIGMOID_STEPWISE - Stepwise linear approximation to sigmoid.
+ * Faster than sigmoid but a bit less precise.
+
+ FANN_SIGMOID_SYMMETRIC - Symmetric sigmoid activation function, aka. tanh.
+ * One of the most used activation functions.
+ * span: -1 < y < 1
+ * y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1
+ * d = s*(1-(y*y))
+
+ FANN_SIGMOID_SYMMETRIC_STEPWISE - Stepwise linear approximation to symmetric sigmoid.
+ * Faster than symmetric sigmoid but a bit less precise.
+
+ FANN_GAUSSIAN - Gaussian activation function.
+ * 0 when x = -inf, 1 when x = 0 and 0 when x = inf
+ * span: 0 < y < 1
+ * y = exp(-x*s*x*s)
+ * d = -2*x*s*y*s
+
+ FANN_GAUSSIAN_SYMMETRIC - Symmetric gaussian activation function.
+ * -1 when x = -inf, 1 when x = 0 and 0 when x = inf
+ * span: -1 < y < 1
+ * y = exp(-x*s*x*s)*2-1
+ * d = -2*x*s*(y+1)*s
+
+ FANN_ELLIOT - Fast (sigmoid like) activation function defined by David Elliott
+ * span: 0 < y < 1
+ * y = ((x*s) / 2) / (1 + |x*s|) + 0.5
+ * d = s*1/(2*(1+|x*s|)*(1+|x*s|))
+
+ FANN_ELLIOT_SYMMETRIC - Fast (symmetric sigmoid like) activation function defined by David Elliott
+ * span: -1 < y < 1
+ * y = (x*s) / (1 + |x*s|)
+ * d = s*1/((1+|x*s|)*(1+|x*s|))
+
+ FANN_LINEAR_PIECE - Bounded linear activation function.
+ * span: 0 <= y <= 1
+ * y = x*s, d = 1*s
+
+ FANN_LINEAR_PIECE_SYMMETRIC - Bounded linear activation function.
+ * span: -1 <= y <= 1
+ * y = x*s, d = 1*s
+
+ FANN_SIN_SYMMETRIC - Periodic sine activation function.
+ * span: -1 <= y <= 1
+ * y = sin(x*s)
+ * d = s*cos(x*s)
+
+ FANN_COS_SYMMETRIC - Periodic cosine activation function.
+ * span: -1 <= y <= 1
+ * y = cos(x*s)
+ * d = s*-sin(x*s)
+
+ FANN_SIN - Periodic sine activation function.
+ * span: 0 <= y <= 1
+ * y = sin(x*s)/2+0.5
+ * d = s*cos(x*s)/2
+
+ FANN_COS - Periodic cosine activation function.
+ * span: 0 <= y <= 1
+ * y = cos(x*s)/2+0.5
+ * d = s*-sin(x*s)/2
+
+ See also:
+ <fann_set_activation_function_layer>, <fann_set_activation_function_hidden>,
+ <fann_set_activation_function_output>, <fann_set_activation_steepness>,
+ <fann_set_activation_function>
+*/
+enum fann_activationfunc_enum
+{
+ FANN_LINEAR = 0,
+ FANN_THRESHOLD,
+ FANN_THRESHOLD_SYMMETRIC,
+ FANN_SIGMOID,
+ FANN_SIGMOID_STEPWISE,
+ FANN_SIGMOID_SYMMETRIC,
+ FANN_SIGMOID_SYMMETRIC_STEPWISE,
+ FANN_GAUSSIAN,
+ FANN_GAUSSIAN_SYMMETRIC,
+ /* Stepwise linear approximation to gaussian.
+ * Faster than gaussian but a bit less precise.
+ * NOT implemented yet.
+ */
+ FANN_GAUSSIAN_STEPWISE,
+ FANN_ELLIOT,
+ FANN_ELLIOT_SYMMETRIC,
+ FANN_LINEAR_PIECE,
+ FANN_LINEAR_PIECE_SYMMETRIC,
+ FANN_SIN_SYMMETRIC,
+ FANN_COS_SYMMETRIC,
+ FANN_SIN,
+ FANN_COS
+};
+
+/* Constant: FANN_ACTIVATIONFUNC_NAMES
+
+ Constant array consisting of the names for the activation functions, so that the name of an
+ activation function can be received by:
+ (code)
+ char *name = FANN_ACTIVATIONFUNC_NAMES[activation_function];
+ (end)
+
+ See Also:
+ <fann_activationfunc_enum>
+*/
+static char const *const FANN_ACTIVATIONFUNC_NAMES[] = {
+ "FANN_LINEAR",
+ "FANN_THRESHOLD",
+ "FANN_THRESHOLD_SYMMETRIC",
+ "FANN_SIGMOID",
+ "FANN_SIGMOID_STEPWISE",
+ "FANN_SIGMOID_SYMMETRIC",
+ "FANN_SIGMOID_SYMMETRIC_STEPWISE",
+ "FANN_GAUSSIAN",
+ "FANN_GAUSSIAN_SYMMETRIC",
+ "FANN_GAUSSIAN_STEPWISE",
+ "FANN_ELLIOT",
+ "FANN_ELLIOT_SYMMETRIC",
+ "FANN_LINEAR_PIECE",
+ "FANN_LINEAR_PIECE_SYMMETRIC",
+ "FANN_SIN_SYMMETRIC",
+ "FANN_COS_SYMMETRIC",
+ "FANN_SIN",
+ "FANN_COS"
+};
+
+/* Enum: fann_errorfunc_enum
+ Error function used during training.
+
+ FANN_ERRORFUNC_LINEAR - Standard linear error function.
+ FANN_ERRORFUNC_TANH - Tanh error function, usually better
+ but can require a lower learning rate. This error function aggressively targets outputs that
+ differ greatly from the desired values, while largely ignoring outputs that differ only slightly.
+ This error function is not recommended for cascade training and incremental training.
+
+ See also:
+ <fann_set_train_error_function>, <fann_get_train_error_function>
+*/
+enum fann_errorfunc_enum
+{
+ FANN_ERRORFUNC_LINEAR = 0,
+ FANN_ERRORFUNC_TANH
+};
+
+/* Constant: FANN_ERRORFUNC_NAMES
+
+ Constant array consisting of the names for the training error functions, so that the name of an
+ error function can be received by:
+ (code)
+ char *name = FANN_ERRORFUNC_NAMES[error_function];
+ (end)
+
+ See Also:
+ <fann_errorfunc_enum>
+*/
+static char const *const FANN_ERRORFUNC_NAMES[] = {
+ "FANN_ERRORFUNC_LINEAR",
+ "FANN_ERRORFUNC_TANH"
+};
+
+/* Enum: fann_stopfunc_enum
+ Stop criteria used during training.
+
+ FANN_STOPFUNC_MSE - Stop criterion is the mean square error (MSE) value.
+ FANN_STOPFUNC_BIT - Stop criterion is the number of bits that fail. The number of bits means the
+ number of output values which differ more than the bit fail limit
+ (see <fann_get_bit_fail_limit>, <fann_set_bit_fail_limit>).
+ The bits are counted over all of the training data, so this number can be higher than
+ the number of training patterns.
+
+ See also:
+ <fann_set_train_stop_function>, <fann_get_train_stop_function>
+*/
+enum fann_stopfunc_enum
+{
+ FANN_STOPFUNC_MSE = 0,
+ FANN_STOPFUNC_BIT
+};
+
+/* Constant: FANN_STOPFUNC_NAMES
+
+ Constant array consisting of the names for the training stop functions, so that the name of a
+ stop function can be received by:
+ (code)
+ char *name = FANN_STOPFUNC_NAMES[stop_function];
+ (end)
+
+ See Also:
+ <fann_stopfunc_enum>
+*/
+static char const *const FANN_STOPFUNC_NAMES[] = {
+ "FANN_STOPFUNC_MSE",
+ "FANN_STOPFUNC_BIT"
+};
+
+/* Enum: fann_network_type_enum
+
+ Definition of network types used by <fann_get_network_type>
+
+ FANN_NETTYPE_LAYER - Each layer only has connections to the next layer
+ FANN_NETTYPE_SHORTCUT - Each layer has connections to all following layers
+
+ See Also:
+ <fann_get_network_type>
+
+ This enumeration appears in FANN >= 2.1.0
+*/
+enum fann_nettype_enum
+{
+ FANN_NETTYPE_LAYER = 0, /* Each layer only has connections to the next layer */
+ FANN_NETTYPE_SHORTCUT /* Each layer has connections to all following layers */
+};
+
+/* Constant: FANN_NETTYPE_NAMES
+
+ Constant array consisting of the names for the network types, so that the name of a
+ network type can be received by:
+ (code)
+ char *network_type_name = FANN_NETTYPE_NAMES[fann_get_network_type(ann)];
+ (end)
+
+ See Also:
+ <fann_get_network_type>
+
+ This constant appears in FANN >= 2.1.0
+*/
+static char const *const FANN_NETTYPE_NAMES[] = {
+ "FANN_NETTYPE_LAYER",
+ "FANN_NETTYPE_SHORTCUT"
+};
+
+
+/* forward declarations for use with the callback */
+struct fann;
+struct fann_train_data;
+/* Type: fann_callback_type
+ This callback function can be called during training when using <fann_train_on_data>,
+ <fann_train_on_file> or <fann_cascadetrain_on_data>.
+
+ >typedef int (FANN_API * fann_callback_type) (struct fann *ann, struct fann_train_data *train,
+ > unsigned int max_epochs,
+ > unsigned int epochs_between_reports,
+ > float desired_error, unsigned int epochs);
+
+ The callback can be set by using <fann_set_callback> and is very useful for doing custom
+ things during training. It is recommended to use this function when implementing custom
+ training procedures, or when visualizing the training in a GUI etc. The parameters which the
+ callback function takes are the parameters given to <fann_train_on_data>, plus an epochs
+ parameter which tells how many epochs the training has taken so far.
+
+ The callback function should return an integer; if the callback function returns -1, the training
+ will terminate.
+
+ Example of a callback function:
+ >int FANN_API test_callback(struct fann *ann, struct fann_train_data *train,
+ > unsigned int max_epochs, unsigned int epochs_between_reports,
+ > float desired_error, unsigned int epochs)
+ >{
+ > printf("Epochs %8d. MSE: %.5f. Desired-MSE: %.5f\n", epochs, fann_get_MSE(ann), desired_error);
+ > return 0;
+ >}
+
+ See also:
+ <fann_set_callback>, <fann_train_on_data>
+ */
+FANN_EXTERNAL typedef int (FANN_API * fann_callback_type) (struct fann *ann, struct fann_train_data *train,
+ unsigned int max_epochs,
+ unsigned int epochs_between_reports,
+ float desired_error, unsigned int epochs);
+
+
+/* ----- Data structures -----
+ * No data within these structures should be altered directly by the user.
+ */
+
+struct fann_neuron
+{
+ /* Index to the first and last connection
+ * (actually the last is a past end index)
+ */
+ unsigned int first_con;
+ unsigned int last_con;
+ /* The sum of the inputs multiplied with the weights */
+ fann_type sum;
+ /* The value of the activation function applied to the sum */
+ fann_type value;
+ /* The steepness of the activation function */
+ fann_type activation_steepness;
+ /* Used to choose which activation function to use */
+ enum fann_activationfunc_enum activation_function;
+#ifdef __GNUC__
+} __attribute__ ((packed));
+#else
+};
+#endif
+
+/* A single layer in the neural network.
+ */
+struct fann_layer
+{
+ /* A pointer to the first neuron in the layer
+ * When allocated, all the neurons in all the layers are actually
+ * in one long array; this is because we want to easily clear all
+ * the neurons at once.
+ */
+ struct fann_neuron *first_neuron;
+
+ /* A pointer to the neuron past the last neuron in the layer */
+ /* the number of neurons is last_neuron - first_neuron */
+ struct fann_neuron *last_neuron;
+};
+
+/* Struct: struct fann_error
+
+ Structure used to store error-related information; both
+ <struct fann> and <struct fann_train_data> can be cast to this type.
+
+ See also:
+ <fann_set_error_log>, <fann_get_errno>
+*/
+struct fann_error
+{
+ enum fann_errno_enum errno_f;
+ FILE *error_log;
+ char *errstr;
+};
+
+
+/* Struct: struct fann
+ The fast artificial neural network (fann) structure.
+
+ Data within this structure should never be accessed directly, but only by using the
+ *fann_get_...* and *fann_set_...* functions.
+
+ The fann structure is created using one of the *fann_create_...* functions and each of
+ the functions which operates on the structure takes *struct fann * ann* as the first parameter.
+
+ See also:
+ <fann_create_standard>, <fann_destroy>
+ */
+struct fann
+{
+ /* The type of error that last occured. */
+ enum fann_errno_enum errno_f;
+
+ /* Where to log error messages. */
+ FILE *error_log;
+
+ /* A string representation of the last error. */
+ char *errstr;
+
+ /* the learning rate of the network */
+ float learning_rate;
+
+ /* The learning momentum used for backpropagation algorithm. */
+ float learning_momentum;
+
+ /* the connection rate of the network
+ * between 0 and 1, 1 meaning fully connected
+ */
+ float connection_rate;
+
+ /* is 1 if shortcut connections are used in the ann, otherwise 0.
+ * Shortcut connections are connections that skip layers.
+ * A fully connected ann with shortcut connections is an ann where
+ * neurons have connections to all neurons in all later layers.
+ */
+ enum fann_nettype_enum network_type;
+
+ /* pointer to the first layer (input layer) in an array of all the layers,
+ * including the input and output layers
+ */
+ struct fann_layer *first_layer;
+
+ /* pointer to the layer past the last layer in an array of all the layers,
+ * including the input and output layers
+ */
+ struct fann_layer *last_layer;
+
+ /* Total number of neurons.
+ * very useful, because the actual neurons are allocated in one long array
+ */
+ unsigned int total_neurons;
+
+ /* Number of input neurons (not calculating bias) */
+ unsigned int num_input;
+
+ /* Number of output neurons (not calculating bias) */
+ unsigned int num_output;
+
+ /* The weight array */
+ fann_type *weights;
+
+ /* The connection array */
+ struct fann_neuron **connections;
+
+ /* Used to contain the errors used during training
+ * Is allocated during first training session,
+ * which means that if we do not train, it is never allocated.
+ */
+ fann_type *train_errors;
+
+ /* Training algorithm used when calling fann_train_on_..
+ */
+ enum fann_train_enum training_algorithm;
+
+#ifdef FIXEDFANN
+ /* the decimal_point, used for shifting the fix point
+ * in fixed point integer operations.
+ */
+ unsigned int decimal_point;
+
+ /* the multiplier, used for multiplying the fix point
+ * in fixed point integer operations.
+ * Only used in special cases, since the decimal_point is much faster.
+ */
+ unsigned int multiplier;
+
+ /* When chosen (or when in fixed point), the sigmoid function is
+ * calculated as a stepwise linear function. In the
+ * activation_results array, the result is saved, and in the
+ * two values arrays, the values that give the results are saved.
+ */
+ fann_type sigmoid_results[6];
+ fann_type sigmoid_values[6];
+ fann_type sigmoid_symmetric_results[6];
+ fann_type sigmoid_symmetric_values[6];
+#endif
+
+ /* Total number of connections.
+ * Very useful, because the actual connections
+ * are allocated in one long array
+ */
+ unsigned int total_connections;
+
+ /* used to store outputs in */
+ fann_type *output;
+
+ /* the number of data used to calculate the mean square error.
+ */
+ unsigned int num_MSE;
+
+ /* the total error value.
+ * the real mean square error is MSE_value/num_MSE
+ */
+ float MSE_value;
+
+ /* The number of outputs which would fail (only valid for classification problems)
+ */
+ unsigned int num_bit_fail;
+
+ /* The maximum difference between the actual output and the expected output
+ * which is accepted when counting the bit fails.
+ * This difference is multiplied by two when dealing with symmetric activation functions,
+ * so that symmetric and not symmetric activation functions can use the same limit.
+ */
+ fann_type bit_fail_limit;
+
+ /* The error function used during training. (default FANN_ERRORFUNC_TANH)
+ */
+ enum fann_errorfunc_enum train_error_function;
+
+ /* The stop function used during training. (default FANN_STOPFUNC_MSE)
+ */
+ enum fann_stopfunc_enum train_stop_function;
+
+ /* The callback function used during training. (default NULL)
+ */
+ fann_callback_type callback;
+
+ /* A pointer to user defined data. (default NULL)
+ */
+ void *user_data;
+
+ /* Variables for use with Cascade Correlation */
+
+ /* The error must change by at least this
+ * fraction of its old value to count as a
+ * significant change.
+ */
+ float cascade_output_change_fraction;
+
+ /* No change in this number of epochs will cause
+ * stagnation.
+ */
+ unsigned int cascade_output_stagnation_epochs;
+
+ /* The error must change by at least this
+ * fraction of its old value to count as a
+ * significant change.
+ */
+ float cascade_candidate_change_fraction;
+
+ /* No change in this number of epochs will cause
+ * stagnation.
+ */
+ unsigned int cascade_candidate_stagnation_epochs;
+
+ /* The current best candidate, which will be installed.
+ */
+ unsigned int cascade_best_candidate;
+
+ /* The upper limit for a candidate score
+ */
+ fann_type cascade_candidate_limit;
+
+ /* Scale of copied candidate output weights
+ */
+ fann_type cascade_weight_multiplier;
+
+ /* Maximum epochs to train the output neurons during cascade training
+ */
+ unsigned int cascade_max_out_epochs;
+
+ /* Maximum epochs to train the candidate neurons during cascade training
+ */
+ unsigned int cascade_max_cand_epochs;
+
+ /* Minimum epochs to train the output neurons during cascade training
+ */
+ unsigned int cascade_min_out_epochs;
+
+ /* Minimum epochs to train the candidate neurons during cascade training
+ */
+ unsigned int cascade_min_cand_epochs;
+
+ /* An array consisting of the activation functions used when doing
+ * cascade training.
+ */
+ enum fann_activationfunc_enum *cascade_activation_functions;
+
+ /* The number of elements in the cascade_activation_functions array.
+ */
+ unsigned int cascade_activation_functions_count;
+
+ /* An array consisting of the steepnesses used during cascade training.
+ */
+ fann_type *cascade_activation_steepnesses;
+
+ /* The number of elements in the cascade_activation_steepnesses array.
+ */
+ unsigned int cascade_activation_steepnesses_count;
+
+ /* The number of candidates of each type that will be present.
+ * The actual number of candidates is then
+ * cascade_activation_functions_count *
+ * cascade_activation_steepnesses_count *
+ * cascade_num_candidate_groups
+ */
+ unsigned int cascade_num_candidate_groups;
+
+ /* An array consisting of the score of the individual candidates,
+ * which is used to decide which candidate is the best
+ */
+ fann_type *cascade_candidate_scores;
+
+ /* The number of allocated neurons during cascade correlation algorithms.
+ * This number might be higher than the actual number of neurons to avoid
+ * allocating new space too often.
+ */
+ unsigned int total_neurons_allocated;
+
+ /* The number of allocated connections during cascade correlation algorithms.
+ * This number might be higher than the actual number of connections to avoid
+ * allocating new space too often.
+ */
+ unsigned int total_connections_allocated;
+
+ /* Variables for use with Quickprop training */
+
+ /* Decay is used to keep the weights from growing too large */
+ float quickprop_decay;
+
+ /* Mu is a factor used to increase and decrease the stepsize */
+ float quickprop_mu;
+
+ /* Variables for use with RPROP training */
+
+ /* Tells how much the stepsize should increase during learning */
+ float rprop_increase_factor;
+
+ /* Tells how much the stepsize should decrease during learning */
+ float rprop_decrease_factor;
+
+ /* The minimum stepsize */
+ float rprop_delta_min;
+
+ /* The maximum stepsize */
+ float rprop_delta_max;
+
+ /* The initial stepsize */
+ float rprop_delta_zero;
+
+ /* Defines how much the weights are constrained to smaller values at the beginning */
+ float sarprop_weight_decay_shift;
+
+ /* Decides if the stepsize is too big with regard to the error */
+ float sarprop_step_error_threshold_factor;
+
+ /* Defines how much the stepsize is influenced by the error */
+ float sarprop_step_error_shift;
+
+ /* Defines how much the epoch influences weight decay and noise */
+ float sarprop_temperature;
+
+ /* Current training epoch */
+ unsigned int sarprop_epoch;
+
+ /* Used to contain the slope errors used during batch training
+ * Is allocated during first training session,
+ * which means that if we do not train, it is never allocated.
+ */
+ fann_type *train_slopes;
+
+ /* The previous step taken by the quickprop/rprop procedures.
+ * Not allocated if not used.
+ */
+ fann_type *prev_steps;
+
+ /* The slope values used by the quickprop/rprop procedures.
+ * Not allocated if not used.
+ */
+ fann_type *prev_train_slopes;
+
+ /* The last delta applied to a connection weight.
+ * This is used for the momentum term in the backpropagation algorithm.
+ * Not allocated if not used.
+ */
+ fann_type *prev_weights_deltas;
+
+#ifndef FIXEDFANN
+ /* Arithmetic mean used to remove steady component in input data. */
+ float *scale_mean_in;
+
+ /* Standard deviation used to normalize input data (mostly to [-1;1]). */
+ float *scale_deviation_in;
+
+ /* User-defined new minimum for input data.
+ * Resulting data values may be less than user-defined minimum.
+ */
+ float *scale_new_min_in;
+
+ /* Used to scale data to user-defined new maximum for input data.
+ * Resulting data values may be greater than user-defined maximum.
+ */
+ float *scale_factor_in;
+
+ /* Arithmetic mean used to remove steady component in output data. */
+ float *scale_mean_out;
+
+ /* Standard deviation used to normalize output data (mostly to [-1;1]). */
+ float *scale_deviation_out;
+
+ /* User-defined new minimum for output data.
+ * Resulting data values may be less than user-defined minimum.
+ */
+ float *scale_new_min_out;
+
+ /* Used to scale data to user-defined new maximum for output data.
+ * Resulting data values may be greater than user-defined maximum.
+ */
+ float *scale_factor_out;
+#endif
+};
+
+/* Type: fann_connection
+
+ Describes a connection between two neurons and its weight
+
+ from_neuron - Unique number used to identify source neuron
+ to_neuron - Unique number used to identify destination neuron
+ weight - The numerical value of the weight
+
+ See Also:
+ <fann_get_connection_array>, <fann_set_weight_array>
+
+ This structure appears in FANN >= 2.1.0
+*/
+struct fann_connection
+{
+ /* Unique number used to identify source neuron */
+ unsigned int from_neuron;
+ /* Unique number used to identify destination neuron */
+ unsigned int to_neuron;
+ /* The numerical value of the weight */
+ fann_type weight;
+};
+
+#endif
diff --git a/include/fann_error.h b/include/fann_error.h
new file mode 100644
index 0000000..0e882df
--- /dev/null
+++ b/include/fann_error.h
@@ -0,0 +1,165 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_error_h__
+#define __fann_error_h__
+
+#include <stdio.h>
+
+#define FANN_ERRSTR_MAX 128
+struct fann_error;
+
+/* Section: FANN Error Handling
+
+ Errors from the fann library are usually reported on stderr.
+ It is however possible to redirect these error messages to a file,
+ or completely ignore them by the <fann_set_error_log> function.
+
+ It is also possible to inspect the last error message by using the
+ <fann_get_errno> and <fann_get_errstr> functions.
+ */
+
+/* Enum: fann_errno_enum
+ Used to define error events on <struct fann> and <struct fann_train_data>.
+
+ See also:
+ <fann_get_errno>, <fann_reset_errno>, <fann_get_errstr>
+
+ FANN_E_NO_ERROR - No error
+ FANN_E_CANT_OPEN_CONFIG_R - Unable to open configuration file for reading
+ FANN_E_CANT_OPEN_CONFIG_W - Unable to open configuration file for writing
+ FANN_E_WRONG_CONFIG_VERSION - Wrong version of configuration file
+ FANN_E_CANT_READ_CONFIG - Error reading info from configuration file
+ FANN_E_CANT_READ_NEURON - Error reading neuron info from configuration file
+ FANN_E_CANT_READ_CONNECTIONS - Error reading connections from configuration file
+ FANN_E_WRONG_NUM_CONNECTIONS - Number of connections not equal to the number expected
+ FANN_E_CANT_OPEN_TD_W - Unable to open train data file for writing
+ FANN_E_CANT_OPEN_TD_R - Unable to open train data file for reading
+ FANN_E_CANT_READ_TD - Error reading training data from file
+ FANN_E_CANT_ALLOCATE_MEM - Unable to allocate memory
+ FANN_E_CANT_TRAIN_ACTIVATION - Unable to train with the selected activation function
+ FANN_E_CANT_USE_ACTIVATION - Unable to use the selected activation function
+ FANN_E_TRAIN_DATA_MISMATCH - Irreconcilable differences between two <struct fann_train_data> structures
+ FANN_E_CANT_USE_TRAIN_ALG - Unable to use the selected training algorithm
+ FANN_E_TRAIN_DATA_SUBSET - Trying to take a subset which is not within the training set
+ FANN_E_INDEX_OUT_OF_BOUND - Index is out of bound
+ FANN_E_SCALE_NOT_PRESENT - Scaling parameters not present
+ FANN_E_INPUT_NO_MATCH - The number of input neurons in the ann and data don't match
+ FANN_E_OUTPUT_NO_MATCH - The number of output neurons in the ann and data don't match
+*/
+enum fann_errno_enum
+{
+ FANN_E_NO_ERROR = 0,
+ FANN_E_CANT_OPEN_CONFIG_R,
+ FANN_E_CANT_OPEN_CONFIG_W,
+ FANN_E_WRONG_CONFIG_VERSION,
+ FANN_E_CANT_READ_CONFIG,
+ FANN_E_CANT_READ_NEURON,
+ FANN_E_CANT_READ_CONNECTIONS,
+ FANN_E_WRONG_NUM_CONNECTIONS,
+ FANN_E_CANT_OPEN_TD_W,
+ FANN_E_CANT_OPEN_TD_R,
+ FANN_E_CANT_READ_TD,
+ FANN_E_CANT_ALLOCATE_MEM,
+ FANN_E_CANT_TRAIN_ACTIVATION,
+ FANN_E_CANT_USE_ACTIVATION,
+ FANN_E_TRAIN_DATA_MISMATCH,
+ FANN_E_CANT_USE_TRAIN_ALG,
+ FANN_E_TRAIN_DATA_SUBSET,
+ FANN_E_INDEX_OUT_OF_BOUND,
+ FANN_E_SCALE_NOT_PRESENT,
+ FANN_E_INPUT_NO_MATCH,
+ FANN_E_OUTPUT_NO_MATCH
+};
+
+/* Group: Error Handling */
+
+/* Function: fann_set_error_log
+
+ Change where errors are logged to. Both <struct fann> and <struct fann_data> can be
+ cast to <struct fann_error>, so this function can be used to set either of these.
+
+ If log_file is NULL, no errors will be printed.
+
+ If errdata is NULL, the default log will be set. The default log is the log used when creating
+ <struct fann> and <struct fann_data>. This default log will also be the default for all new structs
+ that are created.
+
+ The default behavior is to log them to stderr.
+
+ See also:
+ <struct fann_error>
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_error_log(struct fann_error *errdat, FILE * log_file);
+
+
+/* Function: fann_get_errno
+
+ Returns the last error number.
+
+ See also:
+ <fann_errno_enum>, <fann_reset_errno>
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL enum fann_errno_enum FANN_API fann_get_errno(struct fann_error *errdat);
+
+
+/* Function: fann_reset_errno
+
+ Resets the last error number.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL void FANN_API fann_reset_errno(struct fann_error *errdat);
+
+
+/* Function: fann_reset_errstr
+
+ Resets the last error string.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL void FANN_API fann_reset_errstr(struct fann_error *errdat);
+
+
+/* Function: fann_get_errstr
+
+ Returns the last error string.
+
+ This function calls <fann_reset_errno> and <fann_reset_errstr>
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL char *FANN_API fann_get_errstr(struct fann_error *errdat);
+
+
+/* Function: fann_print_error
+
+ Prints the last error to stderr.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL void FANN_API fann_print_error(struct fann_error *errdat);
+
+extern FILE * fann_default_error_log;
+
+#endif
diff --git a/include/fann_internal.h b/include/fann_internal.h
new file mode 100644
index 0000000..bdf60d4
--- /dev/null
+++ b/include/fann_internal.h
@@ -0,0 +1,152 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_internal_h__
+#define __fann_internal_h__
+/* internal include file, not to be included directly
+ */
+
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "fann_data.h"
+
+#define FANN_FIX_VERSION "FANN_FIX_2.0"
+#define FANN_FLO_VERSION "FANN_FLO_2.1"
+
+#ifdef FIXEDFANN
+#define FANN_CONF_VERSION FANN_FIX_VERSION
+#else
+#define FANN_CONF_VERSION FANN_FLO_VERSION
+#endif
+
+#define FANN_GET(type, name) \
+FANN_EXTERNAL type FANN_API fann_get_ ## name(struct fann *ann) \
+{ \
+ return ann->name; \
+}
+
+#define FANN_SET(type, name) \
+FANN_EXTERNAL void FANN_API fann_set_ ## name(struct fann *ann, type value) \
+{ \
+ ann->name = value; \
+}
+
+#define FANN_GET_SET(type, name) \
+FANN_GET(type, name) \
+FANN_SET(type, name)
+
+
+struct fann_train_data;
+
+struct fann *fann_allocate_structure(unsigned int num_layers);
+void fann_allocate_neurons(struct fann *ann);
+
+void fann_allocate_connections(struct fann *ann);
+
+int fann_save_internal(struct fann *ann, const char *configuration_file,
+ unsigned int save_as_fixed);
+int fann_save_internal_fd(struct fann *ann, FILE * conf, const char *configuration_file,
+ unsigned int save_as_fixed);
+int fann_save_train_internal(struct fann_train_data *data, const char *filename,
+ unsigned int save_as_fixed, unsigned int decimal_point);
+int fann_save_train_internal_fd(struct fann_train_data *data, FILE * file, const char *filename,
+ unsigned int save_as_fixed, unsigned int decimal_point);
+
+void fann_update_stepwise(struct fann *ann);
+void fann_seed_rand();
+
+void fann_error(struct fann_error *errdat, const enum fann_errno_enum errno_f, ...);
+void fann_init_error_data(struct fann_error *errdat);
+
+struct fann *fann_create_from_fd(FILE * conf, const char *configuration_file);
+struct fann_train_data *fann_read_train_from_fd(FILE * file, const char *filename);
+
+void fann_compute_MSE(struct fann *ann, fann_type * desired_output);
+void fann_update_output_weights(struct fann *ann);
+void fann_backpropagate_MSE(struct fann *ann);
+void fann_update_weights(struct fann *ann);
+void fann_update_slopes_batch(struct fann *ann, struct fann_layer *layer_begin,
+ struct fann_layer *layer_end);
+void fann_update_weights_quickprop(struct fann *ann, unsigned int num_data,
+ unsigned int first_weight, unsigned int past_end);
+void fann_update_weights_batch(struct fann *ann, unsigned int num_data, unsigned int first_weight,
+ unsigned int past_end);
+void fann_update_weights_irpropm(struct fann *ann, unsigned int first_weight,
+ unsigned int past_end);
+void fann_update_weights_sarprop(struct fann *ann, unsigned int epoch, unsigned int first_weight,
+ unsigned int past_end);
+
+void fann_clear_train_arrays(struct fann *ann);
+
+fann_type fann_activation(struct fann * ann, unsigned int activation_function, fann_type steepness,
+ fann_type value);
+
+fann_type fann_activation_derived(unsigned int activation_function,
+ fann_type steepness, fann_type value, fann_type sum);
+
+int fann_desired_error_reached(struct fann *ann, float desired_error);
+
+/* Some functions for cascade */
+int fann_train_outputs(struct fann *ann, struct fann_train_data *data, float desired_error);
+
+float fann_train_outputs_epoch(struct fann *ann, struct fann_train_data *data);
+
+int fann_train_candidates(struct fann *ann, struct fann_train_data *data);
+
+fann_type fann_train_candidates_epoch(struct fann *ann, struct fann_train_data *data);
+
+void fann_install_candidate(struct fann *ann);
+int fann_check_input_output_sizes(struct fann *ann, struct fann_train_data *data);
+
+int fann_initialize_candidates(struct fann *ann);
+
+void fann_set_shortcut_connections(struct fann *ann);
+
+int fann_allocate_scale(struct fann *ann);
+
+/* called fann_max, in order to not interfere with predefined versions of max */
+#define fann_max(x, y) (((x) > (y)) ? (x) : (y))
+#define fann_min(x, y) (((x) < (y)) ? (x) : (y))
+#define fann_safe_free(x) {if(x) { free(x); x = NULL; }}
+#define fann_clip(x, lo, hi) (((x) < (lo)) ? (lo) : (((x) > (hi)) ? (hi) : (x)))
+#define fann_exp2(x) exp(0.69314718055994530942*(x))
+/*#define fann_clip(x, lo, hi) (x)*/
+
+#define fann_rand(min_value, max_value) (((float)(min_value))+(((float)(max_value)-((float)(min_value)))*rand()/(RAND_MAX+1.0f)))
+
+#define fann_abs(value) (((value) > 0) ? (value) : -(value))
+
+#ifdef FIXEDFANN
+
+#define fann_mult(x,y) ((x*y) >> decimal_point)
+#define fann_div(x,y) (((x) << decimal_point)/y)
+#define fann_random_weight() (fann_type)(fann_rand(0,multiplier/10))
+#define fann_random_bias_weight() (fann_type)(fann_rand((0-multiplier)/10,multiplier/10))
+
+#else
+
+#define fann_mult(x,y) (x*y)
+#define fann_div(x,y) (x/y)
+#define fann_random_weight() (fann_rand(-0.1f,0.1f))
+#define fann_random_bias_weight() (fann_rand(-0.1f,0.1f))
+
+#endif
+
+#endif
diff --git a/include/fann_io.h b/include/fann_io.h
new file mode 100644
index 0000000..1b58279
--- /dev/null
+++ b/include/fann_io.h
@@ -0,0 +1,100 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_io_h__
+#define __fann_io_h__
+
+/* Section: FANN File Input/Output
+
+ It is possible to save an entire ann to a file with <fann_save> for future loading with <fann_create_from_file>.
+ */
+
+/* Group: File Input and Output */
+
+/* Function: fann_create_from_file
+
+ Constructs a backpropagation neural network from a configuration file, which has been saved by <fann_save>.
+
+ See also:
+ <fann_save>, <fann_save_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL struct fann *FANN_API fann_create_from_file(const char *configuration_file);
+
+
+/* Function: fann_save
+
+ Save the entire network to a configuration file.
+
+ The configuration file contains all information about the neural network and enables
+ <fann_create_from_file> to create an exact copy of the neural network and all of the
+ parameters associated with the neural network.
+
+ These three parameters (<fann_set_callback>, <fann_set_error_log>,
+ <fann_set_user_data>) are *NOT* saved to the file because they cannot safely be
+ ported to a different location. Temporary parameters generated during training,
+ like <fann_get_MSE>, are not saved either.
+
+ Return:
+ The function returns 0 on success and -1 on failure.
+
+ See also:
+ <fann_create_from_file>, <fann_save_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL int FANN_API fann_save(struct fann *ann, const char *configuration_file);
+
+
+/* Function: fann_save_to_fixed
+
+ Saves the entire network to a configuration file,
+ but in fixed point format no matter which
+ format it is currently in.
+
+ This is useful for training a network in floating point,
+ and then later executing it in fixed point.
+
+ The function returns the bit position of the fix point, which
+ can be used to find out how accurate the fixed point network will be.
+ A high value indicates high precision, and a low value indicates low
+ precision.
+
+ A negative value indicates very low precision, and a very
+ strong possibility for overflow.
+ (the actual fix point will be set to 0, since a negative
+ fix point does not make sense).
+
+ Generally, a fix point lower than 6 is bad, and should be avoided.
+ The best way to avoid this is to have fewer connections to each neuron,
+ or simply fewer neurons in each layer.
+
+ The fixed point use of this network is only intended for use on machines that
+ have no floating point processor, like an iPAQ. On normal computers the floating
+ point version is actually faster.
+
+ See also:
+ <fann_create_from_file>, <fann_save>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL int FANN_API fann_save_to_fixed(struct fann *ann, const char *configuration_file);
+
+#endif
diff --git a/include/fann_train.h b/include/fann_train.h
new file mode 100644
index 0000000..972c414
--- /dev/null
+++ b/include/fann_train.h
@@ -0,0 +1,1310 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fann_train_h__
+#define __fann_train_h__
+
+/* Section: FANN Training
+
+ There are many different ways of training neural networks and the FANN library supports
+ a number of different approaches.
+
+ Two fundamentally different approaches are the most commonly used:
+
+ Fixed topology training - The size and topology of the ANN is determined in advance
+ and the training alters the weights in order to minimize the difference between
+ the desired output values and the actual output values. This kind of training is
+ supported by <fann_train_on_data>.
+
+ Evolving topology training - The training starts out with an empty ANN, consisting
+ only of input and output neurons. Hidden neurons and connections are then added during training,
+ in order to reach the same goal as for fixed topology training. This kind of training
+ is supported by <FANN Cascade Training>.
+ */
+
+/* Struct: struct fann_train_data
+ Structure used to store data, for use with training.
+
+ The data inside this structure should never be manipulated directly, but should use some
+ of the supplied functions in <Training Data Manipulation>.
+
+ The training data structure is very useful for storing data during training and testing of a
+ neural network.
+
+ See also:
+ <fann_read_train_from_file>, <fann_train_on_data>, <fann_destroy_train>
+*/
+struct fann_train_data
+{
+ enum fann_errno_enum errno_f;
+ FILE *error_log;
+ char *errstr;
+
+ unsigned int num_data;
+ unsigned int num_input;
+ unsigned int num_output;
+ fann_type **input;
+ fann_type **output;
+};
+
+/* Section: FANN Training */
+
+/* Group: Training */
+
+#ifndef FIXEDFANN
+/* Function: fann_train
+
+ Train one iteration with a set of inputs, and a set of desired outputs.
+ This training is always incremental training (see <fann_train_enum>), since
+ only one pattern is presented.
+
+ Parameters:
+ ann - The neural network structure
+ input - an array of inputs. This array must be exactly <fann_get_num_input> long.
+ desired_output - an array of desired outputs. This array must be exactly <fann_get_num_output> long.
+
+ See also:
+ <fann_train_on_data>, <fann_train_epoch>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_train(struct fann *ann, fann_type * input,
+ fann_type * desired_output);
+
+#endif /* NOT FIXEDFANN */
+
+/* Function: fann_test
+ Test with a set of inputs, and a set of desired outputs.
+ This operation updates the mean square error, but does not
+ change the network in any way.
+
+ See also:
+ <fann_test_data>, <fann_train>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL fann_type * FANN_API fann_test(struct fann *ann, fann_type * input,
+ fann_type * desired_output);
+
+/* Function: fann_get_MSE
+ Reads the mean square error from the network.
+
+ Reads the mean square error from the network. This value is calculated during
+ training or testing, and can therefore sometimes be a bit off if the weights
+ have been changed since the last calculation of the value.
+
+ See also:
+ <fann_test_data>
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_MSE(struct fann *ann);
+
+/* Function: fann_get_bit_fail
+
+ The number of fail bits, i.e. the number of output neurons which differ by more
+ than the bit fail limit (see <fann_get_bit_fail_limit>, <fann_set_bit_fail_limit>).
+ The bits are counted over all of the training data, so this number can be higher than
+ the number of training data.
+
+ This value is reset by <fann_reset_MSE> and updated by all the same functions which also
+ update the MSE value (e.g. <fann_test_data>, <fann_train_epoch>)
+
+ See also:
+ <fann_stopfunc_enum>, <fann_get_MSE>
+
+ This function appears in FANN >= 2.0.0
+*/
+FANN_EXTERNAL unsigned int FANN_API fann_get_bit_fail(struct fann *ann);
+
+/* Function: fann_reset_MSE
+ Resets the mean square error from the network.
+
+ This function also resets the number of bits that fail.
+
+ See also:
+ <fann_get_MSE>, <fann_get_bit_fail_limit>
+
+ This function appears in FANN >= 1.1.0
+ */
+FANN_EXTERNAL void FANN_API fann_reset_MSE(struct fann *ann);
+
+/* Group: Training Data Training */
+
+#ifndef FIXEDFANN
+
+/* Function: fann_train_on_data
+
+ Trains on an entire dataset, for a period of time.
+
+ This training uses the training algorithm chosen by <fann_set_training_algorithm>,
+ and the parameters set for these training algorithms.
+
+ Parameters:
+ ann - The neural network
+ data - The data, which should be used during training
+ max_epochs - The maximum number of epochs the training should continue
+ epochs_between_reports - The number of epochs between printing a status report to stdout.
+ A value of zero means no reports should be printed.
+ desired_error - The desired <fann_get_MSE> or <fann_get_bit_fail>, depending on which stop function
+ is chosen by <fann_set_train_stop_function>.
+
+ Instead of printing out reports every epochs_between_reports, a callback function can be called
+ (see <fann_set_callback>).
+
+ See also:
+ <fann_train_on_file>, <fann_train_epoch>, <Parameters>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_train_on_data(struct fann *ann, struct fann_train_data *data,
+ unsigned int max_epochs,
+ unsigned int epochs_between_reports,
+ float desired_error);
+
+/* Function: fann_train_on_file
+
+ Does the same as <fann_train_on_data>, but reads the training data directly from a file.
+
+ See also:
+ <fann_train_on_data>
+
+ This function appears in FANN >= 1.0.0.
+*/
+FANN_EXTERNAL void FANN_API fann_train_on_file(struct fann *ann, const char *filename,
+ unsigned int max_epochs,
+ unsigned int epochs_between_reports,
+ float desired_error);
+
+/* Function: fann_train_epoch
+ Train one epoch with a set of training data.
+
+ Train one epoch with the training data stored in data. One epoch is where all of
+ the training data is considered exactly once.
+
+ This function returns the MSE error as it is calculated either before or during
+ the actual training. This is not the actual MSE after the training epoch, but since
+ calculating this would require going through the entire training set once more, it is
+ more than adequate to use this value during training.
+
+ The training algorithm used by this function is chosen by the <fann_set_training_algorithm>
+ function.
+
+ See also:
+ <fann_train_on_data>, <fann_test_data>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL float FANN_API fann_train_epoch(struct fann *ann, struct fann_train_data *data);
+#endif /* NOT FIXEDFANN */
+
+/* Function: fann_test_data
+
+ Tests a set of training data and calculates the MSE for the training data.
+
+ This function updates the MSE and the bit fail values.
+
+ See also:
+ <fann_test>, <fann_get_MSE>, <fann_get_bit_fail>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL float FANN_API fann_test_data(struct fann *ann, struct fann_train_data *data);
+
+/* Group: Training Data Manipulation */
+
+/* Function: fann_read_train_from_file
+ Reads a file that stores training data.
+
+ The file must be formatted like:
+ >num_train_data num_input num_output
+ >inputdata separated by space
+ >outputdata separated by space
+ >
+ >.
+ >.
+ >.
+ >
+ >inputdata separated by space
+ >outputdata separated by space
+
+ See also:
+ <fann_train_on_data>, <fann_destroy_train>, <fann_save_train>
+
+ This function appears in FANN >= 1.0.0
+*/
+FANN_EXTERNAL struct fann_train_data *FANN_API fann_read_train_from_file(const char *filename);
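
As an illustration of the format above, a training file for the XOR problem (4 patterns, 2 inputs, 1 output) might look like:

```
4 2 1
0 0
0
0 1
1
1 0
1
1 1
0
```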
+
+
+/* Function: fann_create_train
+ Creates an empty training data struct.
+
+ See also:
+ <fann_read_train_from_file>, <fann_train_on_data>, <fann_destroy_train>,
+ <fann_save_train>
+
+ This function appears in FANN >= 2.2.0
+*/
+FANN_EXTERNAL struct fann_train_data * FANN_API fann_create_train(unsigned int num_data, unsigned int num_input, unsigned int num_output);
+
+/* Function: fann_create_train_from_callback
+ Creates the training data struct from a user-supplied function.
+ As the training data are enumerable (data 1, data 2, ...), the user must write
+ a function that receives the number of the training data set (input, output)
+ and returns the set.
+
+ Parameters:
+ num_data - The number of training data
+ num_input - The number of inputs per training data
+ num_output - The number of outputs per training data
+ user_function - The user-supplied function
+
+ Parameters for the user function:
+ num - The number of the training data set
+ num_input - The number of inputs per training data
+ num_output - The number of outputs per training data
+ input - The set of inputs
+ output - The set of desired outputs
+
+ See also:
+ <fann_read_train_from_file>, <fann_train_on_data>, <fann_destroy_train>,
+ <fann_save_train>
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL struct fann_train_data * FANN_API fann_create_train_from_callback(unsigned int num_data,
+ unsigned int num_input,
+ unsigned int num_output,
+ void (FANN_API *user_function)( unsigned int,
+ unsigned int,
+ unsigned int,
+ fann_type * ,
+ fann_type * ));
+
+/* Function: fann_destroy_train
+ Destructs the training data and properly deallocates all of the associated data.
+ Be sure to call this function when you are finished using the training data.
+
+ This function appears in FANN >= 1.0.0
+ */
+FANN_EXTERNAL void FANN_API fann_destroy_train(struct fann_train_data *train_data);
+
+
+/* Function: fann_shuffle_train_data
+
+ Shuffles training data, randomizing the order.
+ This is recommended for incremental training, while it has no influence during batch training.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL void FANN_API fann_shuffle_train_data(struct fann_train_data *train_data);
+
+#ifndef FIXEDFANN
+/* Function: fann_scale_train
+
+ Scale input and output data based on previously calculated parameters.
+
+ Parameters:
+ ann - ann for which the scaling parameters were calculated beforehand
+ data - training data that needs to be scaled
+
+ See also:
+ <fann_descale_train>, <fann_set_scaling_params>
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_scale_train( struct fann *ann, struct fann_train_data *data );
+
+/* Function: fann_descale_train
+
+ Descale input and output data based on previously calculated parameters.
+
+ Parameters:
+ ann - ann for which the scaling parameters were calculated beforehand
+ data - training data that needs to be descaled
+
+ See also:
+ <fann_scale_train>, <fann_set_scaling_params>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL void FANN_API fann_descale_train( struct fann *ann, struct fann_train_data *data );
+
+/* Function: fann_set_input_scaling_params
+
+ Calculate input scaling parameters for future use based on training data.
+
+ Parameters:
+ ann - ann for which parameters need to be calculated
+ data - training data that will be used to calculate scaling parameters
+ new_input_min - desired lower bound in input data after scaling (not strictly followed)
+ new_input_max - desired upper bound in input data after scaling (not strictly followed)
+
+ See also:
+ <fann_set_output_scaling_params>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL int FANN_API fann_set_input_scaling_params(
+ struct fann *ann,
+ const struct fann_train_data *data,
+ float new_input_min,
+ float new_input_max);
+
+/* Function: fann_set_output_scaling_params
+
+ Calculate output scaling parameters for future use based on training data.
+
+ Parameters:
+ ann - ann for which parameters need to be calculated
+ data - training data that will be used to calculate scaling parameters
+ new_output_min - desired lower bound in output data after scaling (not strictly followed)
+ new_output_max - desired upper bound in output data after scaling (not strictly followed)
+
+ See also:
+ <fann_set_input_scaling_params>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL int FANN_API fann_set_output_scaling_params(
+ struct fann *ann,
+ const struct fann_train_data *data,
+ float new_output_min,
+ float new_output_max);
+
+/* Function: fann_set_scaling_params
+
+ Calculate input and output scaling parameters for future use based on training data.
+
+ Parameters:
+ ann - ann for which parameters need to be calculated
+ data - training data that will be used to calculate scaling parameters
+ new_input_min - desired lower bound in input data after scaling (not strictly followed)
+ new_input_max - desired upper bound in input data after scaling (not strictly followed)
+ new_output_min - desired lower bound in output data after scaling (not strictly followed)
+ new_output_max - desired upper bound in output data after scaling (not strictly followed)
+
+ See also:
+ <fann_set_input_scaling_params>, <fann_set_output_scaling_params>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL int FANN_API fann_set_scaling_params(
+ struct fann *ann,
+ const struct fann_train_data *data,
+ float new_input_min,
+ float new_input_max,
+ float new_output_min,
+ float new_output_max);
+
+/* Function: fann_clear_scaling_params
+
+ Clears scaling parameters.
+
+ Parameters:
+ ann - ann for which to clear scaling parameters
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL int FANN_API fann_clear_scaling_params(struct fann *ann);
+
+/* Function: fann_scale_input
+
+ Scale data in the input vector before feeding it to the ann, based on previously calculated parameters.
+
+ Parameters:
+ ann - for which scaling parameters were calculated
+ input_vector - input vector that will be scaled
+
+ See also:
+ <fann_descale_input>, <fann_scale_output>
+
+ This function appears in FANN >= 2.1.0
+*/
+FANN_EXTERNAL void FANN_API fann_scale_input( struct fann *ann, fann_type *input_vector );
+
+/* Function: fann_scale_output
+
+ Scale data in the output vector before feeding it to the ann, based on previously calculated parameters.
+
+ Parameters:
+ ann - for which scaling parameters were calculated
+ output_vector - output vector that will be scaled
+
+ See also:
+ <fann_descale_output>, <fann_scale_input>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL void FANN_API fann_scale_output( struct fann *ann, fann_type *output_vector );
+
+/* Function: fann_descale_input
+
+ Descale data in the input vector after getting it from the ann, based on previously calculated parameters.
+
+ Parameters:
+ ann - for which scaling parameters were calculated
+ input_vector - input vector that will be descaled
+
+ See also:
+ <fann_scale_input>, <fann_descale_output>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL void FANN_API fann_descale_input( struct fann *ann, fann_type *input_vector );
+
+/* Function: fann_descale_output
+
+ Descale data in the output vector after getting it from the ann, based on previously calculated parameters.
+
+ Parameters:
+ ann - for which scaling parameters were calculated
+ output_vector - output vector that will be descaled
+
+ See also:
+ <fann_scale_output>, <fann_descale_input>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL void FANN_API fann_descale_output( struct fann *ann, fann_type *output_vector );
+
+#endif
+
+/* Function: fann_scale_input_train_data
+
+ Scales the inputs in the training data to the specified range.
+
+ See also:
+ <fann_scale_output_train_data>, <fann_scale_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_scale_input_train_data(struct fann_train_data *train_data,
+ fann_type new_min, fann_type new_max);
+
+
+/* Function: fann_scale_output_train_data
+
+ Scales the outputs in the training data to the specified range.
+
+ See also:
+ <fann_scale_input_train_data>, <fann_scale_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_scale_output_train_data(struct fann_train_data *train_data,
+ fann_type new_min, fann_type new_max);
+
+
+/* Function: fann_scale_train_data
+
+ Scales the inputs and outputs in the training data to the specified range.
+
+ See also:
+ <fann_scale_output_train_data>, <fann_scale_input_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_scale_train_data(struct fann_train_data *train_data,
+ fann_type new_min, fann_type new_max);
+
+
+/* Function: fann_merge_train_data
+
+ Merges the data from *data1* and *data2* into a new <struct fann_train_data>.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL struct fann_train_data *FANN_API fann_merge_train_data(struct fann_train_data *data1,
+ struct fann_train_data *data2);
+
+
+/* Function: fann_duplicate_train_data
+
+ Returns an exact copy of a <struct fann_train_data>.
+
+ This function appears in FANN >= 1.1.0.
+ */
+FANN_EXTERNAL struct fann_train_data *FANN_API fann_duplicate_train_data(struct fann_train_data
+ *data);
+
+/* Function: fann_subset_train_data
+
+ Returns a copy of a subset of the <struct fann_train_data>, starting at position *pos*
+ and containing *length* elements.
+
+ >fann_subset_train_data(train_data, 0, fann_length_train_data(train_data))
+
+ will do the same as <fann_duplicate_train_data>.
+
+ See also:
+ <fann_length_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL struct fann_train_data *FANN_API fann_subset_train_data(struct fann_train_data
+ *data, unsigned int pos,
+ unsigned int length);
+
+/* Function: fann_length_train_data
+
+ Returns the number of training patterns in the <struct fann_train_data>.
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_length_train_data(struct fann_train_data *data);
+
+/* Function: fann_num_input_train_data
+
+ Returns the number of inputs in each of the training patterns in the <struct fann_train_data>.
+
+ See also:
+ <fann_num_train_data>, <fann_num_output_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_num_input_train_data(struct fann_train_data *data);
+
+/* Function: fann_num_output_train_data
+
+ Returns the number of outputs in each of the training patterns in the <struct fann_train_data>.
+
+ See also:
+ <fann_num_train_data>, <fann_num_input_train_data>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL unsigned int FANN_API fann_num_output_train_data(struct fann_train_data *data);
+
+/* Function: fann_save_train
+
+ Save the training structure to a file, with the format as specified in <fann_read_train_from_file>
+
+ Return:
+ The function returns 0 on success and -1 on failure.
+
+ See also:
+ <fann_read_train_from_file>, <fann_save_train_to_fixed>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL int FANN_API fann_save_train(struct fann_train_data *data, const char *filename);
+
+
+/* Function: fann_save_train_to_fixed
+
+ Saves the training structure to a fixed point data file.
+
+ This function is very useful for testing the quality of a fixed point network.
+
+ Return:
+ The function returns 0 on success and -1 on failure.
+
+ See also:
+ <fann_save_train>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL int FANN_API fann_save_train_to_fixed(struct fann_train_data *data, const char *filename,
+ unsigned int decimal_point);
+
+
+/* Group: Parameters */
+
+/* Function: fann_get_training_algorithm
+
+ Return the training algorithm as described by <fann_train_enum>. This training algorithm
+ is used by <fann_train_on_data> and associated functions.
+
+ Note that this algorithm is also used during <fann_cascadetrain_on_data>, although only
+ FANN_TRAIN_RPROP and FANN_TRAIN_QUICKPROP are allowed during cascade training.
+
+ The default training algorithm is FANN_TRAIN_RPROP.
+
+ See also:
+ <fann_set_training_algorithm>, <fann_train_enum>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL enum fann_train_enum FANN_API fann_get_training_algorithm(struct fann *ann);
+
+
+/* Function: fann_set_training_algorithm
+
+ Set the training algorithm.
+
+ More info available in <fann_get_training_algorithm>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_training_algorithm(struct fann *ann,
+ enum fann_train_enum training_algorithm);
+
+
+/* Function: fann_get_learning_rate
+
+ Return the learning rate.
+
+ The learning rate is used to determine how aggressive training should be for some of the
+ training algorithms (FANN_TRAIN_INCREMENTAL, FANN_TRAIN_BATCH, FANN_TRAIN_QUICKPROP).
+ Do however note that it is not used in FANN_TRAIN_RPROP.
+
+ The default learning rate is 0.7.
+
+ See also:
+ <fann_set_learning_rate>, <fann_set_training_algorithm>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_learning_rate(struct fann *ann);
+
+
+/* Function: fann_set_learning_rate
+
+ Set the learning rate.
+
+ More info available in <fann_get_learning_rate>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_learning_rate(struct fann *ann, float learning_rate);
+
+/* Function: fann_get_learning_momentum
+
+ Get the learning momentum.
+
+ The learning momentum can be used to speed up FANN_TRAIN_INCREMENTAL training.
+ Too high a momentum will, however, not benefit training. Setting momentum to 0 is
+ the same as not using the momentum parameter. The recommended value for this parameter
+ is between 0.0 and 1.0.
+
+ The default momentum is 0.
+
+ See also:
+ <fann_set_learning_momentum>, <fann_set_training_algorithm>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_learning_momentum(struct fann *ann);
+
+
+/* Function: fann_set_learning_momentum
+
+ Set the learning momentum.
+
+ More info available in <fann_get_learning_momentum>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_learning_momentum(struct fann *ann, float learning_momentum);
+
+
+/* Function: fann_get_activation_function
+
+ Get the activation function for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to get activation functions for the neurons in the input layer.
+
+ Information about the individual activation functions is available at <fann_activationfunc_enum>.
+
+ Returns:
+ The activation function for the neuron or -1 if the neuron is not defined in the neural network.
+
+ See also:
+ <fann_set_activation_function_layer>, <fann_set_activation_function_hidden>,
+ <fann_set_activation_function_output>, <fann_set_activation_steepness>,
+ <fann_set_activation_function>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL enum fann_activationfunc_enum FANN_API fann_get_activation_function(struct fann *ann,
+ int layer,
+ int neuron);
+
+/* Function: fann_set_activation_function
+
+ Set the activation function for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation functions for the neurons in the input layer.
+
+ When choosing an activation function it is important to note that the activation
+ functions have different ranges. FANN_SIGMOID is, e.g., in the 0 to 1 range, while
+ FANN_SIGMOID_SYMMETRIC is in the -1 to 1 range and FANN_LINEAR is unbounded.
+
+ Information about the individual activation functions is available at <fann_activationfunc_enum>.
+
+ The default activation function is FANN_SIGMOID_STEPWISE.
+
+ See also:
+ <fann_set_activation_function_layer>, <fann_set_activation_function_hidden>,
+ <fann_set_activation_function_output>, <fann_set_activation_steepness>,
+ <fann_get_activation_function>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_function(struct fann *ann,
+ enum fann_activationfunc_enum
+ activation_function,
+ int layer,
+ int neuron);
+
+/* Function: fann_set_activation_function_layer
+
+ Set the activation function for all the neurons in the layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation functions for the neurons in the input layer.
+
+ See also:
+ <fann_set_activation_function>, <fann_set_activation_function_hidden>,
+ <fann_set_activation_function_output>, <fann_set_activation_steepness_layer>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_function_layer(struct fann *ann,
+ enum fann_activationfunc_enum
+ activation_function,
+ int layer);
+
+/* Function: fann_set_activation_function_hidden
+
+ Set the activation function for all of the hidden layers.
+
+ See also:
+ <fann_set_activation_function>, <fann_set_activation_function_layer>,
+ <fann_set_activation_function_output>, <fann_set_activation_steepness_hidden>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_function_hidden(struct fann *ann,
+ enum fann_activationfunc_enum
+ activation_function);
+
+
+/* Function: fann_set_activation_function_output
+
+ Set the activation function for the output layer.
+
+ See also:
+ <fann_set_activation_function>, <fann_set_activation_function_layer>,
+ <fann_set_activation_function_hidden>, <fann_set_activation_steepness_output>
+
+ This function appears in FANN >= 1.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_function_output(struct fann *ann,
+ enum fann_activationfunc_enum
+ activation_function);
+
+/* Function: fann_get_activation_steepness
+
+ Get the activation steepness for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to get activation steepness for the neurons in the input layer.
+
+ The steepness of an activation function says something about how fast the activation function
+ goes from the minimum to the maximum. A high value for the steepness will also
+ give more aggressive training.
+
+ When training neural networks where the output values should be at the extremes (usually 0 and 1,
+ depending on the activation function), a steep activation function can be used (e.g. 1.0).
+
+ The default activation steepness is 0.5.
+
+ Returns:
+ The activation steepness for the neuron or -1 if the neuron is not defined in the neural network.
+
+ See also:
+ <fann_set_activation_steepness_layer>, <fann_set_activation_steepness_hidden>,
+ <fann_set_activation_steepness_output>, <fann_set_activation_function>,
+ <fann_set_activation_steepness>
+
+ This function appears in FANN >= 2.1.0
+ */
+FANN_EXTERNAL fann_type FANN_API fann_get_activation_steepness(struct fann *ann,
+ int layer,
+ int neuron);
+
+/* Function: fann_set_activation_steepness
+
+ Set the activation steepness for neuron number *neuron* in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation steepness for the neurons in the input layer.
+
+ The steepness of an activation function says something about how fast the activation function
+ goes from the minimum to the maximum. A high value for the steepness will also
+ give more aggressive training.
+
+ When training neural networks where the output values should be at the extremes (usually 0 and 1,
+ depending on the activation function), a steep activation function can be used (e.g. 1.0).
+
+ The default activation steepness is 0.5.
+
+ See also:
+ <fann_set_activation_steepness_layer>, <fann_set_activation_steepness_hidden>,
+ <fann_set_activation_steepness_output>, <fann_set_activation_function>,
+ <fann_get_activation_steepness>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_steepness(struct fann *ann,
+ fann_type steepness,
+ int layer,
+ int neuron);
+
+/* Function: fann_set_activation_steepness_layer
+
+ Set the activation steepness for all of the neurons in layer number *layer*,
+ counting the input layer as layer 0.
+
+ It is not possible to set activation steepness for the neurons in the input layer.
+
+ See also:
+ <fann_set_activation_steepness>, <fann_set_activation_steepness_hidden>,
+ <fann_set_activation_steepness_output>, <fann_set_activation_function_layer>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_steepness_layer(struct fann *ann,
+ fann_type steepness,
+ int layer);
+
+/* Function: fann_set_activation_steepness_hidden
+
+ Set the steepness of the activation functions in all of the hidden layers.
+
+ See also:
+ <fann_set_activation_steepness>, <fann_set_activation_steepness_layer>,
+ <fann_set_activation_steepness_output>, <fann_set_activation_function_hidden>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_steepness_hidden(struct fann *ann,
+ fann_type steepness);
+
+
+/* Function: fann_set_activation_steepness_output
+
+ Set the steepness of the activation functions in the output layer.
+
+ See also:
+ <fann_set_activation_steepness>, <fann_set_activation_steepness_layer>,
+ <fann_set_activation_steepness_hidden>, <fann_set_activation_function_output>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_activation_steepness_output(struct fann *ann,
+ fann_type steepness);
+
+
+/* Function: fann_get_train_error_function
+
+ Returns the error function used during training.
+
+ The error function is described further in <fann_errorfunc_enum>
+
+ The default error function is FANN_ERRORFUNC_TANH
+
+ See also:
+ <fann_set_train_error_function>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL enum fann_errorfunc_enum FANN_API fann_get_train_error_function(struct fann *ann);
+
+
+/* Function: fann_set_train_error_function
+
+ Set the error function used during training.
+
+ The error function is described further in <fann_errorfunc_enum>
+
+ See also:
+ <fann_get_train_error_function>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_train_error_function(struct fann *ann,
+ enum fann_errorfunc_enum
+ train_error_function);
+
+
+/* Function: fann_get_train_stop_function
+
+ Returns the stop function used during training.
+
+ The stop function is described further in <fann_stopfunc_enum>
+
+ The default stop function is FANN_STOPFUNC_MSE
+
+ See also:
+ <fann_set_train_stop_function>, <fann_get_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL enum fann_stopfunc_enum FANN_API fann_get_train_stop_function(struct fann *ann);
+
+
+/* Function: fann_set_train_stop_function
+
+ Set the stop function used during training.
+
+ The stop function is described further in <fann_stopfunc_enum>
+
+ See also:
+ <fann_get_train_stop_function>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_train_stop_function(struct fann *ann,
+ enum fann_stopfunc_enum train_stop_function);
+
+
+/* Function: fann_get_bit_fail_limit
+
+ Returns the bit fail limit used during training.
+
+ The bit fail limit is used during training where the <fann_stopfunc_enum> is set to FANN_STOPFUNC_BIT.
+
+ The limit is the maximum accepted difference between the desired output and the actual output during
+ training. Each output that diverges by more than this limit is counted as an error bit.
+ This difference is divided by two when dealing with symmetric activation functions,
+ so that symmetric and non-symmetric activation functions can use the same limit.
+
+ The default bit fail limit is 0.35.
+
+ See also:
+ <fann_set_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL fann_type FANN_API fann_get_bit_fail_limit(struct fann *ann);
+
+/* Function: fann_set_bit_fail_limit
+
+ Set the bit fail limit used during training.
+
+ See also:
+ <fann_get_bit_fail_limit>
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_bit_fail_limit(struct fann *ann, fann_type bit_fail_limit);
+
+/* Function: fann_set_callback
+
+ Sets the callback function for use during training.
+
+ See <fann_callback_type> for more information about the callback function.
+
+ The default callback function simply prints out some status information.
+
+ This function appears in FANN >= 2.0.0.
+ */
+FANN_EXTERNAL void FANN_API fann_set_callback(struct fann *ann, fann_callback_type callback);
+
+/* Function: fann_get_quickprop_decay
+
+ The decay is a small negative number which is the factor by which the weights
+ are made smaller in each iteration during quickprop training. This is used
+ to make sure that the weights do not become too high during training.
+
+ The default decay is -0.0001.
+
+ See also:
+ <fann_set_quickprop_decay>
+
+ This function appears in FANN >= 1.2.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_quickprop_decay(struct fann *ann);
+
+
+/* Function: fann_set_quickprop_decay
+
+ Sets the quickprop decay factor.
+
+ See also:
+ <fann_get_quickprop_decay>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_quickprop_decay(struct fann *ann, float quickprop_decay);
+
+
+/* Function: fann_get_quickprop_mu
+
+ The mu factor is used to increase and decrease the step-size during quickprop training.
+ The mu factor should always be above 1, since it would otherwise decrease the step-size
+ when it was supposed to increase it.
+
+ The default mu factor is 1.75.
+
+ See also:
+ <fann_set_quickprop_mu>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_quickprop_mu(struct fann *ann);
+
+
+/* Function: fann_set_quickprop_mu
+
+ Sets the quickprop mu factor.
+
+ See also:
+ <fann_get_quickprop_mu>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_quickprop_mu(struct fann *ann, float quickprop_mu);
+
+
+/* Function: fann_get_rprop_increase_factor
+
+ The increase factor is a value larger than 1, which is used to
+ increase the step-size during RPROP training.
+
+ The default increase factor is 1.2.
+
+ See also:
+ <fann_set_rprop_increase_factor>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_rprop_increase_factor(struct fann *ann);
+
+
+/* Function: fann_set_rprop_increase_factor
+
+ The increase factor used during RPROP training.
+
+ See also:
+ <fann_get_rprop_increase_factor>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_rprop_increase_factor(struct fann *ann,
+ float rprop_increase_factor);
+
+
+/* Function: fann_get_rprop_decrease_factor
+
+ The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.
+
+ The default decrease factor is 0.5.
+
+ See also:
+ <fann_set_rprop_decrease_factor>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_rprop_decrease_factor(struct fann *ann);
+
+
+/* Function: fann_set_rprop_decrease_factor
+
+ The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.
+
+ See also:
+ <fann_get_rprop_decrease_factor>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_rprop_decrease_factor(struct fann *ann,
+ float rprop_decrease_factor);
+
+
+/* Function: fann_get_rprop_delta_min
+
+ The minimum step-size is a small positive number determining how small the step-size may become.
+
+ The default delta min is 0.0.
+
+ See also:
+ <fann_set_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_rprop_delta_min(struct fann *ann);
+
+
+/* Function: fann_set_rprop_delta_min
+
+ The minimum step-size is a small positive number determining how small the step-size may become.
+
+ See also:
+ <fann_get_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_rprop_delta_min(struct fann *ann, float rprop_delta_min);
+
+
+/* Function: fann_get_rprop_delta_max
+
+ The maximum step-size is a positive number determining how large the maximum step-size may be.
+
+ The default delta max is 50.0.
+
+ See also:
+ <fann_set_rprop_delta_max>, <fann_get_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_rprop_delta_max(struct fann *ann);
+
+
+/* Function: fann_set_rprop_delta_max
+
+ The maximum step-size is a positive number determining how large the maximum step-size may be.
+
+ See also:
+ <fann_get_rprop_delta_max>, <fann_get_rprop_delta_min>
+
+ This function appears in FANN >= 1.2.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_rprop_delta_max(struct fann *ann, float rprop_delta_max);
+
+/* Function: fann_get_rprop_delta_zero
+
+ The initial step-size is a positive number determining the step-size at the start of training.
+
+ The default delta zero is 0.1.
+
+ See also:
+ <fann_set_rprop_delta_zero>, <fann_get_rprop_delta_min>, <fann_get_rprop_delta_max>
+
+ This function appears in FANN >= 2.1.0.
+*/
+FANN_EXTERNAL float FANN_API fann_get_rprop_delta_zero(struct fann *ann);
+
+
+/* Function: fann_set_rprop_delta_zero
+
+ The initial step-size is a positive number determining the step-size at the start of training.
+
+ See also:
+ <fann_get_rprop_delta_zero>, <fann_get_rprop_delta_min>, <fann_get_rprop_delta_max>
+
+ This function appears in FANN >= 2.1.0.
+*/
+FANN_EXTERNAL void FANN_API fann_set_rprop_delta_zero(struct fann *ann, float rprop_delta_zero);
+
+/* Function: fann_get_sarprop_weight_decay_shift
+
+ The sarprop weight decay shift.
+
+ The default weight decay shift is -6.644.
+
+ See also:
+ <fann_set_sarprop_weight_decay_shift>
+
+ This function appears in FANN >= 2.1.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_sarprop_weight_decay_shift(struct fann *ann);
+
+/* Function: fann_set_sarprop_weight_decay_shift
+
+ Set the sarprop weight decay shift.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <fann_get_sarprop_weight_decay_shift>
+ */
+FANN_EXTERNAL void FANN_API fann_set_sarprop_weight_decay_shift(struct fann *ann, float sarprop_weight_decay_shift);
+
+/* Function: fann_get_sarprop_step_error_threshold_factor
+
+ The sarprop step error threshold factor.
+
+ The default step error threshold factor is 0.1.
+
+ See also:
+ <fann_set_sarprop_step_error_threshold_factor>
+
+ This function appears in FANN >= 2.1.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_sarprop_step_error_threshold_factor(struct fann *ann);
+
+/* Function: fann_set_sarprop_step_error_threshold_factor
+
+ Set the sarprop step error threshold factor.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <fann_get_sarprop_step_error_threshold_factor>
+ */
+FANN_EXTERNAL void FANN_API fann_set_sarprop_step_error_threshold_factor(struct fann *ann, float sarprop_step_error_threshold_factor);
+
+/* Function: fann_get_sarprop_step_error_shift
+
+ The sarprop step error shift.
+
+ The default step error shift is 1.385.
+
+ See also:
+ <fann_set_sarprop_step_error_shift>
+
+ This function appears in FANN >= 2.1.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_sarprop_step_error_shift(struct fann *ann);
+
+/* Function: fann_set_sarprop_step_error_shift
+
+ Set the sarprop step error shift.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <fann_get_sarprop_step_error_shift>
+ */
+FANN_EXTERNAL void FANN_API fann_set_sarprop_step_error_shift(struct fann *ann, float sarprop_step_error_shift);
+
+/* Function: fann_get_sarprop_temperature
+
+ The sarprop temperature.
+
+ The default temperature is 0.015.
+
+ See also:
+ <fann_set_sarprop_temperature>
+
+ This function appears in FANN >= 2.1.0.
+ */
+FANN_EXTERNAL float FANN_API fann_get_sarprop_temperature(struct fann *ann);
+
+/* Method: fann_set_sarprop_temperature
+
+ Set the sarprop temperature.
+
+ This function appears in FANN >= 2.1.0.
+
+ See also:
+ <fann_get_sarprop_temperature>
+ */
+FANN_EXTERNAL void FANN_API fann_set_sarprop_temperature(struct fann *ann, float sarprop_temperature);
+
+#endif
diff --git a/include/fixedfann.h b/include/fixedfann.h
new file mode 100644
index 0000000..28ae9f0
--- /dev/null
+++ b/include/fixedfann.h
@@ -0,0 +1,33 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __fixedfann_h__
+#define __fixedfann_h__
+
+typedef int fann_type;
+
+#undef FIXEDFANN
+#define FIXEDFANN
+#define FANNPRINTF "%d"
+#define FANNSCANF "%d"
+
+#define FANN_INCLUDE
+#include "fann.h"
+
+#endif
diff --git a/include/floatfann.h b/include/floatfann.h
new file mode 100644
index 0000000..e81deee
--- /dev/null
+++ b/include/floatfann.h
@@ -0,0 +1,33 @@
+/*
+Fast Artificial Neural Network Library (fann)
+Copyright (C) 2003-2012 Steffen Nissen (sn at leenissen.dk)
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef __floatfann_h__
+#define __floatfann_h__
+
+typedef float fann_type;
+
+#undef FLOATFANN
+#define FLOATFANN
+#define FANNPRINTF "%.20e"
+#define FANNSCANF "%f"
+
+#define FANN_INCLUDE
+#include "fann.h"
+
+#endif
diff --git a/setup.py b/setup.py
new file mode 100755
index 0000000..b4afbbb
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,105 @@
+#!/usr/bin/python
+
+from setuptools import setup, Extension, find_packages
+import glob
+import os
+import sys
+import subprocess
+
+
+NAME = 'fann2'
+VERSION = '1.0.7'
+
+with open("README.rst") as f:
+    LONG_DESCRIPTION = f.read()
+
+
+def find_executable(executable, path=None):
+    """Try to find 'executable' in the directories listed in 'path' (a
+    string listing directories separated by 'os.pathsep'; defaults to
+    os.environ['PATH']). Returns the complete filename or None if not
+    found.
+    """
+    if path is None:
+        path = os.environ['PATH']
+    paths = path.split(os.pathsep)
+    extlist = ['']
+    if os.name == 'os2':
+        (base, ext) = os.path.splitext(executable)
+        # executable files on OS/2 can have an arbitrary extension, but
+        # .exe is automatically appended if no dot is present in the name
+        if not ext:
+            executable = executable + ".exe"
+    elif sys.platform == 'win32':
+        pathext = os.environ['PATHEXT'].lower().split(os.pathsep)
+        (base, ext) = os.path.splitext(executable)
+        if ext.lower() not in pathext:
+            extlist = pathext
+    for ext in extlist:
+        execname = executable + ext
+        if os.path.isfile(execname):
+            return execname
+        else:
+            for p in paths:
+                f = os.path.join(p, execname)
+                if os.path.isfile(f):
+                    return f
+    else:
+        return None
+
+
+def find_swig():
+    for executable in ("swig2.0", "swig"):
+        if find_executable(executable):
+            return executable
+    raise Exception("Couldn't find a swig binary (tried swig2.0 and swig)!")
+
+
+def build_swig():
+    print("running swig")
+    swig_bin = find_swig()
+    swig_cmd = [swig_bin, '-c++', '-python', 'fann2/fann2.i']
+    subprocess.check_call(swig_cmd)
+
+if "sdist" not in sys.argv:
+    build_swig()
+
+
+def hunt_files(root, which):
+    return glob.glob(os.path.join(root, which))
+
+setup(
+    name=NAME,
+    description='Fast Artificial Neural Network Library (fann) bindings.',
+    long_description=LONG_DESCRIPTION,
+    version=VERSION,
+    author='Steffen Nissen',
+    author_email='lukesky at diku.dk',
+    maintainer='Gil Megidish, Vincenzo Di Massa and FutureLinkCorporation',
+    maintainer_email='gil at megidish.net & hawk.it at tiscali.it and devel at futurelinkcorporation.com',
+    url='https://github.com/FutureLinkCorporation/fann2',
+    license='GNU LESSER GENERAL PUBLIC LICENSE (LGPL)',
+    dependency_links=[
+        "http://sourceforge.net/projects/fann/files/fann/2.2.0/FANN-2.2.0-Source.zip/download",
+        "http://www.swig.org/download.html"],
+    classifiers=[
+        "Development Status :: 4 - Beta",
+        "Topic :: Scientific/Engineering :: Artificial Intelligence",
+        "License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)",
+        "Programming Language :: Python :: 2.7",
+        "Programming Language :: Python :: 3.3",
+        "Programming Language :: Python :: 3.4"
+    ],
+    keywords="ANN artificial intelligence FANN2.2.0 bindings".split(' '),
+    zip_safe=False,
+    include_package_data=True,
+    packages=find_packages(),
+    py_modules=['fann2.libfann'],
+    ext_modules=[Extension('fann2._libfann', ['fann2/fann2_wrap.cxx'],
+                           include_dirs=['./include',
+                                         '../include', 'include'],
+                           libraries=['doublefann'],
+                           define_macros=[("SWIG_COMPILE", None)]
+                           ),
+                 ]
+)