[Pkg-octave-commit] r2067 - in octave-forge-pkgs/octave-ad/trunk/debian: . patches

Rafael Laboissiere rafael at alioth.debian.org
Sun Jun 8 11:13:57 UTC 2008


Author: rafael
Date: 2008-06-08 11:13:57 +0000 (Sun, 08 Jun 2008)
New Revision: 2067

Added:
   octave-forge-pkgs/octave-ad/trunk/debian/patches/
   octave-forge-pkgs/octave-ad/trunk/debian/patches/documentation-source.diff
   octave-forge-pkgs/octave-ad/trunk/debian/patches/series
Log:
Add patch for including documentation source

Added: octave-forge-pkgs/octave-ad/trunk/debian/patches/documentation-source.diff
===================================================================
--- octave-forge-pkgs/octave-ad/trunk/debian/patches/documentation-source.diff	                        (rev 0)
+++ octave-forge-pkgs/octave-ad/trunk/debian/patches/documentation-source.diff	2008-06-08 11:13:57 UTC (rev 2067)
@@ -0,0 +1,770 @@
+Index: trunk/doc/ad.texi
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ trunk/doc/ad.texi	2008-06-08 12:59:43.000000000 +0200
+@@ -0,0 +1,765 @@
++\input texinfo
++
++@setfilename ad.info
++
++@settitle Automatic Differentiation (AD) in Octave
++@afourpaper
++
++@titlepage
++@title Automatic Differentiation (AD) in Octave
++@subtitle December 2007
++@author Thomas Kasper @email{thomaskasper@@gmx.net}
++@page
++@vskip 0pt plus 1filll
++Copyright @copyright{} 2006, 2007
++
++Permission is granted to make and distribute verbatim copies of
++this manual provided the copyright notice and this permission notice
++are preserved on all copies.
++
++Permission is granted to copy and distribute modified versions of this
++manual under the conditions for verbatim copying, provided that the entire
++resulting derived work is distributed under the terms of a permission
++notice identical to this one.
++
++Permission is granted to copy and distribute translations of this manual
++into another language, under the same conditions as for modified versions.
++@end titlepage
++
++@contents
++
++@chapter Concept
++A wide range of numerical problems can be efficiently solved using derivatives 
++in one way or another. While from a strictly mathematical point of view the 
++derivative is a well-defined object, its computation is anything but trivial.
++
++A classical approach is finite differences. Let @var{f} be differentiable at 
++some point @var{x}. Clearly, for a certain @var{h} small enough
++@iftex
++@tex
++$$
++f'(x) \approx {f(x+h) - f(x) \over h}
++$$
++@end tex
++@end iftex
++The problem with finite differences is twofold. One issue, probably the more 
++important one, is accuracy. Being necessarily an approximation, its quality 
++largely depends on a sensible choice of @var{h}. Large values, obviously, make 
++for a poor estimate of the actual derivative; small ones, on the other hand, are 
++prone to computational artefacts such as cancellation. While there are 
++strategies to cope with this dilemma, they normally do so (and this is the 
++second concern) at the expense of additional evaluations of your function. 
++Central differences, for instance, require a total of 2@var{n} evaluations, 
++where @var{n} is the dimension of the domain space. If, to make matters worse, 
++the computation is carried out within an iterative loop, you forfeit a good deal 
++of the algorithmic efficiency that may have motivated the use of derivatives in 
++the first place.
++
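++The trade-off is easy to observe. The following sketch (not part of the package;
++plain Octave) prints the forward-difference error for a shrinking step size: it
++first decreases with @var{h} and then grows again once cancellation dominates.
++
++@example
++@group
++f = @@(x) exp (x);                  # true derivative at x = 1 is exp (1)
++x = 1;
++for h = 10 .^ (-1:-2:-15)
++  err = abs ((f (x + h) - f (x)) / h - exp (1));
++  printf ("h = %g   error = %g\n", h, err);
++endfor
++@end group
++@end example
++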
++The concept of Automatic Differentiation is altogether different from the 
++above. Unlike finite differences, it provides a means to @emph{analytically} 
++compute the derivative of a function at a given inner point of its domain. A 
++straightforward approach (the one implemented by the extension) is to 
++introduce a new data-type, often referred to in the literature as differential 
++number or gradient. Basically, this is a compound of the value 
++itself and the associated derivative. The fundamental idea is to define the 
++common operators on the set of differential numbers according to the well-known 
++rules of elementary calculus. Hence, multiplication becomes
++@iftex
++@tex
++$$
++  * : \left(\matrix{x \cr \dot{x}}\right), \left(\matrix{y \cr \dot{y}}\right)
++      \mapsto \left(\matrix{xy \cr \dot{x}y + x\dot{y}}\right)
++$$
++@end tex
++@end iftex
++Likewise, the addition of two differential numbers would have to be
++@iftex
++@tex
++$$
++  + : \left(\matrix{x \cr \dot{x}}\right), \left(\matrix{y \cr \dot{y}}\right)
++      \mapsto \left(\matrix{x + y \cr \dot{x} + \dot{y}}\right)
++$$
++@end tex
++@end iftex
++and so on for the remaining cases. Now consider that a function, in practice, 
++is implemented by a computer program, which in turn is made up of discrete 
++instructions. Control flow may bifurcate depending on switch statements, but, no 
++matter how complex its structure, it is eventually a sequence of elementary 
++operations. By overloading all or most of these in the manner described above, 
++you create an (ideally) complete algebra of differential numbers, where
++@iftex
++@tex
++$$
++f(\left(\matrix{x \cr 1}\right)) = \left(\matrix{f(x) \cr D_xf(x)}\right)
++$$
++@end tex
++@end iftex
++Thus, all you have to do is create an initial gradient and pass it on to the 
++computer program, which will then construct the derivative @math{D_xf(x)} along 
++with the output @math{f(x)} simultaneously. 
++ 
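++To make this concrete, here is a hand-rolled sketch in plain Octave (independent
++of the gradient class the package actually provides) that carries a value and its
++derivative through a product, applying the rules above by hand:
++
++@example
++@group
++u = struct ("x", 3, "J", 1);                   # seed: du/du = 1
++v = struct ("x", u.x^2, "J", 2 * u.x * u.J);   # v = u^2 by the chain rule
++w = struct ("x", u.x * v.x, "J", u.J * v.x + u.x * v.J);   # product rule
++w.x, w.J    # f(u) = u^3 at u = 3: value 27, derivative 3*u^2 = 27
++@end group
++@end example
++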
++With AD you avoid the two principal drawbacks of numerical differentiation 
++outlined previously. First of all, it is more reliable in that you no longer 
++have to worry about approximation errors. Although accuracy, of course, is 
++ultimately bounded by machine precision, it @emph{can} make a difference if 
++you get 16 instead of, say, 10 correct figures. The other advantage is perhaps 
++less apparent and of minor relevance to most users. In cases where cost is a 
++non-negligible factor, however, it does matter that the number of evaluations 
++does not scale with the problem size. Whether your function depends on 5 or, 
++say, 500 variables, one pass will do either way. Due to the computational 
++overhead added to every single operation, this comes at the price of slower 
++execution during that single pass; we shall rely on vectorized code for good 
++performance here.
++
++Today Automatic Differentiation is a widely used technology in both industry 
++and academic science. Implementations cover almost every language or 
++application commonly used for numerical computations, the most popular being 
++Fortran, C, and @sc{Matlab}. For further discussion of the topic and relevant 
++links see, for instance, @uref{http://www-sop.inria.fr/tropics/ad/whatisad.html}, 
++the INRIA site dedicated to AD.
++
++@chapter Octave AD-Extension
++
++@section License Information and Disclaimer
++Copyright @copyright{} 2006, 2007 Thomas Kasper
++
++This program is free software; you can redistribute it and/or modify it under 
++the terms of the GNU General Public License as published by the Free Software 
++Foundation; either version 2 of the License, or (at your option) any later 
++version.
++
++This program is distributed in the hope that it will be useful, but 
++WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 
++FITNESS FOR A PARTICULAR PURPOSE. 
++
++You should have received a copy of the GNU General Public License
++along with this program; if not, see <http://www.gnu.org/licenses/>.
++
++@section Prerequisites
++
++@itemize @bullet
++@item
++GNU Octave 3.0
++@end itemize
++The maintainer will do his best to retain upward compatibility, though at the current pace of 
++releases this seems an audacious promise.
++
++@section New Features
++@itemize @bullet
++@item
++Easy installation thanks to Octave's new packaging system
++@item
++Enhanced testsuite and improved documentation
++@item
++Support for n-d arrays allowing gradients of arbitrary dimensions
++@item
++Handling of minimum norm solutions for over- and underdetermined linear systems
++@item
++Implementation of gradients by (complex) sparse matrix operations
++@end itemize
++
++@section Download and Installation
++The current release (@file{ad-0.9.19.tar.gz}, as of this writing) is available for download 
++as a gzipped archive at @w{@uref{http://home.cs.tum.edu/~kasper/ad/index.html#download}}.
++
++Install by typing @code{pkg install ad-x.x.x.tar.gz} at the Octave prompt. For
++advanced options and general information about the new package manager invoke the
++online documentation with @code{help pkg} or consult the Octave-Forge
++website at @uref{http://octave.sf.net}.
++
++@section Testsuite
++It is recommended that you run the integrated test script after installation
++to make sure the entire AD functionality is available to you. Be aware that 
++it takes a couple of seconds before the results are reported. 
++
++@example
++octave:1> fid = fopen ("ad.log", "wt");
++octave:2> __ga__ (fid)
++PASSES 289 out of 289 tests
++@end example
++
++Note that the vast majority of tests are statistical and involve randomly 
++generated data. This may occasionally result in noise exceeding the specified 
++tolerance. Do let me know if any of the tests repeatedly fail on your system. 
++Before you report a bug, however, please check that the log does not already classify 
++it as a known issue.
++
++@chapter Using AD in Octave
++
++@section User Interface and Class Gradient
++The function @code{D} provides an intuitive interface to the AD functionality while 
++hiding the ugly and potentially confusing details from the user. Let us have a 
++look at its signature:
++
++@example
++octave:3> help D
++-- Function File: [Y, J] = D (F, X, VARARGIN)
++   Evaluate F for a given input X and compute the jacobian, such that
++
++              d
++   J(i,j) = ----- Y(i) where Y = F (X, VARARGIN@{:@})
++            dX(j)
++
++   If X is complex, the above holds for the directional derivatives
++   along the real axis
++
++   Derivatives are computed analytically via Automatic Differentiation
++
++
++See also: use_sparse_jacobians.
++@end example
++
++Note that @var{F} must be a function handle, not a character string. A simple 
++use-case scenario could look as follows. Suppose you want to find a root of the 
++non-linear function
++
++@example
++octave:4> function y = foo (x)
++> y(1) = 100 * (x(1) - x(2)^2)^2;
++> y(2) = (1 - x(1))^2; 
++> y = y(:) / 2;
++> endfunction
++@end example
++
++
++One way to go about this is to iteratively refine a random initial guess by
++Newton-steps
++@example
++octave:5> x = rand (2, 1);
++octave:6> for k = 1:30, [y, J] = D (@@foo, x); x = x - J \ y; endfor
++octave:7> x, res = norm (foo (x))
++x = 
++    1.0000
++    1.0000
++
++res = 1.1696e-009
++@end example
++
++Note that @code{D} is a mere convenience function which wraps up the steps 
++outlined in the introductory section. Thus, @code{[y, J] = D (@@F, x)}
++essentially is a shortcut for
++
++@example
++result = F (gradinit (x)); 
++y = result.x;
++J = result.J;
++@end example
++
++With @code{gradinit} you specify the independent variables to differentiate 
++with respect to along with their initial values. In the resulting gradient 
++you find the argument @code{x} augmented by the jacobian which evaluates to 
++the identity matrix of size @code{numel (x)}
++
++@example
++octave:5> g = gradinit ([-1; 2])
++g = 
++  
++value =
++
++  -1
++   2
++     
++(partial) derivative(s) =
++
++   1   0
++   0   1
++@end example
++
++Gradients represent a class of their own and are listed as such by the
++interpreter. Use @code{isgradient} for type-checking:
++
++@example
++octave:8> who -long g
++*** local user variables:
++
++  Prot Name        Size                    Bytes  Class
++  ==== ====        ====                    =====  =====
++   rwd g           2x1                        48  gradient
++   
++Total is 2 elements using 48 bytes
++ 
++octave:9> isgradient (g)
++ans = 1
++@end example
++
++Each of the two members (value and partial derivatives) can be accessed by
++suffixing the variable with ".x" and ".J" respectively. (For obvious reasons, 
++however, they should only be read out and never be assigned to directly.) 
++Analytical expressions in one or more variables of type gradient automatically 
++evaluate to gradients:
++
++@example
++octave:10> foo (g)
++ans =
++  
++value =
++  
++   1250
++      2
++
++(partial) derivative(s) =
++ 
++   -500   2000
++     -2     -0
++@end example
++
++At any time their members satisfy @code{g.J(i,j) = d/dx(j)[g.x(i)]}, with @code{x(j)} 
++the variables previously passed to @code{gradinit}. Beware that this relation is 
++independent of shape and extends to arrays of arbitrary dimension. Thus, operations 
++which preserve the linear order of elements (reshape, for instance, or transposing 
++a column vector) do not alter the jacobian.
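++
++For illustration, the following sketch uses only @code{gradinit} and the @code{.J}
++member documented above; it assumes, as the preceding paragraph implies, that
++@code{reshape} accepts gradients:
++
++@example
++@group
++g = gradinit ([1; 2; 3]);
++h = reshape (2 * g, 1, 3);       # same elements, new shape
++isequal (h.J, 2 * eye (3))       # should be 1: reshape leaves .J untouched
++@end group
++@end example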
++
++@section Sparse Storage Mode
++You may ask that partial derivatives be stored as a sparse matrix by invoking 
++@code{use_sparse_jacobians} with a nonzero value. Since jacobians tend to be 
++sparsely populated as the dimension grows, doing so may eventually pay off in 
++terms of both memory consumption and speed.
++@example
++octave:11> use_sparse_jacobians (1);
++octave:12> [y, J] = D (@@cumprod, reshape (1:9, 3, 3), 2)
++
++y =
++
++    1    4   28
++    2   10   80
++    3   18  162
++
++J =
++
++Compressed Column Sparse (rows = 9, cols = 9, nnz = 18)
++
++  (1, 1) -> 1
++  (4, 1) -> 4
++  (7, 1) -> 28
++  (2, 2) -> 1
++  (5, 2) -> 5
++  (8, 2) -> 40
++  (3, 3) -> 1
++  (6, 3) -> 6
++  (9, 3) -> 54
++  (4, 4) -> 1
++  (7, 4) -> 7
++  (5, 5) -> 2
++  (8, 5) -> 16
++  (6, 6) -> 3
++  (9, 6) -> 27
++  (7, 7) -> 4
++  (8, 8) -> 10
++  (9, 9) -> 18
++@end example
++
++This is a best-effort service, however, and there is no guarantee as to whether the 
++returned jacobian will in fact be sparse. It certainly helps when the involved 
++operands are:
++
++@example
++octave:13> A = rand (6); b = rand (6);
++octave:14> x = sparse (A) \ gradinit (b); 
++octave:15> spy (x.J, 0.5), issparse (x.J)
++ans = 1
++@end example
++
++@section Complex-valued Domains
++Although the extension is primarily designed for functions with a real domain, the 
++author does not see fit to impose any restriction here. Users should bear in mind, 
++though, when working with complex input, that what they get is the directional 
++derivative along the real axis. It may (or may not, for that matter) coincide with 
++@emph{the} derivative, depending on whether the function is locally holomorphic or not.
++
++@example
++octave:16> [z, dz] = D (@@abs, 1 + i)
++z = 1.4142
++dz = 0.70711
++@end example
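++
++For a locally holomorphic function, by contrast, the directional derivative along
++the real axis @emph{is} the complex derivative. A quick check (a sketch; output
++not shown) using the overloaded @code{exp}:
++
++@example
++@group
++[z, dz] = D (@@exp, 1 + i);
++abs (dz - exp (1 + i))    # practically zero: exp is holomorphic
++@end group
++@end example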
++
++@chapter Limitations
++Beware that operator overloading is frail when it comes to interfacing with 
++low-level routines. If you are in the habit of writing good portions of code 
++in C++ or Fortran as DLD-functions (and there may well be good reasons 
++for it), you will definitely run into trouble. The same caveat applies even 
++to some functions of the Octave core API, in which case you will get an 
++error message like the one below:
++
++@example
++octave:17> gamma (gradinit (4))
++error: AD-rule unknown or function not overloaded
++@end example
++  
++One might consider adding rules as the need arises. On the other hand, 
++balancing the benefit against the extra effort, it is often more reasonable to 
++fall back on numerical differentiation for less common operations and use
++@code{numgradient} instead. In any event, the algebra provided by the extension 
++makes no claim to completeness, and there certainly would be no point in trying.
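++
++As a rough illustration of such a fallback (a hand-written central difference;
++this is @emph{not} the actual @code{numgradient} interface, whose signature is
++not reproduced here):
++
++@example
++@group
++f  = @@(x) gamma (x);
++x  = 4;
++h  = eps^(1/3) * max (1, abs (x));          # common step-size heuristic
++df = (f (x + h) - f (x - h)) / (2 * h);     # central difference
++@end group
++@end example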
++
++@chapter Index
++
++@section Functions by Category
++@subsection Overloaded Operators
++@table @asis
++@item +
++no restriction
++@item -
++no restriction
++@item *
++no restriction
++@item /
++operand 2 must have maximal rank
++@item ldiv
++operand 1 must have maximal rank
++@item pow
++both operands must be scalar or, if op1 is square, op2 must be a non-negative integer. 
++This implies that in the latter case op2 cannot be a gradient, since the set of 
++integers has empty interior
++@item .*
++no restriction
++@item ./
++no restriction
++@item elpow
++no restriction
++@end table
++
++@subsection Utility Functions
++@table @asis
++@item __ga__
++Test script for the gradient algebra implemented by the package AD
++@item D
++Evaluate @var{F} for a given input @var{x} and compute the jacobian
++@item gradinit
++Create a gradient with value @var{x} and derivative @code{eye}(@code{numel}(@var{x}))
++@item isgradient
++Return 1 if @var{x} is a gradient, otherwise return 0
++@item use_sparse_jacobians
++Query or set the storage mode for AD
++@end table
++
++@subsection Overloaded Functions
++@table @asis
++@item gradabs
++overloads built-in mapper `abs' for a gradient X
++@item gradacos
++overloads built-in mapper `acos' for a gradient X
++@item gradacosh
++overloads built-in mapper `acosh' for a gradient X
++@item gradasin
++overloads built-in mapper `asin' for a gradient X
++@item gradasinh
++overloads built-in mapper `asinh' for a gradient X
++@item gradatan
++overloads built-in mapper `atan' for a gradient X
++@item gradatanh
++overloads built-in mapper `atanh' for a gradient X
++@item gradconj
++overloads built-in mapper `conj' for a gradient X
++@item gradcos
++overloads built-in mapper `cos' for a gradient X
++@item gradcosh
++overloads built-in mapper `cosh' for a gradient X
++@item gradcot
++overloads mapping function `cot' for a gradient X
++@item gradcumprod
++overloads built-in function `cumprod' for a gradient X
++@item gradcumsum
++overloads built-in function `cumsum' for a gradient X
++@item gradexp
++overloads built-in mapper `exp' for a gradient X
++@item gradfind
++overloads built-in function `find' for a gradient X
++@item gradimag
++overloads built-in mapper `imag' for a gradient X
++@item gradlog
++overloads built-in mapper `log' for a gradient X
++@item gradlog10
++overloads built-in mapper `log10' for a gradient X
++@item gradprod
++overloads built-in function `prod' for a gradient X
++@item gradreal
++overloads built-in mapper `real' for a gradient X
++@item gradsin
++overloads built-in mapper `sin' for a gradient X
++@item gradsinh
++overloads built-in mapper `sinh' for a gradient X
++@item gradsqrt
++overloads built-in mapper `sqrt' for a gradient X
++@item gradsum
++overloads built-in function `sum' for a gradient X
++@item gradtan
++overloads built-in mapper `tan' for a gradient X
++@item gradtanh
++overloads built-in mapper `tanh' for a gradient X
++@end table
++
++@section Functions Alphabetically
++
++@subsection __ga__
++@deftypefn {Function File} {} __ga__ (@var{name}, @var{varargin})
++@deftypefnx {Function File} {} __ga__ (@var{fid})
++Test script for the gradient algebra implemented by the package AD
++
++If the first argument is a character string, assert that functionality 
++@var{name} complies with the specification. Otherwise run a set of 
++predefined tests and report failures to the stream @var{fid}
++(defaulting to @var{stderr})
++
++Intended use is:
++
++@example
++@group
++fid = fopen ("errors.log", "wt");
++__ga__ (fid)
++@result{} PASSES [#] out of [#] tests ([#] expected failures)
++@end group
++@end example
++@end deftypefn
++See also: test
++
++@subsection D
++@deftypefn {Function File} {[@var{y}, @var{J}] =} D (@var{F}, @var{x}, @var{varargin})
++Evaluate @var{F} for a given input @var{x} and compute the jacobian, such that
++@iftex
++@tex
++$$ J_{i,j} = {\partial y_i \over \partial x_j} ,\qquad y = F (x, {\tt varargin\{:\}})$$
++@end tex
++@end iftex
++
++If @var{x} is complex, the above holds for the directional derivatives
++along the real axis
++
++Derivatives are computed analytically via Automatic Differentiation
++@end deftypefn
++See also: use_sparse_jacobians
++
++@subsection gradabs
++
++@deftypefn {Mapping Function} {} gradabs (@var{x})
++overloads built-in mapper @code{abs} for a gradient @var{x}
++@end deftypefn
++See also: abs
++
++@subsection gradacos
++
++@deftypefn {Mapping Function} {} gradacos (@var{x})
++overloads built-in mapper @code{acos} for a gradient @var{x}
++@end deftypefn
++See also: acos
++
++@subsection gradacosh
++
++@deftypefn {Mapping Function} {} gradacosh (@var{x})
++overloads built-in mapper @code{acosh} for a gradient @var{x}
++@end deftypefn
++See also: acosh
++
++@subsection gradasin
++
++@deftypefn {Mapping Function} {} gradasin (@var{x})
++overloads built-in mapper @code{asin} for a gradient @var{x}
++@end deftypefn
++See also: asin
++
++@subsection gradasinh
++
++@deftypefn {Mapping Function} {} gradasinh (@var{x})
++overloads built-in mapper @code{asinh} for a gradient @var{x}
++@end deftypefn
++See also: asinh
++
++@subsection gradatan
++
++@deftypefn {Mapping Function} {} gradatan (@var{x})
++overloads built-in mapper @code{atan} for a gradient @var{x}
++@end deftypefn
++See also: atan
++
++@subsection gradatanh
++
++@deftypefn {Mapping Function} {} gradatanh (@var{x})
++overloads built-in mapper @code{atanh} for a gradient @var{x}
++@end deftypefn
++See also: atanh
++
++@subsection gradconj
++
++@deftypefn {Mapping Function} {} gradconj (@var{x})
++overloads built-in mapper @code{conj} for a gradient @var{x}
++@end deftypefn
++See also: conj
++
++@subsection gradcos
++
++@deftypefn {Mapping Function} {} gradcos (@var{x})
++overloads built-in mapper @code{cos} for a gradient @var{x}
++@end deftypefn
++See also: cos
++
++@subsection gradcosh
++
++@deftypefn {Mapping Function} {} gradcosh (@var{x})
++overloads built-in mapper @code{cosh} for a gradient @var{x}
++@end deftypefn
++See also: cosh
++
++@subsection gradcot
++
++@deftypefn {Mapping Function} {} gradcot (@var{x})
++overloads mapping function @code{cot} for a gradient @var{x}
++@end deftypefn
++See also: cot
++
++@subsection gradcumprod
++
++@deftypefn {Function File} {@var{y} =} gradcumprod (@var{x})
++@deftypefnx {Function File} {@var{y} =} gradcumprod (@var{x}, @var{dim})
++overloads built-in function @code{cumprod} for a gradient @var{x}
++@end deftypefn
++See also: cumprod
++
++@subsection gradcumsum
++
++@deftypefn {Function File} {@var{y} =} gradcumsum (@var{x})
++@deftypefnx {Function File} {@var{y} =} gradcumsum (@var{x}, @var{dim})
++overloads built-in function @code{cumsum} for a gradient @var{x}
++@end deftypefn
++See also: cumsum
++
++@subsection gradexp
++
++@deftypefn {Mapping Function} {} gradexp (@var{x})
++overloads built-in mapper @code{exp} for a gradient @var{x}
++@end deftypefn
++See also: exp
++
++@subsection gradfind
++
++@deftypefn {Function File} {} gradfind (@var{x})
++overloads built-in function @code{find} for a gradient @var{x}
++@end deftypefn
++See also: find
++
++@subsection gradimag
++
++@deftypefn {Mapping Function} {} gradimag (@var{x})
++overloads built-in mapper @code{imag} for a gradient @var{x}
++@end deftypefn
++See also: imag
++
++@subsection gradinit
++
++@deftypefn {Loadable Function} {@var{g} =} gradinit (@var{x})
++Create a gradient with value @var{x} and derivative @code{eye}(@code{numel}(@var{x}))
++
++Substituting @var{g} for @var{x} in an analytical expression @var{F}
++depending on @var{x} will then produce both @var{F}(@var{x}) and
++the jacobian @math{D}@var{F}(@var{x}) at once. See the example below:
++
++@example
++@group
++a = gradinit ([1; 2]);
++b = [a.' * a; 2 * a]
++@result{}
++b =
++
++value =
++
++  5
++  2
++  4
++
++(partial) derivative(s) =
++
++  2  4
++  2  0
++  0  2
++
++@end group
++@end example
++
++Members can be accessed by suffixing the variable with @code{.x} and @code{.J} 
++respectively
++@end deftypefn
++See also: use_sparse_jacobians
++
++@subsection gradlog
++
++@deftypefn {Mapping Function} {} gradlog (@var{x})
++overloads built-in mapper @code{log} for a gradient @var{x}
++@end deftypefn
++See also: log
++
++@subsection gradlog10
++
++@deftypefn {Mapping Function} {} gradlog10 (@var{x})
++overloads built-in mapper @code{log10} for a gradient @var{x}
++@end deftypefn
++See also: log10
++
++@subsection gradprod
++
++@deftypefn {Function File} {@var{y} =} gradprod (@var{x})
++@deftypefnx {Function File} {@var{y} =} gradprod (@var{x}, @var{dim})
++overloads built-in function @code{prod} for a gradient @var{x}
++@end deftypefn
++See also: prod
++
++@subsection gradreal
++
++@deftypefn {Mapping Function} {} gradreal (@var{x})
++overloads built-in mapper @code{real} for a gradient @var{x}
++@end deftypefn
++See also: real
++
++@subsection gradsin
++
++@deftypefn {Mapping Function} {} gradsin (@var{x})
++overloads built-in mapper @code{sin} for a gradient @var{x}
++@end deftypefn
++See also: sin
++
++@subsection gradsinh
++
++@deftypefn {Mapping Function} {} gradsinh (@var{x})
++overloads built-in mapper @code{sinh} for a gradient @var{x}
++@end deftypefn
++See also: sinh
++
++@subsection gradsqrt
++
++@deftypefn {Mapping Function} {} gradsqrt (@var{x})
++overloads built-in mapper @code{sqrt} for a gradient @var{x}
++@end deftypefn
++See also: sqrt
++
++@subsection gradsum
++
++@deftypefn {Function File} {@var{y} =} gradsum (@var{x})
++@deftypefnx {Function File} {@var{y} =} gradsum (@var{x}, @var{dim})
++overloads built-in function @code{sum} for a gradient @var{x}
++@end deftypefn
++See also: sum
++
++@subsection gradtan
++
++@deftypefn {Mapping Function} {} gradtan (@var{x})
++overloads built-in mapper @code{tan} for a gradient @var{x}
++@end deftypefn
++See also: tan
++
++@subsection gradtanh
++
++@deftypefn {Mapping Function} {} gradtanh (@var{x})
++overloads built-in mapper @code{tanh} for a gradient @var{x}
++@end deftypefn
++See also: tanh
++
++@subsection isgradient
++
++@deftypefn {Loadable Function} {} isgradient (@var{x})
++Return 1 if @var{x} is a gradient, otherwise return 0
++@end deftypefn
++
++@subsection use_sparse_jacobians
++
++@deftypefn {Loadable Function} {@var{val} =} use_sparse_jacobians ()
++@deftypefnx {Loadable Function} {@var{val} =} use_sparse_jacobians (@var{new_val})
++Query or set the storage mode for AD. If nonzero, gradients
++will try to store partial derivatives as a sparse matrix
++@end deftypefn
++
++@bye

Added: octave-forge-pkgs/octave-ad/trunk/debian/patches/series
===================================================================
--- octave-forge-pkgs/octave-ad/trunk/debian/patches/series	                        (rev 0)
+++ octave-forge-pkgs/octave-ad/trunk/debian/patches/series	2008-06-08 11:13:57 UTC (rev 2067)
@@ -0,0 +1 @@
+documentation-source.diff



