[Neurodebian-devel] MDP patches in AFNI

Kundu, Prantik (NIH/NIMH) [F] prantik.kundu at nih.gov
Thu Nov 15 23:05:26 UTC 2012


Hello All

As you have guessed, the two-stage ica_nodes.py in AFNI is a custom modification, intended to let one contrast bootstrap another -- for example, enabling the use of the tanh contrast in 'somewhat' noisy data by bootstrapping convergence with the more robust pow3 contrast. If the primary and secondary contrast functions are given as the same function, the secondary step is skipped and the standard MDP algorithm continues. This modification is not in the MATLAB reference implementation; it is suited to the peculiarities of fMRI data. We know it improves sensitivity to spontaneous BOLD fluctuations in the course of denoising, based on our NMR signal decay measures of BOLD signal quality, but otherwise the difference would be hard to tell. Perhaps Tiziano would care to run this implementation on some reference data before deciding to integrate the patch into mainline MDP. Importantly, the MDP copy distributed with AFNI is isolated to the runtime directory of the code it is distributed with (meica.libs) and is not made globally accessible.
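The switching logic described above can be sketched in a few lines. This is a hypothetical toy (the parameter names primary_limit, limit, g and fine_g follow the patch below, but the loop only walks a precomputed convergence sequence; it is not the real FastICA update):

```python
def two_stage_schedule(convergence_values, primary_limit=0.01, limit=0.001,
                       g='pow3', fine_g='tanh'):
    """Return the contrast function used at each step, switching from the
    primary (robust pow3) to the fine (more sensitive tanh) contrast once
    the convergence value drops below primary_limit.  If g == fine_g the
    secondary stage is skipped, as in the AFNI patch."""
    used = []
    in_secondary = (g == fine_g)      # same contrast twice: skip stage two
    for c in convergence_values:
        used.append(fine_g if in_secondary else g)
        if not in_secondary and c < primary_limit:
            in_secondary = True       # primary convergence: switch contrast
        if c < limit:
            break                     # final convergence reached
    return used
```

With a convergence trace falling from 0.5 to 0.0005, the early iterations run pow3 and only the last runs tanh, which is the bootstrapping effect described above.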

-Prantik
________________________________________
From: Tiziano Zito [tiziano.zito at bccn-berlin.de]
Sent: Thursday, November 15, 2012 3:52 PM
To: NeuroDebian Development; Kundu, Prantik (NIH/NIMH) [F]
Cc: mdp-dev
Subject: Re: MDP patches in AFNI

Hi,

I had a quick look at the AFNI localized copy of MDP. It seems they
imported a fairly recent git snapshot. If the ica_nodes.py patch
were integrated into MDP, I think they could use the released MDP
3.3 version without any incompatibility.

Is this two-stage convergence switching also present in the original
MATLAB reference version of FastICA? I have been away from the ICA
world for a long time now, so I may have missed some important
recent developments. If this approach is indeed novel and was
introduced by PK, wouldn't it be better to let the user choose which
approach to use -- standard or two-stage -- at instantiation time?
This would keep MDP users happy, because their results would not
look different after upgrading to the next MDP release, and AFNI
could still use exactly the same algorithm they are using now...
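A minimal sketch of what such an instantiation-time switch might look like (the parameter name primary_limit and the None-means-standard default are assumptions for illustration, not MDP's actual FastICANode interface):

```python
class TwoStageOption:
    """Hypothetical sketch of the constructor option suggested above.
    primary_limit=None keeps the classic single-stage behaviour, so
    existing users' results would be unchanged after an upgrade."""

    def __init__(self, g='pow3', fine_g='pow3', limit=0.001,
                 primary_limit=None):
        self.g = g
        self.fine_g = fine_g
        self.limit = limit
        self.primary_limit = primary_limit   # None -> standard behaviour

    @property
    def two_stage(self):
        # The two-stage switch is only meaningful when a primary
        # threshold is set and the two contrast functions differ,
        # matching the skip condition in the AFNI patch.
        return self.primary_limit is not None and self.g != self.fine_g
```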

What do you think?

Ciao,
Tiziano

PS: Ccing the mdp-dev mailing list.

On Thu 15 Nov, 15:08, Yaroslav Halchenko wrote:
> Hi Tiziano,
>
> NB Ccing PK -- original author of the patch
>
> As you might (or might not) know, AFNI now carries a copy of MDP (from a
> state somewhere before the actual 3.3 release, as far as I can see) with a
> slight patch to ica_nodes.py that adds 2 stages of convergence, switching
> to gFine later in the loop (PK, please correct me if I am wrong)
>
> I am attaching the patch for your consideration, to be absorbed into MDP,
> since we would like to avoid maintaining two copies of it ;)
>
> if needed (maybe there are more patches, PK?) you can find the
> complete AFNI source with this mdp copy at
>
> http://git.debian.org/?p=pkg-exppsy/afni.git
>
> branch upstream.
>
> --
> Yaroslav O. Halchenko
> Postdoctoral Fellow,   Department of Psychological and Brain Sciences
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
> WWW:   http://www.linkedin.com/in/yarik

> --- pkundu/meica.libs/mdp/nodes/ica_nodes_old.py      2012-11-15 14:33:53.000000000 -0500
> +++ pkundu/meica.libs/mdp/nodes/ica_nodes.py  2012-11-15 14:33:53.000000000 -0500
> @@ -1,3 +1,4 @@
> +
>  __docformat__ = "restructuredtext en"
>
>  import math
> @@ -325,8 +326,8 @@
>      def __init__(self, approach = 'defl', g = 'pow3', guess = None,
>                   fine_g = 'pow3', mu = 1,
>                   sample_size = 1, fine_tanh = 1, fine_gaus = 1,
> -                 max_it = 1000, max_it_fine = 100,
> -                 failures = 5, limit = 0.001, verbose = False,
> +                 max_it = 5000, max_it_fine = 100,
> +                 failures = 5, primary_limit=0.01, limit = 0.001,  verbose = False,
>                   whitened = False, white_comp = None, white_parm = None,
>                   input_dim = None, dtype=None):
>          """
> @@ -346,7 +347,9 @@
>                        It is passed directly to the WhiteningNode constructor.
>                        Ex: white_parm = { 'svd' : True }
>
> -        limit -- convergence threshold.
> +        limit -- final convergence threshold.
> +
> +        primary_limit -- initial convergence threshold, to switch to the fine function (i.e. linear to non-linear). PK 26-6-12.
>
>          Specific for FastICA:
>
> @@ -417,6 +420,7 @@
>          self.fine_gaus = fine_gaus
>          self.max_it = max_it
>          self.max_it_fine = max_it_fine
> +        self.primary_limit = primary_limit
>          self.failures = failures
>          self.guess = guess
>
> @@ -458,6 +462,7 @@
>                  guess = mult(guess, self.white.get_recmatrix(transposed=1))
>
>          limit = self.limit
> +        primary_limit = self.primary_limit
>          max_it = self.max_it
>          max_it_fine = self.max_it_fine
>          failures = self.failures
> @@ -501,6 +506,7 @@
>          used_g = gOrig
>          stroke = 0
>          fine_tuned = False
> +        in_secondary = False
>          lng = False
>
>          # SYMMETRIC APPROACH
> @@ -529,10 +535,16 @@
>                  v2 = 1.-abs((mult(Q.T, QOldF)).diagonal()).min(axis=0)
>                  convergence_fine.append(v2)
>
> +                if self.g!=self.fine_g and convergence[round] < primary_limit and not in_secondary:
> +                    if verbose:
> +                        print 'Primary convergence, switching to fine cost...'
> +                    used_g = gFine
> +                    in_secondary = True
> +
>                  if convergence[round] < limit:
>                      if fine_tuning and (not fine_tuned):
>                          if verbose:
> -                            print 'Initial convergence, fine-tuning...'
> +                            print 'Secondary convergence, fine-tuning...'
>                          fine_tuned = True
>                          used_g = gFine
>                          mu = muK * self.mu
> @@ -569,7 +581,7 @@
>                  # Show the progress...
>                  if verbose:
>                      msg = ('Step no. %d,'
> -                           ' convergence: %.3f' % (round+1,convergence[round]))
> +                           ' convergence: %.7f' % (round+1,convergence[round]))
>                      print msg
>
>



