[Neurodebian-upstream] [Nipy-devel] Standard dataset

Michael Hanke michael.hanke at gmail.com
Fri Sep 24 00:43:57 UTC 2010


On Thu, Sep 23, 2010 at 05:01:41PM -0700, Matthew Brett wrote:
> > $ showmealldata T1
> > /usr/share/data/mni-colin27/colin27_t1_tal_lin.nii.gz
> > /usr/share/data/spm8/canonical/avg305T1.nii
> 
> Are these querying filenames with regular expressions?   Or are we
> imagining some sort of labels attached to each of the bits of data,
> with a central database by which these can be queried?

They would query some sort of database that knows about labeled datasets
with attached meta data, as well as labels for each dataset's content
with additional individual meta data.
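To make this concrete, here is a minimal sketch of such a label-based
lookup, assuming a simple local SQLite table that maps file paths to
content labels. Everything here (the table layout, the extra T2 entry,
the function names) is an illustrative assumption, not a proposal for
the actual schema:

```python
# Hypothetical sketch: a local meta-data database mapping dataset
# files to content labels, queried the way "showmealldata T1" would.
import sqlite3


def make_db():
    # In-memory DB stands in for the real local meta-data store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE files (path TEXT, label TEXT)")
    db.executemany(
        "INSERT INTO files VALUES (?, ?)",
        [
            # Paths taken from the example above; the T2 entry is
            # made up for illustration.
            ("/usr/share/data/mni-colin27/colin27_t1_tal_lin.nii.gz", "T1"),
            ("/usr/share/data/spm8/canonical/avg305T1.nii", "T1"),
            ("/usr/share/data/spm8/canonical/avg305T2.nii", "T2"),
        ],
    )
    return db


def showmealldata(db, label):
    """Return all file paths whose content carries the given label."""
    rows = db.execute(
        "SELECT path FROM files WHERE label = ? ORDER BY path", (label,)
    )
    return [path for (path,) in rows]
```

Any language with an SQLite binding (Perl, C, Haskell, ...) could query
the same store, which is one way to get the bindings discussed below
essentially for free.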

> > (or similar stuff in Python, Perl, C, Haskell,....)
> 
> I suppose that can be an application like apt, that could be written
> in any language.

Right, but we need to have bindings for that thingie...

> > 1. A facility to get the data onto a system
> >
> >   Several readily usable solutions are available.
> 
> By system - you mean some sort of central repository?  Solutions being
> scp and so on?

scp is just a transport. I was more aiming at something that can talk
to a majority of the already existing data warehouses (Steve gave a few
examples earlier in this thread). We need to be able to tell the
"getter" that we want a particular dataset; it then figures out how to
access it, downloads it, and feeds all relevant meta data into our
(local) meta data database.
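The getter workflow described above might look roughly like this. This
is only a sketch under assumed names: the registry contents, the
warehouse type, the URL, and the fetcher stub are all invented for
illustration, and a real implementation would actually download and
unpack data:

```python
# Hypothetical sketch of the "getter": resolve a dataset name to a
# warehouse, fetch it via that warehouse's native access method, and
# register its meta data in the local store.

# Assumed registry of known datasets (name -> access + meta data).
REGISTRY = {
    "mni-colin27": {
        "warehouse": "http-archive",
        "url": "https://example.org/mni-colin27.tar.gz",
        "meta": {"modality": "T1", "space": "Talairach"},
    },
}

LOCAL_META = {}  # stand-in for the local meta-data database


def fetch_http_archive(url):
    # A real fetcher would download and unpack; here we only compute
    # where the local copy would end up.
    return "/var/lib/data/" + url.rsplit("/", 1)[-1]


# One fetcher per warehouse "native language".
FETCHERS = {"http-archive": fetch_http_archive}


def get_dataset(name):
    """Resolve access, fetch, and import meta data for one dataset."""
    entry = REGISTRY[name]
    fetch = FETCHERS[entry["warehouse"]]  # pick the native transport
    path = fetch(entry["url"])            # obtain a local copy
    # Feed the warehouse's meta data into the local database.
    LOCAL_META[name] = dict(entry["meta"], path=path)
    return path
```

The point of the fetcher table is exactly the "native language" issue
below: each warehouse keeps its own access method, and only the
imported meta data is normalized.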

If we cannot access existing warehouses in their "native language" and
import their relevant meta data, we would be forced to repack all
datasets into a new, commonly understood format -- something that we
would want to avoid, I guess.

> OK - I'll try and investigate datapkg a bit more, and then try and
> think of something sensible to ask ;)

And I'll try to figure out the licensing situation of some interesting
datasets...


Thanks,

Michael

-- 
GPG key:  1024D/3144BE0F Michael Hanke
http://mih.voxindeserto.de
