[Neurodebian-devel] How can Blends techniques be adapted to NeuroDebian
Michael Hanke
mih at debian.org
Sun Aug 26 16:00:18 UTC 2012
On Sun, Aug 26, 2012 at 01:26:52PM +0200, Andreas Tille wrote:
> On Sun, Aug 26, 2012 at 12:05:11PM +0200, Michael Hanke wrote:
> > I think we should not decide on a templating engine for the purpose of
> > an interface for portal builders. Whatever decision we are going to
> > make, the next blend will need something else. I'd prefer to store the
> > _input_ into whatever templating engine instead.
>
> Could you please define "input" more precisely?
No need, you got it! ;-)
> > In other words,
> > something like a JSON (or similar) structure that holds the description,
> > screenshot link, available versions, architectures per version, ... for
> > each package. And that maybe wrapped into a larger structure that has
> > all the packages per task (per blend). If the information is broken
> > down to a level where no markup in the respective values is required,
> > we should be able to feed that into ANY templating language and should
> > hence be relatively future proof.
>
> If I understand you correctly you mean the following:
>
> Current system:
>
> 1. A script extracts information from UDD and creates HTML pages
> using some templates
>
> Your proposal:
>
> 1. A script extracts information from UDD and creates some structured
> information in a format yet to be defined
> 2. Another script generates
> a) the current blends pages, using the existing templates
> b) dynamic pages, as currently used by NeuroDebian
> c) some other nifty things
>
> Is this correct?
Yep!
> > If we have all the information aggregated in this way, we could even
> > decide at some point to put it all back into a DB (if access latency and
> > such ever becomes an issue). Since you must have all relevant information
> > already represented in some form to be able to feed genshi, I'd assume
> > that it should be relatively straightforward to export it right at this
> > point, or am I wrong?
>
> If I understand correctly then this is quite simple to do. We just
> need to agree on the format, and I can easily add that output even to
> the current script, which would enable a quite smooth migration path.
> Since I would like to have some "user response system" on the web
> sentinel pages, which is not possible with the current static pages, I
> am considering starting a GSoC project next year to enhance this
> anyway; that might be an argument to use a database (with the option
> to store this input) as the "to be defined format" mentioned above.
>
> So let's discuss this along these lines. Does this fit your proposal, and
> what database would you suggest?
I'd say let's aim for JSON-based storage -- that is simple, and is
supported all over the place (incl. web stuff and Python). Actually, if
you already have the per-package information in a Python dict,
generating JSON should be as simple as:
>>> import json
>>> data_in_json_format = json.dumps(dict_with_all_the_info)
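To make that concrete, here is a rough sketch of what such a dump could
look like, following the structure described above (the package name and
all values are made up for illustration):

  import json

  # hypothetical nesting: blend -> task -> package -> metadata,
  # with no markup in any of the values
  blends_data = {
      "neurodebian": {
          "datasets": {
              "somepkg": {
                  "description": "short plain-text description",
                  "screenshot": "http://screenshots.debian.net/package/somepkg",
                  "versions": {
                      "squeeze": {"version": "1.0-1", "archs": ["amd64", "i386"]},
                      "sid": {"version": "1.2-1", "archs": ["amd64", "i386"]},
                  },
              },
          },
      },
  }

  # dump everything into a single text file (step 1 below)
  with open("blends_data.json", "w") as f:
      json.dump(blends_data, f, indent=2)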
JSON goes well with DBs like CouchDB (also frequently used and
_schema-free_, but I'm not a DB expert...).
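For illustration, with the couchdb-python bindings such a record could
go into the DB pretty much as-is (server URL, database name, and
document ID are made up):

  import couchdb

  # any JSON-serializable dict can be stored as a CouchDB document
  server = couchdb.Server("http://localhost:5984/")
  db = server.create("blends")
  doc_id, doc_rev = db.save({
      "_id": "neurodebian/datasets/somepkg",
      "description": "short plain-text description",
  })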
In terms of the implementation roadmap, I'd say:
1. It would be useful to have a dump of all relevant information in a
JSON text file.
2. I'd try to rebuild neuro.debian.net on top of that file. It would
also help to split the current blends pages script so that it can
generate the static HTML pages from the content of that JSON file (see
the sketch below this list).
3. Once we have that done, we know that we have a working and
comprehensive data structure. At this point we can put it into a DB
and create a more dynamic system.
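A minimal sketch of step 2, assuming the JSON dump from above and
rendering with genshi, which the blends pages already use (template and
output file names are made up):

  import json
  from genshi.template import TemplateLoader

  # load the aggregated data from the step-1 dump (no UDD access needed)
  with open("blends_data.json") as f:
      blends_data = json.load(f)

  loader = TemplateLoader(["templates"])
  tmpl = loader.load("task.html")  # hypothetical template name

  # render one static page per task from the dump alone
  for blend, tasks in blends_data.items():
      for task, packages in tasks.items():
          stream = tmpl.generate(blend=blend, task=task, packages=packages)
          with open("%s_%s.html" % (blend, task), "w") as out:
              out.write(stream.render("xhtml"))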
I prefer a step-wise approach, as it reduces the complexity of the
problem and automatically structures it (and the resulting code) into:
data aggregation, storage, and retrieval/use. The resulting overhead
should be minimal, and we can have 95% of the desired goals already at
the end of step 2.
Michael
--
Michael Hanke
http://mih.voxindeserto.de