[Debburn-devel] jigdo in libisofs, was: schilyutils upload

Thomas Schmitt scdbackup at gmx.net
Wed Aug 19 10:20:01 UTC 2009


Hi,

Steve McIntyre wrote:
> It should be easy to add an extra call to a jigdo helper function at
> the right point

This has the architectural disadvantage of
interfering with the delicate core of
libisofs operations.
You will have to convince Vreixo and me
that this is really the best solution.

As said, I committed the same architectural
sins recently, so your proposal has a chance
to ride in the slipstream of that.
Please base any detailed proposals or patches
on the current development branch
  https://code.launchpad.net/~libburnia-team/libisofs/scdbackup
or
  bzr branch lp:libisofs/for-libisoburn
or on
  http://scdbackup.sourceforge.net/xorriso-0.4.1.tar.gz
which contains a recent snapshot of the branch.

If you tell me which parameters need to be
forwarded from the application to the inner
writers of libisofs, then I will implement
the appropriate interfaces from API to
writer. xorriso would get the necessary options.

You may assume the parameters are available in
Ecma119Image as described in
libisofs/ecma119.h and are reachable as
writer->target->parameter_name.
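
To illustrate what "reachable as
writer->target->parameter_name" means, here is
a minimal hypothetical sketch. The members
jigdo_template_path and jigdo_jigdo_path are
invented placeholders, not actual fields of
Ecma119Image; the real structs are declared in
libisofs/ecma119.h:

  /* Hypothetical sketch only: invented field names,
     not the real Ecma119Image / IsoImageWriter layout. */
  #include <stddef.h>

  struct ecma119_image_sketch {
      /* ... the existing image generation parameters ... */
      char *jigdo_template_path;  /* invented: path of the .template output */
      char *jigdo_jigdo_path;     /* invented: path of the .jigdo output */
  };

  struct image_writer_sketch {
      struct ecma119_image_sketch *target;  /* back pointer to image state */
      /* ... write_vol_desc(), compute_data_blocks(), write_data(), ... */
  };

  /* Inside a writer, a forwarded option would then be reachable like this: */
  static int sketch_write_data(struct image_writer_sketch *writer)
  {
      if (writer->target->jigdo_template_path != NULL) {
          /* emit jigdo template data alongside the ISO stream */
      }
      return 1;
  }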

-------------------------------------------------

On the other hand, an independent API call
which inspects the directory tree model
of the image would need no more coordination
than the choice of the function name and the
description text in libisofs.h.
One can implement it with existing API calls,
and thus it would be safe against any future
internal changes of libisofs.

One could run this API function on images
that have already been written and on images
that will never actually get written.
(IsoStream allows file content to be read
 transparently from an imported ISO image
 or from the local filesystem.)
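
For illustration, a sketch of such an
inspection function, using only public calls
as I recall them from libisofs.h
(iso_image_get_root, iso_dir_get_children,
iso_dir_iter_next, iso_node_get_type, ...).
Names and signatures should be checked against
the current header before relying on them:

  #include <stdio.h>
  #include <libisofs/libisofs.h>

  /* Recursively visit the tree model; a jigdo helper could record
     the source path, size and checksum of each data file here. */
  static void walk_tree(IsoDir *dir, const char *path)
  {
      IsoDirIter *iter;
      IsoNode *node;
      char child_path[4096];

      if (iso_dir_get_children(dir, &iter) < 0)
          return;
      while (iso_dir_iter_next(iter, &node) == 1) {
          snprintf(child_path, sizeof(child_path), "%s/%s",
                   path, iso_node_get_name(node));
          if (iso_node_get_type(node) == LIBISO_DIR)
              walk_tree((IsoDir *) node, child_path);
          else if (iso_node_get_type(node) == LIBISO_FILE)
              printf("%s\n", child_path);  /* stand-in for jigdo bookkeeping */
      }
      iso_dir_iter_free(iter);
  }

  /* Usage: walk_tree(iso_image_get_root(image), ""); */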


I wrote:
> > if it is not important to record exactly the same
> > file content as in the image in case of race
> > conditions with changing files on disk. 
> No, it's very important. If the contents in the
> jigdo file and what's in the image don't match then
>  you've got problems. 

It occurs to me that the actual outcome of the
race can be read if you run my API call _after_
the image was written. This would have the
further advantage that libisofs can give you the
MD5 of each data file without the need to read
its content again.
(See libisofs doc/checksums.txt for an overview
 of the upcoming MD5 features.)
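
For illustration, after loading the freshly
written image the recorded MD5 of a data file
could be picked up roughly like this.
iso_file_get_md5() stands for the upcoming
checksum call; its final name and signature
may differ from this sketch:

  #include <stdio.h>
  #include <libisofs/libisofs.h>

  /* Print the recorded MD5 of one data file, if the loaded image
     carries checksum data for it. */
  static void print_file_md5(IsoImage *image, IsoFile *file, const char *path)
  {
      char md5[16];
      int i, ret;

      ret = iso_file_get_md5(image, file, md5, 0);
      if (ret > 0) {
          for (i = 0; i < 16; i++)
              printf("%2.2x", (unsigned char) md5[i]);
          printf("  %s\n", path);
      }
  }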

But anyway: isn't the jigdo user in big trouble
if the file contents change during image
generation?
I understand that in jigdo the MD5 connects
the file in the local filesystem and the
file in the ISO image. So the MD5 may
not change freely anyway.

(I mention this race condition only because it
 was recently the reason why I had to bloat
 the very same code where you want to hook in.)


> you've just doubled the amount of I/O needed
> on the input side if you go this way.

Only if you run it before image production.
If you load the image after it was written,
then you can get the MD5s of data files for free.
E.g. an image of mine with 45,000 files and
1.5 GB of data has a tree of about 12 MB and
checksums of 720 kB to load.
... and a little read test after writing
can hardly be wrong.

Is the overall production of an ISO image plus
jigdo files an I/O-critical operation?
(The critical burner output is better protected
 in my proposal. Yours gives less input load.)


> >  Lots of up-wiring through
> >  the object hierarchy might be needed.
> Meh, just a simple matter of code... :-)

Not to be misunderstood:
it is not so much about code beauty as about
sticking with Vreixo's model design. Some
encapsulations would have to be pierced.

This is achievable, of course, if necessary.


> Look in the jigdo source package, doc/TechDetails.txt, for *much* more
> information.

Yes, I should learn about it.

Maybe I can even make a competing proposal
so we can evaluate both in practice before
one of them goes into a release.
We already have MD5 and optional linking
of libz.

The next release of libisofs is planned in
about two weeks. Its topic is MD5 checksumming
of the directory tree, session data, and each
single data file.
After that I will have a closer look at jigdo.


> > but HFS ? Can't they read ISO ?
> Well, if you want to boot a CD on Macs they
> need HFS information too. Or so I'm told.

As with any exotic format, we would need a test
user and a benchmark use case.
Best would be a user who brings HFS specs
and has some knowledge about Macs.


Have a nice day :)

Thomas



