[Multiarch-devel] Re: multiarch status update
Sven Mueller
debian at incase.de
Mon Jul 17 16:09:16 UTC 2006
Hi.
I know this is a late reply, but I felt it was necessary anyway.
Eduard Bloch wrote on 18/05/2006 07:19:
> * Goswin von Brederlow [Tue, May 16 2006, 11:55:18PM]:
>
>>What do you mean with invasive? Multiarch is designed to be
>>implemented without disrupting the existing archive or mono-arch
>>systems at any time. Each package can get converted to multiarch by
>>itself and once a substantial number of packages have done so a
>>multiarch dpkg/apt/aptitude can be used.
>
> And that is why I question it. Do we need that? You demonstrated that it
> is quite easy to set up the dependency chain for a package... but why
> should we care about changing the whole distribution for the sake of a
> few 3rd party packages if we have sufficient workarounds?
It's not only 3rd party packages; there are also other reasons to want
multi-arch support. As far as I heard (and benchmarks done by a friend
confirm it), some arches, especially PPC64, suffer a significant
performance loss when running 64-bit applications, except for
applications that need (or at least profit from) a lot of RAM, such as
large databases or big number-crunching applications that work on large
arrays of data. So on those arches it might be desirable to run 32-bit
applications by default, and use 64-bit builds only where they benefit
from the larger address space.
Also, there are several programs (even open-source ones) which are not
64-bit clean (i.e. they assume certain pointer sizes, etc.). Some of
those are (as far as I heard; I didn't verify) so complex that it would
take a lot of time to make them 64-bit clean. Sure, this probably means
that the open-source programs of this kind will - in some more or less
distant future - get fixed to work on 64-bit arches, but until then,
running their 32-bit versions on otherwise 64-bit systems is probably
what our users would like to do.
>>But cooking the packages is not 100% successful and involves a lot of
>>diversions and alternatives. Every include file gets diverted, every
>>binary in a library gets an alternative. All cooked packages depend on
>>their uncooked other-architecture version for pre/postinst/rm scripts,
>>forcing both arches to be installed even if only the cooked one is
>>needed.
>
> I don't see a big problem with that, sounds like an acceptable
> compromise.
Do you mean the dependency on the native version of the package? Sure,
that would be an acceptable compromise. However, I do see a problem with
the huge number of alternatives: the more alternatives are installed,
the slower their resolution gets (due to filesystem effects, especially
on the ext2/ext3 filesystems still used by most of our users).
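The cost is easy to picture: every alternative managed by
update-alternatives is a two-level symlink chain, so each lookup pays
for two extra symlink dereferences (and the inode reads behind them).
A minimal sketch of that chain, using made-up paths and a made-up
"cooked" i386 binary rather than the real alternatives database:

```shell
# Hypothetical layout mimicking update-alternatives:
#   usr/bin/foo -> etc/alternatives/foo -> usr/bin/foo-i386
demo=/tmp/alt-demo
mkdir -p "$demo/etc/alternatives" "$demo/usr/bin"

# The "cooked" i386 binary an alternative would point at.
printf '#!/bin/sh\necho i386\n' > "$demo/usr/bin/foo-i386"
chmod +x "$demo/usr/bin/foo-i386"

# Generic name -> alternatives directory -> real file.
ln -sf "$demo/usr/bin/foo-i386" "$demo/etc/alternatives/foo"
ln -sf "$demo/etc/alternatives/foo" "$demo/usr/bin/foo"

# Every invocation resolves both links before the binary runs.
"$demo/usr/bin/foo"   # prints "i386"
```

Multiply that chain by every diverted header and every binary in every
cooked library package, and the filesystem overhead adds up.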
>>And still some things won't work without the multiarch dirs being used
>>by any software using dlopen and similar. That includes X locales,
>>gtk-pixmap, pango to start with.
>
> Such things are not okay, but there could be a few workarounds as well.
How many workarounds would you want to put into that cooking script (or
set of scripts)? They will become a maintenance nightmare as more source
packages and more workarounds are added.
>>It works for a stable release but for unstable the constant stream of
>>changes needed in the cooking script would be very disruptive for
>>users.
>
> Only if you port the whole distribution. If you port a few dozen
> library packages, maintaining them should be feasible.
The main problem is: how many packages would need to be cooked/ported?
If you use (a) cooking script(s), you put the entire porting/cooking
burden on a single developer (or a small group of developers). If you
introduce multiarch support into the dpkg/.deb system, however, you
distribute the burden across many shoulders. And if the
architecture-specific packages are sufficiently cleaned up (i.e. no
architecture-independent files in architecture-dependent packages), the
archive will also profit from that by using less space (though it will
probably contain more packages).
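The clean-up I mean is the usual split into an arch: any and an
arch: all package. A hypothetical debian/control fragment (package
names made up) would look like:

```
Package: libfoo1
Architecture: any
Depends: ${shlibs:Depends}, libfoo-common (= ${source:Version})
Description: shared library (architecture-dependent files only)

Package: libfoo-common
Architecture: all
Description: documentation, locale data and other arch-independent files
```

That way the arch-independent files are stored once in the archive
instead of once per architecture.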
>>It also is disruptive to building packages. Build-Depends will only
>>work for the native arch and not for the cooked packages and
>>building for the cooked arch will give precooked Depends (I do cook
>>shlibs files) so they are invalid for uploads.
>
> This problem is only implied by "porting the whole arch and using
> everything like a native package".
Actually, I admit I don't really understand the problem. If packages get
cooked, don't they get cooked from the original .deb files? Or does it
mean that when I want to compile a package which build-depends on X-dev
(where X and X-dev are not available for the native arch, but their
cooked versions are available as X-i386 and X-dev-i386), the
build-dependencies cannot be satisfied automatically, and that when
those build-dependencies are satisfied/modified manually, the resulting
package will depend on the cooked package (X-i386) instead of the real
package (X)?
That would be a major downside to real multi-arch setups, IMHO.
Regards,
Sven
PS: I would really be interested to know how far dpkg & friends have
already been adjusted to support multi-arch, and what "normal" package
maintainers could do today to prepare their packages for multi-arch.
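For illustration, one way such support could surface in a control file:
a field marking a library package as co-installable across arches, with
its files placed under an architecture-qualified library path so the
i386 and amd64 copies don't collide. The field name below is a sketch,
not an agreed interface:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}
Description: shared library, co-installable for several architectures
```

The .so files would then live in something like
/usr/lib/<arch-triplet>/ instead of plain /usr/lib/, so dpkg could
install both the i386 and the amd64 build of libfoo1 at the same time.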