[Debian-coldfire-devel] Current Status?
Wouter Verhelst
w at uter.be
Wed Sep 26 10:57:00 UTC 2007
On Tue, Sep 25, 2007 at 10:59:22PM +0200, Carsten Schlote wrote:
> Hi Wouter,
>
> > -----Original Message-----
> > From: Wouter Verhelst [mailto:wouter at debian.org]
> > Sent: Tuesday, September 25, 2007 8:17 PM
> > To: Carsten Schlote
> > Cc: debian-coldfire-devel at lists.alioth.debian.org
> > Subject: Re: [Debian-coldfire-devel] Current Status?
> >
> > Yes, that's what I've been using for now.
>
> I'm using ptxdist for my other projects. So I started porting the
> ltib efforts to ptxdist. It's pretty much straightforward.
>
> > Note that there are also some bugs in ltib; e.g., I still
> > haven't been able to create a native compiler; it doesn't
> > even build if I try to include a compiler in my ltib image.
>
> I'm also working on an m68k port for the ptxdist-based OSELAS
> toolchains. I got the compiler compiled; I'm at glibc right now... The
> m68k/ColdFire code is poorly maintained, so some work is required at
> this point.
>
> > Having precompiled coldfire-specific packages isn't hard,
> > indeed; in fact, I could theoretically set that up tomorrow.
> > It's just that that's not what I've been aiming for.
>
> Fine. I have never built an rpm or dpkg to date, nor maintained a
> Debian mirror. So you might help me get all these dpkgs compiled :-)
>
> > > If the project is still active,
> >
> > We are, to some extent. I'm the only person who's been
> > working on this so far, and I've slowly come to the
> > conclusion that all this currently is a bit out of my league
> > -- at least the things I've been trying to do:
> > modify the toolchain to no longer emit instructions that
> > aren't supported by the coldfire, while at the same time
> > being able to run on classic m68k machines, too.
>
> Isn't there a working cross compiler for v4e cores? This should be ok
> for the beginning. To compile a native gcc, we first need a working
> glibc port, or use the glibc stuff from the cross compilers sysroot dir.
Yeah; what's in ltib works okay as a cross-compiler--it's just that it
doesn't work if I want to build it natively.
> > That's a bit harder than it sounds: first, there's one
> > instruction that exists on coldfire but not on m68k; second,
> > there are some instructions that behave slightly differently
> > (so far I've found FMOVEM and incrementing or decrementing
> > addressing modes in byte context on register A7)
>
> There are existing ports for v4e at CodeSourcery. I presume that this
> compiler works ok for v4e targets. So why not use their patches?
There are a few possible ways to proceed, really:
- Ditch whatever support we have for m68k currently, and focus on
ColdFire alone. I'd prefer not to do that, since the Debian m68k port
is still surprisingly active. Moreover, it probably isn't even
possible: if I went to debian-m68k and suggested ditching the m68k
port, I'd probably get flamed and kicked out, and m68k support would
continue regardless. So scratch that.
- Create a ColdFire port separate from the current m68k port, and keep
the m68k port in. Given how adding a new port to the archive requires
quite some additional disk space on ftp-master.debian.org and our
mirrors, and a significant amount of extra bandwidth for ftp-master,
I'm afraid our ftp-masters (that is, the people who maintain
ftp-master.debian.org) will object to this option.
- Create a ColdFire port separate from the current m68k port, and
maintain one or the other outside the debian.org mirror network. This
would be possible, but would imply a significant amount of extra work,
and the lack of things such as security support or stable releases
that get released along with the rest of Debian. Maintaining the
current m68k port out of the archive would probably not happen, for
the same reasons as in the first point; so that leaves maintaining the
ColdFire port outside of the archive--which is possible, but it kind
of defeats the purpose.
- Create a hybrid port that will somehow run on both whatever the
current m68k port runs on, *and* ColdFire hardware. That's what I've
been focusing on, but it requires extra work; either in the kernel (to
emulate the missing opcodes), or in the toolchain (so that it does not
emit or use any opcodes that aren't in both architectures). Since
neither architecture is a strict subset of the other, you can't just
go ahead and use the ColdFire patches that are out there; you need to
modify them.
The last option is what I've been working on so far, but by now I've
come to the conclusion that it simply isn't going to happen if I need to
do this all by myself; and since the stuff I've done so far is outdated
and incomplete anyway, it would need to be redone.
So I guess we're at a point where it doesn't really matter and we can
change direction, if needed; and if you're willing to work on this, then
your input is certainly more than welcome.
> > > I'd like to contribute over time:
> > > * Basic kernels for MCF 547x/548x/5445x CPUs
> > >   It should be possible to pass platform IDs to the kernel init,
> > >   so that a single image might fit the handful of MMU-enabled
> > >   ColdFire boards.
> > > * A generic m68k GCC compiler targeted at ColdFire code generation.
> > > * Updates for glibc and the native Linux threading stuff
> >
> > That'd be cool. There's already someone working on adding TLS
> > support to glibc; you might want to coordinate with him.
>
> Do you know who? I'd like to mail this person. Maybe some efforts can be
> shared.
That'd be Brad Boyer <flar at allandria.com>.
> > As for the kernel: it was previously suggested that we write
> > some emulation layer for the missing opcodes on coldfire, so
> > that we could start using coldfire processors "right away". I
> > didn't do that, because I know even less about kernel
> > internals and opcode emulation than I do about compilers and
> > toolchains; but it'd be cool if someone (you, if you want to)
> > were to help there. After fighting with gcc on my own for so
> > long, I think I can now safely say that modifying the kernel
> > will be far easier for someone familiar with it.
>
> Yes, that's possible. I wrote some FPU/instruction emulation on the
> Amiga years ago for the m68060. But such an emulation would consume
> noticeable amounts of CPU time - it's slow and not worth the effort.
> We need pure v4e/v4m code for performance.
I realize that; however, it would
- give us something to use those five boards with; we received them
for free from Freescale a few years ago, but haven't done anything
with them yet. If nothing else, we'd at least speed up our buildd
network a bit (even with emulation, I'd expect them to be faster than
our current buildd hardware, some of which runs as slow as 25 MHz)
- Give people an incentive to join the project. That, especially, is
important: when there's nothing to show, people aren't interested in
the effort. When that's the case, they're not as likely to help you,
either.
To mitigate the effects of the slowdown, we could make gcc's
-msoft-float option the default; this would avoid using any FPU
instructions entirely, so the only slowdown would be in emulated integer
opcodes (some of which would still be necessary).
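For illustration, here's one way to check that -msoft-float keeps FPU
opcodes out of the generated code. The m68k-linux-gnu-gcc name is an
assumption for whatever cross compiler is at hand:

```shell
# Hypothetical cross-compiler name; substitute the real prefix.
CC=m68k-linux-gnu-gcc

cat > fptest.c <<'EOF'
double scale(double x) { return x * 2.5; }
EOF

# Default: the compiler may emit m68881-style FPU instructions
# (fmove, fmul, ...), which the ColdFire FPU handles differently
# or not at all.
$CC -O2 -S -o fp-hard.s fptest.c

# With -msoft-float, floating point is lowered to libgcc calls such
# as __muldf3, so no FPU opcodes appear in the output at all.
$CC -O2 -msoft-float -S -o fp-soft.s fptest.c

# Compare: only fp-hard.s should contain FPU mnemonics.
grep -E 'fmove|fmul' fp-hard.s fp-soft.s
```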
Additionally, the kernel emulation bits don't have to be the end
state; but I think they can be implemented a lot faster than the
needed toolchain changes (for the final option above), so they would
get these boards into use sooner.
Of course, all that depends on us actually going ahead with the hybrid
option.
> You should give the CodeSourcery cross gcc a try. It uses glibc 2.5 and
> a gcc 4.1.x.
>
> > What's in subversion. I know, that isn't much, and what's
> > there is incomplete and already outdated; but I had to do
> > this in my spare time, and for now I've been mostly poring
> > over documentation, really.
>
> So to conclude: we need a working and more or less up-to-date port of
> glibc to v4e, or some generic runs-on-all-ColdFire mode. But as Linux
> is only useful with an MMU, we can safely assume v4 code and should
> optimize for these targets.
Yes; we would *definitely* need an MMU, so non-MMU ColdFire is totally
out of the question. I'm not even thinking of supporting that :-)
> With such a glibc we can set up a cross gcc, and finally create a
> native compiler with this cross compiler. You will use a cross
> compiler most of the time (it's faster on modern PCs). But a
> host-native gcc is some kind of must to provide a distro.
Indeed. Not just because the distribution needs it; also because to
build the 10 gigs of Debian, you definitely need a native compiler --
while some packages do support it, there is no requirement that Debian
packages be cross-compilable.
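The bootstrap chain described above maps onto autoconf's
--build/--host/--target conventions. A rough sketch, where the
m68k-linux-gnu triplet, the build-machine triplet and all paths are
assumptions, and where binutils, kernel headers and the staged glibc
build are omitted for brevity:

```shell
# 1. Cross compiler: runs on the build machine, emits m68k/ColdFire
#    code.
mkdir build-cross && cd build-cross
../gcc/configure --build=i686-pc-linux-gnu \
                 --host=i686-pc-linux-gnu \
                 --target=m68k-linux-gnu \
                 --prefix=/opt/cross
make all && make install
cd ..

# 2. Use /opt/cross/bin/m68k-linux-gnu-gcc to cross-build glibc into
#    the target sysroot (omitted here).

# 3. Native compiler: cross-built with the compiler from step 1, but
#    configured to both run on and generate code for m68k-linux-gnu.
mkdir build-native && cd build-native
../gcc/configure --build=i686-pc-linux-gnu \
                 --host=m68k-linux-gnu \
                 --target=m68k-linux-gnu \
                 --prefix=/usr
make all && make install DESTDIR=$PWD/../sysroot
```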
--
<Lo-lan-do> Home is where you have to wash the dishes.
-- #debian-devel, Freenode, 2004-09-22