Ideas for separation of "base system"
-
- Posts: 4
- Joined: 2008-08-04 23:02
Ideas for separation of "base system"
Here is a suggestion for Debian development, if you'll forgive my audacity:
I love the stability and reliability of Debian's stable branch, but there is a well-known trade-off in how old the packages are. For me, the age is mostly fine, but there are a handful of applications which I feel I must upgrade manually (OO.o, Sauerbraten, Ardour...).
Perhaps Debian could borrow conceptually from FreeBSD in this regard, by separating the "base system" (kernel plus userland) from the "applications". Ideally the "applications" would get cutting-edge updates, but the "base system" would stick to the stable-branch policy, such that only security and bug updates are applied, thereby retaining (much of) the stability and security that Debian is famed for.
Stop me if I'm wrong, but implementing such a scheme might be done by changing the repo categories from "main, contrib" to "base, main, contrib" and changing the branches from "stable, testing, unstable" to "stable, stablenhalf, testing, unstable", where stablenhalf would not have a base repo category. The traditional system-wide stable config could also be preserved as an option. This would mean that stablenhalf would receive package updates from unstable just like testing, but they would be compiled against the stable base system. This assumes that most packages can still be compiled against the latest stable base, but my impression is that this holds the vast majority of the time.
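To make the idea concrete, a sources.list under this scheme might look something like the following. (Purely hypothetical: the "base" component and the "stablenhalf" branch are made-up names from this proposal, not anything Debian actually ships.)

```
# /etc/apt/sources.list -- hypothetical layout for this proposal
# "base" (kernel + userland) follows the stable policy:
deb http://ftp.debian.org/debian stable base
# applications follow the faster-moving "stablenhalf" branch,
# built against the stable base system:
deb http://ftp.debian.org/debian stablenhalf main contrib
```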
Another way to implement this might be to increase the granularity of apt, such that different update policies can be applied to different package categories. This seems non-ideal to me, however, because it radically increases the number of possible end-user configurations, tends towards package incompatibility, and the resulting bugs may become unmanageable.
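For what it's worth, apt's existing pinning mechanism already allows per-package policies along these lines; a rough sketch of keeping stable as the default while letting a few named applications track testing (the package name is just an example):

```
# /etc/apt/preferences -- everything defaults to stable...
Package: *
Pin: release a=stable
Pin-Priority: 900

# ...but this application is allowed to follow testing
Package: ardour
Pin: release a=testing
Pin-Priority: 990
```

This already hits the objection above, of course: every user who pins a different set of packages ends up with a different configuration, with the testing and incompatibility problems that implies.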
It looks like this mindset has already been adopted for hardware drivers, which I am excited to see, but it's my humble opinion that it would be a benefit to many (the majority?) of users to have this done for applications as well.
Thanks for your consideration!
Chris
- industrialpunk
- Posts: 731
- Joined: 2007-03-07 22:30
- Location: San Diego, CA, USA
Re: Ideas for separation of "base system"
I suspect the greatest problem would be in defining what a "base system" consists of. Should it include all the X and graphics libraries (GTK, QT, Cairo, Pango, etc)? If not, then you are left with pretty much the core basics which don't typically change rapidly enough to be a problem.
If those libraries are to be "locked in" by a distro, then you will find upstream projects will basically ignore that distro. Developers aren't going to avoid availing themselves of a library's feature just because some distro has decided not to upgrade to a newer version of that library.
If you decide to backport such libraries, you are basically replicating the efforts of the distro's testing branch within its stable branch. If you have the manpower then this would be fine; but it seems to me that taking away development efforts from Testing to maintain Stable would tend to increase the bugginess of both Stable and Testing.
Also, keep in mind that with Unix-type systems the distinction between an application and a library can be very thin. Programs such as FFmpeg and MPlayer (not to mention all the GNU tools) are quite commonly employed as "library subroutines" within other programs.
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian Kernighan
I really think developers should change something.
For instance, instead of backports.org being an additional project, it could be -stable.
Not everything needs to be that stable. Who *really* wants stability? And who can afford Red Hat?
saulgoode wrote: If those libraries are to be "locked in" by a distro, then you will find upstream projects will basically ignore that distro.
That's exactly what happens with -stable.
Yes, I think you can freeze only the libraries. When a piece of software can't be compiled against your old libraries, then you can begin to maintain and freeze that software exactly as Debian does in Etch or Sarge, etc.
saulgoode wrote: I suspect the greatest problem would be in defining what a "base system" consists of. Should it include all the X and graphics libraries (GTK, QT, Cairo, Pango, etc)?
What is an operating system? An OS is kernel + libc + X + a window/desktop manager.
I don't know what Debian is. Debian is the kernel as well as a little game like Tetris.
If Debian had focused on fewer packages, maybe the OpenSSL problem would not have happened.
- FolkTheory
- Posts: 284
- Joined: 2008-05-18 23:02
dude, you don't even know what you're talking about. you mention 50 different things (wtf, tetris?) and then your solution to all security problems is to have fewer programs... well, no crap, sherlock. if we had no openssl we wouldn't have had an openssl vulnerability, but guess what? we'd have no users either!
then you say nobody wants stability... where the hell did you get this idea from? the people that run stable are very much interested in stability!
FolkTheory wrote: if we had no openssl we wouldn't have had an openssl vulnerability
I did not say that. What I said: if you only backport security fixes for a base system (and OpenSSL can be in the base system), you have more developers to check fewer packages.
For the non-essential applications, you can upgrade to the latest upstream version (if possible), instead of backporting.
FolkTheory wrote: then you say nobody wants stability... where the hell did you get this idea from? the people that run stable are very much interested in stability!
I think Debian does -stable but does not know what its target is.
The people who need stability (stability for Debian means the software doesn't change its behavior) are server administrators for very critical sites (i.e. very few people; not everyone administrates the NASA servers).
I don't think Apache 2.2.9 is less stable than Debian's Apache 2.2.3-4+etch5.
I don't think it matters if non-essential applications crash during an update, because:
1) All your system still runs.
2) You know where the problem is (since the libraries stay frozen).
3) You can simply downgrade.
4) It will be the upstream's fault. Users can understand that.
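For point 3, downgrading with apt looks something like this (an illustrative session; the package name and version string are just examples):

```
# list the versions apt knows about, then install a specific older one
$ apt-cache policy ardour
$ apt-get install ardour=2.4.1-1
```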
So you think we should abandon a TRIED AND TRUE development model to ape the BSDs? Most of the base system doesn't change very often (being recompiled against newer libraries is probably the biggest change some of it has gotten in years), and having a frozen target lets the security team handle the tens of thousands of packages well enough.
ciol wrote: I don't think it matters if non-essential applications crash during an update, because: 1) All your system still runs. 2) You know where the problem is (since the libraries stay frozen). 3) You can simply downgrade. 4) It will be the upstream's fault. Users can understand that.
1) The hardest problems (to find and to fix) are not that an application "crashes"; they are that an application runs "wrong". If a project asserts that you should use version 2.8 of a library, that typically means that using version 2.10 hasn't been tested, that it has been tested and something failed, or that there is a known conflict in the API or behavior of the library which has not yet been addressed.
2) Therein lies the rub. You can't upgrade Apache from 2.2 to 2.9 unless you also upgrade the libraries. If you upgrade the libraries that Apache uses, you introduce problems for other programs which used the older library versions. If you don't upgrade the libraries, you can't upgrade Apache.
4) What would be upstream's fault? The fact that they depend on newer libraries than a distro provides? Or that a distro provides newer libraries than the upstream specifies? Either way, to see it as upstream's fault is to suggest that it is the responsibility of upstream projects to follow the dictates of a distribution (which one?).
BioTube wrote: Debian's model's worked excellently. If it ain't broke, don't fix it.
I don't think it works excellently. A lot of people use -testing because -stable is too obsolete, but they should not use -testing either.
Can you explain why all the window managers are frozen in -stable? I don't think an administrator needs e.g. ion in -stable.
saulgoode wrote: If a project asserts that you should use version 2.8 of a library, that typically means that using version 2.10 hasn't been tested, that it has been tested and something failed, or that there is a known conflict in the API or behavior of the library which has not yet been addressed.
2.8 is compatible with 2.10. If not, you can change the soname or send a patch to the library developers.
saulgoode wrote: You can't upgrade Apache from 2.2 to 2.9 unless you also upgrade the libraries.
But you can do what I said: upgrade from 2.2.x to 2.2.9 without upgrading the libraries.
saulgoode wrote: What would be upstream's fault?
You did not understand.
In a separated base system, if a non-essential application like Firefox has a new bug after an upgrade, it will very unlikely be Debian's fault.
On the contrary, in e.g. -testing, since you upgrade more packages at a time, it's more difficult to find the problem.
saulgoode wrote: You can't upgrade Apache from 2.2 to 2.9 unless you also upgrade the libraries.
ciol wrote: But you can do what I said: upgrade from 2.2.x to 2.2.9 without upgrading the libraries.
I apologize for misreading your version numbers; but regardless, if the changes between versions are security or bug fixes, they are backported to Stable releases. If they are changes in functionality and insignificant, then why backport?
If they are changes in functionality and significant, they will require testing before being backported. This testing takes away developer resources from the testing being done in the Testing branch and will delay the release of the next stable.
ciol wrote: You did not understand. In a separated base system, if a non-essential application like Firefox has a new bug after an upgrade, it will very unlikely be Debian's fault.
And in a non-essential application like GIMP, upgrading to a newer version may require upgrading about 50 different libraries. Are the libraries upon which GIMP depends to be considered essential, in which case what you propose can't be done; or non-essential, in which case all other programs which use any of those same libraries must be re-tested?
ciol wrote: On the contrary, in e.g. -testing, since you upgrade more packages at a time, it's more difficult to find the problem.
And the good people who contribute to Testing know what to expect. They are familiar with the applications they are testing and often know how to use debugging tools to provide useful feedback to the developers. At a minimum, they are familiar with the provided mechanisms for reporting problems and understand the type of information developers require (and if such is not the case, they will quickly be educated about it).
saulgoode wrote: If the changes between versions are security or bug fixes, they are backported to Stable releases. If they are changes in functionality and insignificant, then why backport? If they are changes in functionality and significant, they will require testing before being backported.
I think the guys from the Apache Foundation are strong enough to trust. The more you trust the upstream, the less work you have as a distribution maintainer.
saulgoode wrote: And in a non-essential application like GIMP, upgrading to a newer version may require upgrading about 50 different libraries.
In a bump from The GIMP 2.2 to 2.4, maybe. But look: Debian has The GIMP 2.2.13 in Etch. The last release from the 2.2 branch is 2.2.17. I think you can safely and easily upgrade in this case. That's all I'm trying to say.
If The GIMP released 2.2.17, there are some reasons. I don't know why we should ignore them.
saulgoode wrote: Are the libraries upon which GIMP depends to be considered essential, in which case what you propose can't be done; or non-essential, in which case all other programs which use any of those same libraries must be re-tested?
It's hard to say. It's something that can be discussed for each library.
Have you considered that maybe Debian Stable is not for you?
Instead of trying to change something that's not for you, try to find something that is. Out of 400 distros I'm sure you'll find something appropriate, but if you still don't find something you fully like and you still like Debian, then maybe you should give them the benefit of the doubt; maybe, just maybe, they do things right.
Ubuntu hate is a mental derangement.
ciol wrote: I thought one of the priorities of Debian was its users. If not, they should remove "The Universal Operating System" from their main website.
The main priority of Debian is its users, not you specifically. But whatever; if you think you speak for most of the users, I will let you believe that...
But again, if you don't like how Debian does things, you should probably use something else. Why not BSD, since you seem to appreciate how they do things?
ciol wrote: In a bump from The GIMP 2.2 to 2.4, maybe. But look: Debian has The GIMP 2.2.13 in Etch. The last release from the 2.2 branch is 2.2.17. I think you can safely and easily upgrade in this case. That's all I'm trying to say. If The GIMP released 2.2.17, there are some reasons. I don't know why we should ignore them.
I agree with you here. It would seem reasonable to provide a package to update GIMP to version 2.2.17. But this is permitted under the existing updates policy, and the failure is most likely an oversight on the part of the package maintainer.