Quite the war plan, or how proprietary software corps strike

jobezone
Posts: 214
Joined: 2005-06-12 07:20
Location: Portugal

Quite the war plan, or how proprietary software corps strike

#1 Post by jobezone »

Take a look at http://autopackage.org/NOTES, from autopackage's site.

It's quite an interesting read about one possible future of free software. Starting from the goal of making "desktop Linux software installation easy", especially "for migrants [ex-Windows users]", and from other potential "problems", such as proprietary game vendors being "often not interested in fixing bugs after [the release] date [of their games]", it argues that some technical changes, and a "change in the open source culture", are required.

Since this could affect us all, I'm posting it here, as you may find it an interesting read. (I love reading this stuff. It's the same with the Economist.)
Making desktop Linux software installation easy
28th September 2004

Isn't it easy already? No ... technologies like apt have serious
drawbacks we should recognise:

* Centralisation introduces lag between upstream releases and actually
being able to install them, sometimes measured in months or years!

* Packaging as separate from development tends to introduce obscure
bugs caused by packagers not always fully understanding what it is
they're packaging. It makes no more sense than UI design or artwork
being done by the distribution.

* Distro developers end up duplicating effort on a massive scale. 20
distros == the same software packaged 20 times == 20x the chance a
user will receive a buggy package. Broken packages are not rare:
see Wine, which has a huge number of incorrect packages in
circulation, or Mono, which suffers from undesired distro packaging, etc.

* apt et al require extremely well controlled repos, otherwise they
can get confused and ask users to provide solutions manually: this
requires an understanding of the technology we can't expect users
to have.

* Very hard to avoid the "shopping mall" type user interface, at
which point choice becomes unmanageably large: see Synaptic for a
pathological example of this. Better UIs are possible but
you're covering up a programmatic model which doesn't match the
user model, especially for migrants.

* Pushes the "appliance" line of thinking, where a distro is not a
platform on which third parties can build with a strong commitment
to stability but merely an appliance: a collection of bits that
happen to work together but may change in any way tomorrow: you can
use what's on the CDs but extend or modify it and you void the
warranty. Appliance distros have their place: live demo CDs, router
distros, maybe even server distros, but not desktops. To compete
with Windows for mindshare and acceptance we must be a platform.

Apt does have its place: useful for servers, core OS packages (see below)

We want to support a variety of UI paradigms: MacOS X style drag/drop,
setup.exe, apt-get install foo - all of these are possible in a
distributed world

IMPLIES:

- A way for maintainers to build easy to use universal binary
packages in a decentralised fashion (but which can still
perform dependency resolution)

IN PROGRESS: autopackage, ZeroInstall
ALREADY EXISTS: Loki Setup, BitRock etc (no dep resolution)

- A unified platform to reduce the strain on the dependency
resolvers and increase reliability

consistent/coherent API policy, lack of API duplication,
centralised management not required; win32 shows us that
a platform doesn't have to be coherent to be widely used

basically means a large set of functionality that you can
get with a single expressed dependency. Either your distro
provides these packages or it doesn't, in which case you
have to do dep resolution manually. Backwards compatibility
is managed, ie if you depend on Platform v1.0 then
Platform v1.2 will satisfy this.

Can be expressed via virtual dependencies in a repository
using today's technology (apt et al), so it does not require
distro co-operation; a sketch follows below.
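
A rough sketch of that technique using hypothetical package
names ("desktop-platform" etc.); versions are encoded in the
package name because apt's virtual packages are unversioned:

    # Hypothetical control stanzas. The metapackage for platform
    # release 1.2 also Provides the older virtual names, so a
    # dependency on 1.0 is satisfied by 1.2:
    Package: desktop-platform-1.2
    Depends: libgtk2.0-0, libglib2.0-0, libpng12-0
    Provides: desktop-platform-1.1, desktop-platform-1.0

    # An application then expresses a single dependency:
    Package: someapp
    Depends: desktop-platform-1.0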

NOT STARTED: the LSB does not meet the above requirements
(size, informal backwards compatibility efforts, etc.).

- Sucking the bulk of distribution policy upstream, eg init
scripts, config files (debian .d dirs), rather than having it
expressed via proprietary packaging policy.

This is already underway, eg with freedesktop.org menus
replacing each distro's/desktop's customized menu
scheme. Currently not widely implemented; Fedora forked
Gnome to add it but upstream has more stringent requirements
- no action on this as of writing. Long term it will (hopefully)
replace vFolders, the Debian/Mandrake Menu System etc.

Need forum like xdg-list but for distro developers rather
than desktop developers where policy can be standardized:
currently does not exist, no plans to create one.

NOT STARTED: no distribution developer forum exists

- Desktop integration via a dedicated PAL (packaging
abstraction layer): allows us to drive good package
management UI deep into desktop without compromising
portability: despite different implementations primitive
operations mostly identical therefore ripe for abstraction

DECENTRALIZED BINARY PACKAGING

- Examples in use today: Loki installers, self extracting
archives, distro neutral RPMs (eg, Sun JRE). Typically lack
dependency resolution but not needed due to highly defensive
coding style (lots of dlopen action, no assumptions about
base system)

- Future possibilities: autopackage, Zero Install, LSB RPMs

- autopackage is interesting because it provides a good user
experience right out of the box: people using unmaintained
alpha autopackages of Gaim from over a year ago because "it
just worked" and "I couldn't figure out how to install it any
other way".

=> Many Linux users without much technical ability, major
change from the past

=> the "auto" in autopackage is important to people

However, InstallShield type UI *not* long term goal.

Complex solution to a messy world.

- ZeroInstall is interesting as it has a MacOS/RISCOS style
appfolder UI system working today, and because it's
conceptually clean. However, requires installation of a
kernel module (so initial install can be difficult) and does
not integrate so well with existing distributions.

- These two have another advantage: they are not native to any
pre-existing distribution so can be hated equally by all

- Binary portability not so simple, technique used today is
build on RH6.2 box and pray, technique used by
autopackage/ZeroInstall is apbuild.

apbuild is "policy in a box"; it will evolve and adapt over time
to match people's needs. Essentially about altering upstream
decisions.
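
A sketch of what that looks like to a packager, assuming
apbuild's compiler wrappers (apgcc/apg++, per its docs at
the time):

    # Route compilation through apbuild instead of plain gcc so
    # the resulting binary avoids depending on symbols newer
    # than the chosen baseline system:
    CC=apgcc CXX=apg++ ./configure --prefix=/usr
    make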

- Big issue is library maintainers not following best
practices like symbol versioning, header versioning,
parallel installability
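
A sketch of the symbol-versioning practice being asked for,
using a GNU ld version script (the library "libfoo" and its
symbols are illustrative):

    # libfoo.ver -- exported symbols get a version node; a later
    # libfoo can change foo_open under a LIBFOO_2.0 node while
    # still shipping the LIBFOO_1.0 implementation for old binaries.
    LIBFOO_1.0 {
        global: foo_open; foo_close;
        local:  *;
    };

    gcc -shared -fPIC -Wl,--version-script=libfoo.ver \
        -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c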

- Key libs break backcompat too often (eg, OpenSSL)


- Meets resistance from those who believe there's nothing wrong
with the old way. eg, "project policy is that we don't
provide binaries", "compiling software is more pure, it's the
open source way", "it's the job of the distribution to
provide binaries" etc etc

- Plan is not to replace apt and distro packaging but to
complement them, distro packaging becomes only for "core"
software. Good interop between neutral and native solutions
needed.

What is core?

UNIFIED BASE SET:

- Could be an upstream project like any other, or could be
freedesktop style informal agreements.

- Most distros already package most stuff that'd be in such a
base set anyway, so key challenges are to:

- manage versioning (ensure *-compat packages are always available)
- ensure it's easy to install the base packages, ie by providing
package repositories and virtual dependencies
- ideally get it into the distro "base sets" where that makes sense
(fedora, mandrake, suse ...)

- convince people it's a good thing to do!

- Linux Standard Base:

- Different approach, based on specifications of libraries rather than
specifying individual package versions. Reasoning: allows forking,
blocks distro patching which modifies API/ABI.

- Good:

* Formal standard people can write test suites for
* Support from large [proprietary] ISVs

- Bad:

* Not realistic to specify ABI for every library we need
in unified base set to be competitive, doesn't achieve
much anyway once you move beyond the lower levels,
ABI spec only a subset of what's needed

* Moves too slowly

* Standardises existing practice, ie is not able to stop
NPTL style breakage

* Difficulties w.r.t conflicting goals between LSB and
upstream, for instance C++ ABI vs ISO

* Blocker: experience shows that real world software developers
don't write to specs, they write to implementations: very few
examples where this is not the case. Therefore attempt to specify
eg GTK+ ABI doomed to fail w.r.t backcompat in long run:

a) Too much behaviour unspecified: callback ordering,
handling of error conditions, sorting etc

b) People end up depending on behaviour accidentally anyway:
construct only GObject properties

- People don't always deserve to lose because they wrote buggy code:

- Often docs are unclear or exact behaviour is
unspecified. GObject properties case: no official specs at all

- No other way: how many people write their code to be
independent of callback ordering/multiple callbacks?

- Everyone writes buggy code. People are expected to
support each other even when they screw up.

- Ultimately it's the user who loses

- Potential solution? A mostly informal, flexible base set
maintained by a standard hierarchical team (rather than a
committee) that is willing to patch upstream binaries in
order to maintain compatibility with pre-existing code: the
Microsoft approach

- Not as extreme: the majority of software on the free
desktop is free software and can be upgraded for zero
dollars (automatically?) even if it's a pain for the user,
which affects the cost/benefit analysis of breaking changes

- However, basic theory is the same: don't break
compatibility, this is more than ABI/API stability and
includes things like buggy behaviour apps depend on.

- We want to support games, which are almost always
proprietary and, worse, have a max shelf life of a few
months: vendors often not interested in fixing bugs after
this date, even if people are still playing years after
release

- Advantage: can be fast moving, can do things a formal
organization would frown upon, can JUST DO IT

- Disadvantage: lacks legitimacy

- Packagers "opt in" to the shared base set by setting their only
dependency as "base-set > 1.0", and by adding the rpath
${ORIGIN}/../lib/base-set to their binary to allow for upstream
overrides; see the link sketch below
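
A sketch of the corresponding link step (hypothetical app; the
quoting matters, since ${ORIGIN} must reach the dynamic linker
literally so it can be expanded at load time relative to the
binary's own location):

    gcc -o someapp main.o \
        -Wl,-rpath,'${ORIGIN}/../lib/base-set' \
        $(pkg-config --libs gtk+-2.0)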

- Only applies to userspace; the kernel is fundamentally screwed
from a stability (== ease of use) perspective, *however* as the
kernel becomes ever more mature it becomes more practical to
simply fork a stable series and backport non-breaking
changes/driver upgrades. The kernel is mature; the biggest reason
to upgrade is improved hardware support.

- Alternative: attempt to abstract subset of unstable
kernel ABI behind a stable one provided by a
module. Fighting a losing battle?

- Software is not art, hacks OK if they serve the user and
don't compromise future engineering.

- Only keep driver compat in stable series

SUCKING DISTRO POLICY UPSTREAM

- Need equivalent of xdg-list for distributions, need more
manpower devoted to actually implementing and ratifying
standards, eg recent files spec, menu spec mostly in limbo.

- A standard that is only implemented "sometimes" or
"occasionally" is just as bad as no standard at all,
therefore implementability crucial in design phase otherwise
you end up with the menus problem.

- Examples of distribution policy:

- Dealing with upstream goofs, eg libpng parallel
installability, glibc non-releases
- Extensions to standard config files, /etc/modules
- Co-ordinating potentially breaking changes like NPTL
- Obvious stuff: init scripts, shared config ui (overlap
with xdg), maybe even co-ordinated branding.

- new freedesktop list appropriate forum, or new project entirely?

- Can be hard to get distros involved, often distro developers
out of sight/not active in community. Let's get Havoc to do
it! :)


-------------------------------- PROBLEM ----------------------------------

- If versioning policy is moved upstream, and packaging
decentralised WHAT IS A DISTRIBUTION?

- Some distros define themselves through their packaging
policy, others through their desktop, others through their
support (mix'n'match to your pleasure), others are highly
specialised (eg knoppix live CD, router CDs, embedded
distros etc)

- focus is on standardising the platform, not the higher levels:
nobody cares if you prefer compiling your own X server, or use a
different icon theme, as long as it's compatible with
everybody else's and can be depended on via a shared base set

- will meet resistance from those who believe their
preferred distro is the one true way: "why do people
complain, just use Debian" mentality.

- we all have more interesting things to do than reinvent the
distribution wheel over and over: "wouldn't you rather be
hacking anyway?"

DESKTOP INTEGRATION

- Desktop projects are portable, therefore requires an
abstraction that allows them to maintain that
portability. Do what HAL is doing for hardware, but for
packaging: PAL!
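
A hypothetical sketch of the primitive operations such a PAL
might expose; the "pal" command and its verbs are illustrative
only, the point being that each maps onto a near-identical
native operation:

    pal install gaim    # backend: apt-get install gaim,
                        # urpmi gaim, rpm -i ..., autopackage
    pal remove  gaim
    pal status  gaim    # installed? which version?
    pal resolve gaim    # what would installing it pull in?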

- 'Ideal' UI varies between users: drag/drop appfolders,
apt-get install foo, setup.exe, different paradigms but all
supportable if we're smart

- drag'n'drop a la MacOS X/RISC OS, except that it actually works

- MacOS X is not consistent: Apple's own software doesn't use
appfolder installs, a custom installer program ships with the
OS, and even then developers roll their own.

- RISC OS/NeXT never had to survive in the internet age of
100s of programs installed at once, exponential OS
complexity/integration etc.

- How? Extend the .desktop abstraction to include packaging
data, such that a desktop can take an arbitrary application
.desktop file and install the package it "points" to behind
the scenes, with no user interaction required.

Therefore drag from webpage to panel == download+install
operation. Can click icon in web page directly to launch it
(or from CD, or email, or IM conversation).
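
A sketch of what such an extended .desktop file might carry;
the X-Package-* keys are hypothetical (the X- prefix is the
spec's convention for non-standard keys), not part of any
ratified standard:

    [Desktop Entry]
    Name=Gaim
    Exec=gaim
    Icon=gaim
    Type=Application
    # Hypothetical packaging extension: enough metadata for the
    # desktop to fetch and install the package behind the scenes.
    X-Package-Name=gaim
    X-Package-Source=http://example.org/packages/gaim.package
    X-Package-MinVersion=1.0.0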

- Requires standards for packaging metadata, see PAL

- Not hard, requires several months of effort from the right
people, requires getting desktop projects on board (ie,
backwards compatibility with existing systems like apt).

- apt UI can still work in a distributed world: DNS type
hierarchy for software.
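
For instance (hypothetical; the reversed-domain names are
illustrative only), the familiar UI could stay while the
namespace goes global, so no central repository has to
arbitrate names:

    apt-get install net.sourceforge.gaim    # hypothetical
    apt-get install com.example.somegame    # hypothetical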

- Possibility for insertion of whitelisting: attack malware
by giving the user a warning if they try to install it
(optionally block it entirely)



- Can be done in parallel with other tasks.



******************************************************************
* None of this is technically difficult, it just *
* requires a change in the open source culture. *
******************************************************************


jobezone
Posts: 214
Joined: 2005-06-12 07:20
Location: Portugal

Re: Quite the war plan, or how proprietary software corps strike

#3 Post by jobezone »

You may also find interesting this blog post by a Debian developer about autopackage, along with its many comments, some of which (towards the end) are mine. I commented on it before reading this NOTES text.
http://www.netsplit.com/blog/tech/autopackage_II
