Ottawa Linux Symposium, Day 1

Author: David "cdlu" Graham

OTTAWA — The seventh annual Ottawa Linux Symposium was kicked off by LWN.net’s energetic Jonathan Corbet giving his
interpretation of the Linux kernel road map. Corbet was followed by Bert Hubert, who spoke about faster boot and application load times.

The official title of Corbet’s session was “A 2.6 Kernel road map (as
drawn by a blind man),” and he started by quoting Microsoft CEO Steve
Ballmer: “There is no road map for Linux,” and William Gibson: “The future
is here, it’s just not widely distributed yet.”

Corbet told us that there is indeed a Linux road map, and if you look deep
enough, you can find it. So, he said, he looked for it so we wouldn’t have
to.

He then proceeded to review the history of Linux kernel development with
us.

In the “Good Olde Days,” as he put it, odd-numbered kernel releases were
development releases and even-numbered ones were stable releases. Stable
releases were intended for production use and could be expected to work
there, while development kernels were to be used at your own risk.

With this system, the feature freezes imposed to stabilize a kernel could
mean that new features took literally years to reach an official release.
Patches could be dropped by their submitters in the interim, and as a
result development was not as efficient as it could have been. The lengthy
development cycle degenerated into a kind of slushy feature freeze in which
small features kept being accepted, then more small ones, until the freeze
had to be re-announced and re-imposed.

To counteract the long development cycle and the integration of new
features being delayed, Linux distributors began to include modified Linux
kernels with back-ported features, making their kernels not necessarily
compatible with official kernel.org kernels.

Kernel 2.6.0 was released in December 2003, with bug fix releases about
once a month. By OLS 2004 the kernel was up to 2.6.7, and in the year
since it has climbed to 2.6.12. In the first six months of 2.6
development, an ostensibly stable kernel series, 600,000 lines of code
were removed and 900,000 new lines were added, representing the
replacement of about a quarter of the kernel, said Corbet.

Corbet noted that Linus replaced the virtual memory subsystem in the
stable kernel tree last year, calling it an “implementation detail.”
Kernel 2.7, he explained, is not forthcoming. The development process of
2.6 has evolved into a new methodology in which the current kernel is
developed through the -mm and release candidate trees, supplemented by the
so-called “sucker kernels,” the 2.6.x.y stable bug fix trees. In essence,
every kernel revision now has its own development cycle instead of one
larger cycle for each major kernel version.

Under the evolved process, Andrew Morton was originally to maintain the
stable release while Linus worked on the development release, but the
opposite has come to pass: Linus releases and maintains the stable kernel
releases while Morton manages the development releases. Morton, said
Corbet, is bringing professionalism to the Linux kernel development
process, offering a voice that says “this is a good patch that is needed,
please add comments to it,” instead of ignoring a patch or never getting
to it.

Morton, explained Corbet, believes that anyone who takes the time to
write a patch for the kernel deserves at the very least a response. The
basic philosophy is that, metaphorically, no one who wants to cook a good
meal should be kicked out of the kitchen.

With the changed development system, useful patches are getting
into the mainline kernel releases more quickly, and the kernels released by
the distributions are increasingly looking like the kernels released by
the kernel development teams.

A major reason for the evolution of the Linux kernel development process
was the advent of BitKeeper, said Corbet. BitKeeper was the first source
code management system used for Linux kernel development, and with it
patches were no longer lost. Everything had a place to go, and a thorough
patch history was introduced.

On April 5, 2005, BitKeeper’s provider retracted the free version of the
client, forcing Linux to find a new management system. Two days later,
Linus released git, a quickly written program meant to serve the same
purpose. Two weeks after that, kernel 2.6.12-rc3 was released, managed
entirely with the freshly written git system.

Corbet’s presentation was interrupted, as he was discussing git, by a
chipmunk that pretty well stole the show, scurrying around in front of the
podium while audience members stood up to see it and tried to take
pictures.

A possible alternative to git is Mercurial.

What is the future of Linux? Corbet noted that even Linus does not have
an answer. The next kernel, he predicted, would come out in August and
include inotify and kexec: respectively, a system for notifying user-space
applications about file-system changes and a system for loading a kernel
from within a running kernel in emergency situations. The latter is the
subject of its own presentation on Thursday.
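
For illustration (this sketch is not from Corbet’s talk), a user-space
program might use the inotify interface along these lines; the watched
directory and event mask here are arbitrary choices:

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t len;

        int fd = inotify_init();
        if (fd < 0) {
            perror("inotify_init");
            return 1;
        }

        /* Ask the kernel to report file creation and modification in /tmp. */
        if (inotify_add_watch(fd, "/tmp", IN_CREATE | IN_MODIFY) < 0) {
            perror("inotify_add_watch");
            return 1;
        }

        /* Each read() blocks until events arrive, then returns one or
         * more variable-length inotify_event records. */
        while ((len = read(fd, buf, sizeof(buf))) > 0) {
            char *p = buf;
            while (p < buf + len) {
                struct inotify_event *ev = (struct inotify_event *) p;
                if (ev->len > 0)
                    printf("event 0x%x on %s\n", ev->mask, ev->name);
                p += sizeof(struct inotify_event) + ev->len;
            }
        }

        return 0;
    }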

Corbet spent a good deal of time explaining preemptible kernels and
various scheduling issues to be addressed in near-future kernels.

Upcoming kernels, Corbet expects, will address cluster file systems, FUSE
— file-systems in user space, software suspend, desktop support, video
device support, and a number of other things.

In the area of video device support, a question being debated and needing
an answer is who is responsible: X or the kernel? Does X continue to
control and configure the graphics card in a Linux system, or should the
kernel handle graphics drivers as it does the drivers for all other
hardware?

On the security front, the kernel is to get its own contact for security
issues. Trusted computing, which can be used to support digital rights
management, is being implemented in the Linux kernel, for better or for
worse.

One other problem being addressed, Corbet noted, is memory fragmentation.
Corbet’s slide show is available at LWN.net.

In the afternoon, Bert Hubert made a presentation on improving application
start times and system boot times.

Using a series of gnuplot graphs, he showed the time delay between disk
requests and responses, and where and when the hard drive heads were
reading the disk. He noted it takes about 20 seconds for Mozilla to load
on his laptop while reading only 20MB of data, something the disk should
be able to do in only one second. The graphs showed the data to be
scattered all over the hard drive and the program not reading it in any
efficient order, so the drive head spent most of its time seeking instead
of reading.

Hubert noted that in the time it takes a hard drive to move its head from
one place to another on the platter, the drive could instead read several
megabytes of data. Organizing data more rationally on a hard drive can
therefore reduce latency and improve system performance.
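
As an illustration of the point (a sketch, not code from Hubert’s talk),
the following program times a sequential pass and a scattered pass over
the same file. On a rotating disk the scattered pass is dominated by seek
time; note that for a fair measurement the page cache must be emptied
between passes, or the file must be much larger than RAM, or the second
pass will be served from memory. The test file path is hypothetical:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define CHUNK  4096
    #define CHUNKS 4096            /* 4096 x 4KB = 16MB in total */

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        char buf[CHUNK];
        int fd = open("/tmp/testfile", O_RDONLY);   /* hypothetical file */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Sequential pass: the disk head can stream without seeking. */
        double t0 = now();
        for (long i = 0; i < CHUNKS; i++) {
            lseek(fd, i * (off_t) CHUNK, SEEK_SET);
            read(fd, buf, CHUNK);
        }
        printf("sequential: %.2f seconds\n", now() - t0);

        /* Scattered pass: random offsets force a seek for most reads. */
        t0 = now();
        for (long i = 0; i < CHUNKS; i++) {
            lseek(fd, (off_t) (rand() % CHUNKS) * CHUNK, SEEK_SET);
            read(fd, buf, CHUNK);
        }
        printf("scattered:  %.2f seconds\n", now() - t0);

        close(fd);
        return 0;
    }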

Using several similar examples, Hubert demonstrated how applications, and
sometimes the kernel, waste time by not treating the hard drive
rationally, in some cases even reading the disk backward, which is very
slow and inefficient. He is releasing a rough kernel patch for studying
this behavior, to help improve kernel and application performance with
respect to disk read and write latency.

He noted that Linux’s boot cycle can be cut by 10 seconds on his computer
simply by disabling atime, the file-system feature that records the time
each file was last accessed; those bookkeeping writes add yet more wasted
seek time during boot.
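
For reference (this is not from Hubert’s slides), access-time updates can
be turned off per file system with the noatime mount option; a
hypothetical /etc/fstab entry might read:

    /dev/hda1   /   ext3   defaults,noatime   0   1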

Hubert offered some solutions to the problem of high disk latency. One is
dumping hard drive data to memory and reading it directly from there,
which can cost reliability if the on-disk and in-memory copies get out of
sync. Others include recording disk usage patterns and having the drive
read ahead, so that data is preemptively placed in memory before it is
needed, lowering latency when it is actually used, and reorganizing
binaries on the physical storage medium so they can be loaded more
rationally.
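
A minimal sketch of the read-ahead idea (again an illustration, not
Hubert’s code) uses the Linux-specific readahead(2) system call to pull a
file into the page cache before its real consumer opens it; the file name
is hypothetical:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        int fd = open("/usr/lib/libbig.so", O_RDONLY);  /* hypothetical */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        fstat(fd, &st);

        /* Read the whole file into the page cache now; a later open()
         * and read() by the real consumer is then served from memory
         * instead of seeking on the platter. */
        if (readahead(fd, 0, st.st_size) < 0)
            perror("readahead");

        close(fd);
        return 0;
    }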

The day wrapped up with a reception, scheduled for 20:00 and sponsored by
Intel. Doug Fisher got up to speak around 20:30 while hundreds of
attendees milled about, feeding on trays of snacks and drinking the
complimentary alcohol. They never completely settled down for his talk.

Fisher identified himself as the general manager of Intel’s Core Software
Division, specifically interested in a development group known as the Open
Source Technology Centre (no relation to us, the Open Source Technology
Group). Intel, claimed Fisher, uses Linux in its business and contributes
back to the Linux community.

He offered a laptop, described as having a 100GB drive and a 17″ display,
to the first person who could tell him how many Linux servers Intel has
deployed. After much uneducated estimating, someone guessed 50,000. Fisher
called it close enough and handed over the laptop, saying the real number
is 52,000.

Linux is used for 100 percent of the work involved in the development of new
processors at Intel, Fisher stated.

Intel introduced the first microprocessor in 1971, Fisher said. That first
chip included 2,600 transistors and, for the first time, put memory and
I/O handling directly on the processor.

Intel’s latest processor, he said, boasts 1.72 billion transistors.

The main point of Fisher’s presentation was that Intel supports
Linux and open source in and throughout its business and contributes
back to the community. Over the course of the presentation, he gave out
two laptops and two Palm Pilots to members of the audience for answering
minor questions of trivia.

When asked if the laptops he was giving out ran Linux, he said, “no, but I
bet they will within 24 hours.” He wrapped up his presentation to the
usual polite applause and closed his slide show to reveal the message
“Windows XP has locked your desktop,” prompting the loudest and most
sustained booing, from nearly everyone present, that I have ever heard,
followed by a member of the audience rushing to the front brandishing a
Linux installation CD, to widespread applause.

The very last presentation of the evening was by IBM’s Art Cannon. His
laptop, in contrast to Fisher’s, ran Linux, but perhaps demonstrated why
Fisher’s didn’t. After a lengthy battle with X to get a resolution
appropriate for the overhead projector, he launched his presentation,
entitled “How to talk to business people about the value of open source,”
which prompted an audience member sitting near me to remark: “not like
this.”

Problems resolved, Cannon began by explaining that he works in the
Sales and Distribution division of IBM and is not a technical person. He
told the audience that, in the Linux community, there are two kinds of
trust: granted trust, and earned trust. He said he had been granted trust,
but was hoping with his talk to earn the trust of the Linux community.

At ISPCon 1998, IBM first indicated a public interest in Linux by placing
a stuffed penguin on the roof of its booth. Jon ‘maddog’ Hall showed up,
innocently asking IBM staff questions about the stuffed bird on the roof
of the booth, testing the company’s knowledge and commitment, before
identifying himself as the president of Linux International.

Cannon quoted Miguel de Icaza: “How many barrels of oil
does our country have to export to pay for the operating system?”

Foreign investment in Linux and open source largely traces back to this
premise. Venezuela, for example, would take the question literally: to pay
for a Microsoft Windows license, how many barrels of oil have to be
exported?

China, for its part, said Cannon, is intent on joining the World Trade
Organization. As part of the process, it has to synchronize its
intellectual property laws with those of other countries and come into
line on property rights. Among other things, that would mean legalizing
its copies of Microsoft software, at an estimated cost of $32 billion paid
directly to Microsoft in licensing fees. Rather than going to that kind of
expense, China chose instead to invest in Linux.

Brazil, thinking along the same lines, announced a three-year plan to
switch 80 percent of its government systems to Linux, and funded the
project properly to accomplish it.

Cannon wrapped up by reiterating IBM’s commitment to release many
of its patents to the open source community.
