Blaming Intel for How the World Is

Article Source: MoblinZone Blog
October 27, 2009, 1:14 pm

Are you mad at Intel because you can’t use your favorite Linux distro on a CompuLab FitPC2, Acer Aspire One 751h, or perhaps an early Dell Mini netbook? Maybe you should think it through again.

All of these devices use Intel’s “Menlow” platform, comprising a Z-series Atom processor (formerly codenamed “Silverthorne”) and an SCH-US15W companion chip (formerly codenamed “Poulsbo”).

These parts were intended to help Intel enter the device market. So, Intel chose to incorporate best-of-breed device graphics, even though they require closed drivers. Some Intel customers then used these parts in PC-like applications, leading to end-user confusion and frustration, particularly among Linux users. Yet Intel’s not at fault, because it didn’t create the market conditions that have led to the acceptance of closed drivers in the device market.

That’s the short version. Here’s the long one.

What’s wrong with closed drivers in PC-like devices?

The Menlow platform’s companion chip — easier to call it Poulsbo, though that isn’t really its proper name anymore — integrates a PowerVR graphics processing unit (GPU) supplied by Imagination Technologies (ImgTec). This PowerVR GPU requires closed drivers in order to exploit its rather phenomenal, best-of-class 2D/3D hardware acceleration and HDTV display capabilities. These closed drivers have rubbed a lot of Linux folks the wrong way.

Why? Well, by design, the Linux kernel’s binary interface is dynamic. It can, and usually does, change significantly with each release. The programming interfaces (i.e., the C, Python, Perl, etc. hooks) are stable, but the binary interface is not (this is one of Linux’s advantages over other OSes, but that’s another story). So, a binary driver can typically be expected to work only with the specific kernel version it was compiled to support.
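
To make that concrete, here’s a minimal Python sketch of the kind of check a frustrated user ends up doing by hand. It compares the “vermagic” string baked into a prebuilt module (as reported by the standard modinfo tool) against the kernel that is actually running. The module path below is purely illustrative, not anything a vendor necessarily ships.

    #!/usr/bin/env python3
    # Minimal sketch: check whether a prebuilt (binary-only) kernel module was
    # built for the kernel you are actually running. The module path is just an
    # example -- point it at whatever .ko file the vendor shipped.
    import os
    import subprocess
    import sys

    MODULE_PATH = "/lib/modules/extra/psb.ko"  # hypothetical location of a binary GMA500 module

    def module_vermagic(path):
        """Return the kernel release string the module was compiled against."""
        out = subprocess.run(["modinfo", "-F", "vermagic", path],
                             capture_output=True, text=True, check=True)
        # vermagic looks like: "2.6.24-16-generic SMP mod_unload 586"
        return out.stdout.split()[0]

    def main():
        running = os.uname().release          # e.g. "2.6.24-16-generic"
        built_for = module_vermagic(MODULE_PATH)
        if built_for == running:
            print(f"OK: module was built for the running kernel ({running}).")
        else:
            print(f"Mismatch: module built for {built_for}, but you are running {running}.")
            print("A binary-only driver will generally refuse to load here.")
            sys.exit(1)

    if __name__ == "__main__":
        main()

If those two strings don’t match, the closed driver is, for practical purposes, useless on that kernel, and there’s no source to rebuild from.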

In the case of the PowerVR, a few binary drivers are freely available to Linux users. However, the emphasis here is on “few.” There’s a version for the kernel used in Ubuntu 8.04 LTS. If you have an early Dell Mini, or a FitPC2, that’s what you’re using. There’s a version for the kernel used in the new Mandriva One release slated for Nov. 3, and while I haven’t seen it yet, I understand there is or will be one available for the kernel in Ubuntu 9.10. These are available because the Canonicals and Mandrivas of the world have negotiated redistribution license deals with ImgTec. You can use any of these kernels on your Menlow-based device. However, stray beyond this tiny subset of Linux kernels — or heaven forbid, compile your own (does anyone do that anymore?) — and you’re SOL (sorely out of luck).

Take, for instance, Linux Journal editor Shawn Powers, who today wrote a brief but biting essay accusing Intel of “kicking its friends in the face” with its closed GMA500 drivers.

But is Intel really the party at fault, here?

Different markets, different realities

This is going to take some background. Bear with me.

In the PC world, all components of the system, from hardware peripherals to operating systems to applications, are subject to frequent user upgrades, replacements, and additions. The BIOS has to probe everything, just so it knows what memory banks and disks and peripherals it’s dealing with. The OSes are built in a highly modular fashion. In the case of Linux, there’s usually a two-stage boot-up, with a small initrd filesystem loading first, detecting the hardware, and then loading drivers and OS modules needed for the system as currently configured.
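
For a sense of how modular that arrangement really is, a few lines of Python reading /proc/modules (a standard Linux interface that lists every dynamically loaded driver) make the point; on a typical desktop install you’ll see dozens of entries, most of them loaded because the hardware probe found something. This is just an illustration, not anything distro-specific:

    #!/usr/bin/env python3
    # Illustration only: every line in /proc/modules is a driver or subsystem
    # that was loaded dynamically, mostly in response to hardware the system
    # detected at boot. Format per line: name size refcount deps state address.

    def loaded_modules(path="/proc/modules"):
        with open(path) as f:
            return [line.split()[0] for line in f if line.strip()]

    if __name__ == "__main__":
        mods = loaded_modules()
        print(f"{len(mods)} kernel modules currently loaded; a sample:")
        for name in sorted(mods)[:10]:
            print("  " + name)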

In the device world of embedded systems and consumer goods, though, there’s very little expectation that users will modify their hardware or their software. In fact, it’s safe to say there is no expectation at all that they will do so. Most devices are appliances, like toasters — they are not meant to be user-serviceable.

So, instead of a BIOS with a lengthy probe phase, there’s a bootloader purpose-built to initialize the hardware and hand control to the OS as soon as possible. The OS, too, can be configured ahead of time to support the exact chipset and peripheral mix that the device comes with. The upshot? Sub-second Linux kernel boot times are not uncommon in the device world. Compare that to your PC!
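
One way to see that difference in the kernel itself is to compare how many drivers are compiled in (=y) versus left as loadable modules (=m). The sketch below assumes the kernel config is exposed at /proc/config.gz (the CONFIG_IKCONFIG_PROC option) or available as /boot/config-<release>, which distro kernels usually provide; a general-purpose distro kernel will show a huge pile of modules, while a purpose-built device kernel bakes nearly everything in.

    #!/usr/bin/env python3
    # Sketch: count built-in (=y) versus modular (=m) options in a kernel config.
    # A general-purpose distro kernel leans heavily on "=m"; a purpose-built
    # device kernel compiles almost everything it needs directly in, which is
    # part of how it manages such fast boots.
    import gzip
    import os

    def read_config_lines():
        if os.path.exists("/proc/config.gz"):      # needs CONFIG_IKCONFIG_PROC
            with gzip.open("/proc/config.gz", "rt") as f:
                return f.readlines()
        with open("/boot/config-" + os.uname().release) as f:
            return f.readlines()

    def main():
        built_in = modular = 0
        for line in read_config_lines():
            if line.startswith("CONFIG_"):
                if line.rstrip().endswith("=y"):
                    built_in += 1
                elif line.rstrip().endswith("=m"):
                    modular += 1
        print(f"built-in options (=y): {built_in}")
        print(f"loadable modules (=m): {modular}")

    if __name__ == "__main__":
        main()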

If some of the device drivers are closed, what does it matter? The system is “embedded” — it’s tied closely to the actual hardware present on the platform — and the user is never expected to change anything about the core system, neither hardware nor software. Even the manufacturer might not ever expect to upgrade the firmware on the device, once it’s shipped. Closed drivers? Who cares!

No harm, no foul

Not only is there no significant penalty for closed drivers in the device world; sometimes they work out better. There’s a business advantage, in terms of vendor lock-in. If I’m a chip maker, my customer has to come back to me for a new driver or a source-level license (with non-disclosure agreement) when they begin working on a new product model or a firmware upgrade. In the thin-margin world of device parts, that kind of ongoing revenue stream might make the difference between getting by and having to lay off engineers.

Furthermore, graphics drivers, in particular, are a minefield of software and hardware patents — all the big graphics vendors seem to have perpetual ongoing litigation against all the other ones. So, engineers actually avoid looking at source code, because that’s the best way to prove you haven’t knowingly “stolen” someone else’s proprietary algorithm. NDAs for source access formalize the documentation around who has seen what, and when. Executives might see that as letting them spend more on engineers, and less on attorneys.

So, in a nutshell, PCs and devices are fundamentally different. In the device chip market, the expectations around driver openness are different than they are in the PC market. And, they’ve been that way for a long, long time — certainly long before Intel ever entered the market.

Different markets, but similar technology

Nevertheless, devices and PCs tend to use similar technology. Both have memory, storage, and peripheral interfaces, whether exposed to users or not.

I like to think of the difference as resembling that between bicycle technology and car technology. Both use pneumatic tires, chain-driven transmissions, sealed bearings, spoked wheels, incandescent and LED lighting, cable control systems, disk and caliper brakes, petroleum lubrication, and so on and on and on. Yet, the design criteria and implementations diverge, big-time. You wouldn’t expect to use car parts on a bicycle, or vice versa.

Yet, there is actually a lot of crossover between PCs and devices. Device makers like the economies of scale of PC parts. It’s just cheaper to build devices with mass-produced chips from the PC world. So, you have industry efforts like ATCA, aimed at taming PC parts for use in specialized telecom infrastructure gear. Or efforts like ATA-over-Ethernet, aimed at transporting gig-plus Ethernet into enterprise storage devices, where exorbitant Fibre Channel has long ruled.

On the other side of the fence, you can certainly see the appeal, for PC makers, of device parts. Generally speaking, the hardware in the device world is more interesting, more efficient, and just more magical. Crack the case of a device, and it’s hard not to marvel at the sight of chip-on-film interconnects, nano-ball-grid arrays, and near-invisible circuit board traces. It’s a whole other class of engineering.

To stretch the analogy, if you look inside a PC or even under the hood of a fine automobile, you may notice how crude everything is, with mass making up for poor design and cheap manufacturing processes at every turn. Now look at even a humble bicycle and marvel. There’ll probably be exotic steel or aluminum alloys, forged parts, and ingenious engineering everywhere. The better gear may have silver-brazed lugs, cold-forged alloy parts, and the most minimal, elegant design engineering.

In the case of Menlow, the appeal for PC-like device makers is generally in the superior graphics capabilities, although lower power draw and the lack of a need for a fan are kind of compelling, too. With closed drivers, Menlow’s GMA500 supports HDTV output, something Diamondville-based Atom netbooks can’t do (Ion-based ones are a bit better). So, when an Acer wants to go with a larger 11.6-inch display on a netbook, well… you can see why they’d need a better GPU.

Incidentally, those ImgTec PowerVR GPUs also power many of the best smartphones of today — some of which support HDTVs as external displays! That is some amazing hardware! Closed drivers aside, it’s at least as sexy as Campy SL Blacks.

Yet, bicycle parts are not meant for use in cars, and neither are embedded parts like Menlow really intended for PCs, which I’ve defined as devices (like netbooks and nettops) where users can be expected to modify the hardware, software, and application configuration. Yet, is it Intel’s fault if its customers wind up experimenting with its device parts in PC-like applications? Should it refuse a customer’s money? Isn’t this the era of PC/device market cross-pollination?

Intel’s influence on devices

Intel needs to diversify into devices to make it in the “post PC” world. It did not create the market conditions in the device sector. Instead, it has to live with them. In order to compete, it needs to use best-of-breed parts, like the PowerVR. It is too new to this market to have its own embedded GPUs.

Once Intel does have its own embedded GPUs, though, they may well have open drivers! It certainly has a long history of commitment to openness in the PC market, and I wouldn’t be surprised to see it exert some influence over the device market, too — eventually.

Wouldn’t it be marvelous to be able to upgrade the memory on your phone? (Actually, Intel’s StrataFlash line began with that exact goal in mind, if my own memory serves…). Wouldn’t it be cool to install a new OS on, say, your car stereo, if you wanted to? Personally, I think Intel can set a real example for the benefits of openness in the device world. At the same time, it’s early days for that company in this market.

The worst you can say about Intel’s role in the current Menlow-in-PC-like-device situation is that it made things pretty confusing for everyone by applying the same “Atom” branding to both the Menlow and the Diamondville platforms. It’s pretty dumb to use the same branding for products spanning different markets. Yet, as noted earlier, it is a time of crossover and cross-pollination for PCs and devices, so it’s hard to blame them too much. Besides, the far more common branding mistake is letting brands proliferate; it’s usually better to err on the side of fewer brands, since every extra brand is one more thing for consumers to remember.

As ever, it really comes down to caveat emptor. If you want to be a Linux user, there have always been extra requirements around that, in terms of finding hardware that’s compatible. Luckily, as noted, there are a few PowerVR (GMA500) drivers out there. If, like me, you are the proud owner of a Menlow-based device (I set up a FitPC2 at a local cafe, so they could add a PC without increasing their electric bill), I guess you just have to think of closed drivers as the price that comes with something so elegant, minimalist, and beautiful.

For now.

More about the FitPC can be found here. Shawn’s “kick in the face” story is here. There’s a nice picture of some Campy SL Blacks here.