Wednesday, November 28, 2007

New version of cgit

Lars Hjemli recently released a new version of cgit, the unbelievably fast web front-end for git. I just updated the cgit installation on and the end result is pretty nice. It still has good-looking, easy-to-type URLs such as

and a cool new feature is the ability to only show the 10 most recently active branches and tags in the summary page. The X server has over 50 old branches sitting around from the CVS import and they're typically not interesting. Also new is the sidebar, which has a very useful drop-down box for switching between branches, and the fast and useful search field.

Thursday, October 18, 2007

I'm in ur X server adding visuals!

Maybe you've wondered why, when you run xdpyinfo, the X server reports a bunch of identical-looking visuals. And in fact, for all xdpyinfo and X care, they are identical. However, to GLX, the visuals are all different and they give you different features when you use them with OpenGL. Some specify a double buffer, some have a stencil buffer, and some have buffers I don't really know anything about. glxinfo is a good way to find out. GLX version 1.3 and up introduces a new mechanism for specifying the configuration of the OpenGL drawable. Using a so-called GLXFBConfig, you can specify, more or less orthogonally to the visual you're using, the OpenGL properties of your drawable.

What I've been working on recently is cleaning up the way visuals are initialized and handled in the X server and disentangling GLX visuals and GLXFBConfigs. There's a lot of nice code clean-up landing, and the X server now has a lot fewer visuals by default. There's a new xorg.conf option that lets you pull in all the visuals, or a minimal set if you want. The git version of glxinfo (found in mesa in progs/xdemos) now lists GLX visuals and GLXFBConfigs separately, so you can see how the X server sets it up. It's bound to break something, but we'll fix it.

Sunday, August 19, 2007

Git survey

The git developers are doing a git survey again, follow this link to take it.

Thursday, August 16, 2007

Compiz and Fedora

There was a big discussion about enabling compiz by default in Fedora on the fedora-devel list a couple of days ago. Reading through the thread I realized that we never really communicated our plans here very well. I sent out the writeup below to explain a bit where we're going with Fedora and what the roadblocks are, but I figured it wouldn't hurt to also blog it.

We are working towards being able to enable a compositor by default.

Redirected direct rendering, Xv, Java problems - these are problems in the drivers and X server and affect all compositors. To get this done, we need to land a big chunk of code in the kernel (the DRM memory manager), and we need to update the X server, the 2D drivers and the 3D drivers to take advantage of the new memory manager. We need to fix Xv. Hopefully the OpenGL support will broaden to support more chipsets (nouveau, avivo). Getting all this to a shippable state may take years, and in the meantime, the composited desktop, whether it's compiz, kwin4 or something else, will only be an opt-in tech demo.

Once we get there, the default compositor will be compiz.

Some people can't get over the wobbly windows and spinning cube, and claim it's over the top or apparently get nauseous. At the core, compiz is just a very efficient OpenGL redraw loop with a flexible plugin architecture. Compiz allows burning windows and spinning cubes, but we ship it in a very toned-down default configuration that looks a lot like good old metacity. Consider xeyes; like wobbly windows, it's a neat tech demo and shows off the flexibility of X, but just because nobody runs xeyes for more than 2 minutes at a time, that doesn't mean that X (or shaped windows) isn't useful.

A valid point about compiz is that it is not yet a great window manager. Metacity has a few years of head start there, but that doesn't mean that we can't fix compiz or that it'll be duplicated effort. For now, our (Red Hat) efforts have gone towards fixing the big underlying shortcomings in the X and OpenGL stack, which has left compiz in Fedora with a set of minor (but certainly annoying) window manager bugs. Other spins (Fedora KDE or Fedora XFCE) can set up different default compositing managers, and they will of course benefit from the infrastructure work mentioned above.

We're not working on making metacity a compositing manager.

We tried it and decided that the metacity code base was not a welcoming place for a compositor. When we put down the work in metacity, the conclusion was to leave this very stable and well tested code base alone instead of injecting an OpenGL based compositor. The flip side of this decision is that we can now do a clean break in compiz and assume throughout the code that we're an OpenGL based combined compositing and window manager.

Compiz fusion and configuration tools.

desktop-effects is close to what we want to ship by default; perhaps we want to fold it into the new gnome-appearance-properties capplet as a new tab. The default compiz configuration and the set of options we expose outside gconf-editor are intended to be very conservative. At the same time, we want to allow installation of other compiz configuration managers and the compiz fusion plugins for people who like the tweak-ui kind of tools.

Power consumption and other hand-wavy FUD arguments.

Clearly running a 3D screen saver at the full frame rate is going to burn more battery than the blank screen saver, but that's not really relevant here. When idling, compiz and metacity use exactly the same amount of power. There are no code paths anywhere in the stack that "turn on 3d powerplanes". My bet is that it's more power consuming to wake up all applications to redraw their exposed areas under the window you're dragging than it is for compiz to just recomposite that out of textures already in video memory.

Wednesday, August 8, 2007

Redirected direct rendering

Also known as AWESOME. Normally, when an application renders to an X window, the contents appear in the framebuffer immediately. The COMPOSITE extension lets an application, typically the compositing manager, redirect that rendering to an offscreen pixmap. The compositing manager is then responsible for compositing the window contents into the framebuffer, for example by mapping it onto a cube or a wobbly window. All regular X rendering respects this redirection, since the rendering requests go to the X server, which can then direct them to whichever pixmap it wants.

However, the default rendering mode for accelerated OpenGL, direct rendering, is a little different. With direct rendering, the application goes through an initial setup sequence with the X server, to acquire the address of the framebuffer and the shared back and depth buffers. After this step, the application renders by programming the GPU directly to render into those buffers. This, of course, is a feature - direct rendering achieves a significant performance boost over having to send every request to the X server. However, the direct rendering application isn't aware that a window might have been redirected and renders to the framebuffer regardless. This is typically not what you want:


To me, this is one of the biggest problems in our rendering stack. I'm convinced that the composited desktop has to be a standard feature of the future Linux desktop: Whether or not you want spinning cubes and wobbly windows, there is no denying the benefits of flickerless redraws and ARGB visuals. Unfortunately, with a bug like this, we can't really enable it by default. Especially when interest in OpenGL backed toolkits (clutter and pigment) is starting to pick up.

What you want is to let the OpenGL driver know when a window is redirected and make it target its rendering to the offscreen pixmap. Conceptually, it's easy enough and we've known how to do it for a while, but there are a lot of details to get right. One of the big pieces of the puzzle fell in place with Tungsten Graphics' DRM memory manager (TTM). Dave Airlie and I added TTM support to EXA and the intel DDX driver, and from there I was able to make the DRI driver render into the TTM buffers allocated by EXA. And given that the pixmaps for the redirected windows are now backed by TTM buffer objects, we can add a zero-copy texture-from-pixmap implementation to the mix:


If you want to try it out, I have the code in the exa-ttm branch in my personal drm, xserver, mesa and xf86-video-intel repositories on It's not at all easy to set up and the current implementation is only a prototype. It will be a while before this stuff can get upstream; we need to discuss how we want the interfaces between DRI/DDX/DRM/EXA to look etc, but hopefully we can figure that out at XDS.

Tuesday, June 26, 2007

Death to xfs

That's xfs the X font server; I have no beef with the xfs filesystem. We've been using xfs for core fonts[1] since Red Hat 6, mainly because of the chkfontpath tool for editing the xfs config file. This tool lets us add and remove font directories in the xfs config file and restart xfs when necessary, so that a running X server will pick up new fonts as new font packages are installed.

While all that is nice, it's a pretty weak reason for running a system daemon, considering the maintenance overhead, security problems and boot time slowdown. So to get the best of both worlds, I wrote a patch to libXfont (which is what xfs and the X server use for locating and rendering fonts) to let it pick up font paths in an rc.d style way. Instead of relying on a config file for the font path, you can now additionally redirect to a directory of symlinks, such as /etc/X11/fontpath.d. The symlinks point to the directories that make up the font path, and the symlink names can include attributes such as :unscaled. libXfont will automatically detect when links are added and removed and rescan the directory.
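A scratch-directory sketch of how such a fontpath.d directory gets populated (the /etc/X11/fontpath.d location comes from the post; the font directory name here is made up):

```shell
# Work in a scratch directory; on a real system the directory of
# symlinks would be /etc/X11/fontpath.d.
root=$(mktemp -d)
mkdir -p "$root/fontpath.d" "$root/fonts/bitmap-fixed"

# Each symlink points at a directory of fonts; the link name can carry
# attributes such as :unscaled, which libXfont parses off the name
# before following the link.
ln -s "$root/fonts/bitmap-fixed" "$root/fontpath.d/bitmap-fixed:unscaled"
ls "$root/fontpath.d"

# Removing a font package is just removing its symlink; libXfont
# notices the change and rescans the directory.
rm "$root/fontpath.d/bitmap-fixed:unscaled"
```

The point of the symlink farm is that package install and uninstall scripts only ever add or remove one link, instead of rewriting a shared config file.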

No, core fonts are not going away. We can't ever do that; too much legacy. But since Fedora has the two required core fonts ("misc" and "cursor") compiled into the X server, you can have a usable setup with no core fonts (say, maybe you don't like bitmap fonts, or maybe you're OLPC), no xfs and no chkfontpath installed. And at the same time, if you must have that beloved bitmap Courier in your terminal or are stuck with an old motif app, install the core font you need and it will just work.

[1] Core fonts is the font mechanism originally designed into the X server. The X server locates and renders the fonts on behalf of the client. This requires the X server to know about parsing font files, glyph rendering and arbitrary encodings, which you typically would want to keep out of a privileged process. Modern applications locate fonts using fontconfig, render glyphs using freetype, and only upload the resulting images to the RENDER extension.

Wednesday, June 13, 2007

Setting up a cloned git repo

I was going to set up a clone of the xserver git repo in my home directory and I figured I'd document the steps. If you want the cheat sheet just skip to the last couple of paragraphs. First of all, to set up a repo I need ssh access to, but since I was going to put the repo in my home directory there, I already have that. Now, the official git repos are mounted read-only on /git, so the obvious thing to do is to say

krh@annarchy:~$ git clone --bare /git/xorg/xserver.git xserver.git
Initialized empty Git repository in /home/krh/xserver.git/
krh@annarchy:~$ du -sh xserver.git/
24M     xserver.git/

Most of this space is in the objects representing the files, directories and commits of the project history. Compared to the repo we cloned from, it's pretty efficient, as git compresses all those objects into a pack as part of the cloning process:

krh@annarchy:~$ du -sh /git/xorg/xserver.git/
135M    /git/xorg/xserver.git/

But given that both repos are on the same filesystem, we can ask git clone to share the underlying objects. This is a pretty clever trick that uses the fact that git objects are immutable. Once you've stored a file or an entire revision of your project as part of a git commit, that object never changes. It would be nice if git could just detect this and do the right thing, but you have to pass a couple of options:

krh@annarchy:~$ git clone -s -l --bare /git/xorg/xserver.git xserver.git
Initialized empty Git repository in /home/krh/xserver.git/
krh@annarchy:~$ du -sh xserver.git/
1.3M    xserver.git/

That's a lot better; we're down to 1.3M for my own little copy of the xserver repo. However, when you think about it, it's just a glorified symlink to /git/xserver.git, and 1.3M is a pretty heavy symlink. It's pretty easy to spot the problem:

krh@annarchy:~$ du -h xserver.git/
252K    xserver.git/refs/heads
932K    xserver.git/refs/tags
1.2M    xserver.git/refs

When we converted the xserver repo over, we of course imported all branches and tags, most of which now are just in the way—when was the last time anybody looked at xprint_packagertest_20041125? One solution is to just nuke the branches and tags you don't care about. But if you need to preserve this important historical information in your repo or are just too lazy to weed it out, you can say

krh@annarchy:~$ GIT_DIR=xserver.git git-pack-refs --all
krh@annarchy:~$ du -sh xserver.git/
124K    xserver.git/

Again, git should just do this by default in the same way it compresses the objects when cloning. As a final step, I want the gitweb script and the git daemon to pick up my new repo so I can browse it on (or cgit) and clone it using the git protocol. To do that I need to touch a special file in the repo:

krh@annarchy:~$ touch xserver.git/git-daemon-export-ok

which is just a way to say that it's ok to export this repository to the world. It typically takes a little while before the gitweb script finds your repo.

I'm sure there's somebody shaking their head now thinking, "gah, git is useless, look at all those commands", so to address that let me first sum up what you have to do to create your own repo and make it visible to the world:

krh@annarchy:~$ git clone -s -l --bare /git/xorg/xserver.git xserver.git
Initialized empty Git repository in /home/krh/xserver.git/
krh@annarchy:~$ touch xserver.git/git-daemon-export-ok

Second, this is my own repo; I can create all the branches I want and commit crazy stuff without affecting the upstream repo. I set it up without bugging any of the admins and I didn't even need xserver commit access. Third, if it turns out that I do useful work there, we can merge it back into the main repo, without losing any history, and without polluting the upstream branch namespace with branch names such as gah, doh and gahgah, which are among my favorite choices. Although, they'd fit right in with the rest of the xserver branches.
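For the curious, the sharing that -s sets up is visible in the clone itself: instead of copying objects, git records the source object store in objects/info/alternates. A throwaway demo with scratch repositories (not the xserver one):

```shell
# A small source repository to clone from.
src=$(mktemp -d)/src
git init -q "$src"
git -C "$src" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Shared, local, bare clone - the same flags as above.
dst=$(mktemp -d)/shared.git
git clone -q -s -l --bare "$src" "$dst"

# The clone holds almost no objects of its own; it just points at the
# source repository's object store.
cat "$dst/objects/info/alternates"
```

This is also why you should never prune or delete the source repository while shared clones of it exist: their history lives in the source's object store.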

Thursday, June 7, 2007

Version Control

Everybody is blogging about SCM tools, and since I have this soapbox at my disposal I might as well too. I've been using git for over a year now, and I think I've earned the right not to have my preference dismissed as flavor of the week at this point. And I feel that the time I've spent learning git has been more than recuperated, and I'm sure anyone who sits down and learns git will feel the same way within a month or so. But OMG it has over 100 commands in /usr/bin, there's this thing called the index, sometimes you have to edit a text file, and hey I got this scary looking warning, bla bla... still, I'll claim—and I don't do this lightly—that git is the single most useful software development tool I've learned in the past 10 years or so.

CVS is shit. Subversion is gold plated shit. Some people claim that SCMs don't matter and that we should just keep using what "works". I hear this from people who typically don't do much development anymore, and certainly haven't tried anything other than CVS or SVN. If you truly don't care about SCMs, get out of the way and let the people who care and have investigated the options pick a system that actively assists us in software development.

Centralized SCM advocates often think that it's either-or: either it's the SVN/CVS model, or it's the Linux model where everybody has his own tree, everybody pulls from everybody and nobody knows where the latest official version is. The way it works, though, is that most distributed systems provide a superset of the functionality of centralized systems, and can be set up—through configuration and convention—to work with a centralized repository. This is how most of the git repositories work.

People have been using cvsup, rsync or other mechanisms to copy and synchronize remote cvs repositories since long before distributed systems were available. git can clone an svn repository to a local git repository where you can commit and branch, and then eventually push those changes back to the upstream svn repository. However, if you're choosing an SCM for your new project, why would you choose a system that requires your developers to jump through these hoops to achieve that? Given that git can be set up with a central repository, ssh-accessible to a group of developers just like cvs and svn, there is no reason to cripple your co-developers' workflow just because you feel that distributed SCMs have nothing to offer.
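Setting that up is nothing more than a bare repository everybody pushes to and pulls from; sketched here with local paths standing in for the ssh:// URLs you would use for a real group of developers:

```shell
# "Server side": one bare repository acting as the central repo.
central=$(mktemp -d)/project.git
git init -q --bare "$central"

# Each developer clones it, commits locally, and pushes back - the
# same day-to-day shape as a cvs or svn checkout.
work=$(mktemp -d)/project
git clone -q "$central" "$work"
git -C "$work" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "first commit"
git -C "$work" push -q origin HEAD
```

The only real difference from svn is that commit and publish are two steps, which is exactly the extra flexibility the centralized tools lack.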

Another point often raised about distributed SCMs is that they encourage private development. No they don't. My CVS workflow when implementing a new feature used to be: hack on huge patch, break it, fix it, add more stuff, break it, fix it, etc until the patch was done. That is private development. Now, with git, I add a branch in a private copy, work on the feature in a series of commits, and if the feature works out alright, merge it to the upstream repository. During the entire process the branch is visible and the repository can be cloned by people who want to test or contribute.

One tool I'd like to write if I ever get some spare time is an svn command-line compatible wrapper for git. It's entirely doable, and shouldn't be too much work: svn checkout becomes git clone, svn update becomes git pull, svn commit becomes git commit -a; git push. I'm sure the devil is in the details, but having this wrapper lets you choose a repository format that preserves branching and merging history and is designed to support cloning, while still letting people use the CVS/SVN workflow.
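A minimal sketch of the dispatch such a wrapper would do; the name gitsvn_wrap and the echoed mappings are my illustrative guesses, not an existing tool, and a real wrapper would also have to translate arguments, revision syntax and output formats:

```shell
# Map the common svn subcommands onto their git equivalents.  The
# echoes stand in for exec'ing the real commands, which keeps the
# sketch side-effect free.
gitsvn_wrap () {
    cmd=$1; shift
    case "$cmd" in
        checkout) echo "git clone" "$@" ;;
        update)   echo "git pull" "$@" ;;
        commit)   echo "git commit -a" "$@" "&& git push" ;;
        diff)     echo "git diff" "$@" ;;
        log)      echo "git log" "$@" ;;
        *)        echo "gitsvn_wrap: unknown command: $cmd" >&2; return 1 ;;
    esac
}

gitsvn_wrap checkout git://example.org/project.git
# -> git clone git://example.org/project.git
gitsvn_wrap commit -m "fix typo"
```

The interesting design question is the commit case: one svn command has to become two git commands, since publishing is a separate step in a distributed system.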

Monday, May 21, 2007

i830 Update

Since my previous post about the intel driver and the i830 chipset, Keith has fixed the remaining issues and as far as I can tell, everything is working as expected. Specifically, the load detection code now works and can see whether or not there's an external monitor connected, scaling of non-native panel modes works, and the output code does a better job of automatically assigning CRTCs to outputs.

If anybody out there with i830 laptops wants to give this a spin, that'd be great. The master branch in the xf86-video-intel git repository should work out of the box, or you can try one of these RPMs, if you have at least a 1.3 X server:

Once rawhide gets rolling again, we'll push it in there, and if everything goes well, phase out the old bios-based driver.

Tuesday, May 15, 2007

Testing cgit

Loading the front page takes about 10 seconds, and it annoys me whenever I load that page. I typically only load it once and then keep it open in a tab. This prompted me to set up a test of Lars Hjemli's cgit which is 1) written in C instead of that horrible language, and 2) has a built-in cache.

It's still pretty new, but it's come a long way in a short time, and has cool features such as a graphical diffstat. Try it out here:

Saturday, May 5, 2007

Oh hi, I upgraded ur screenz

Old dogs can learn new tricks. In this case the dog is my old Thinkpad X30 laptop and the trick is dual head output:

I've wanted to use this configuration since I got the laptop 3 years ago. I could never get the BIOS based driver to set it up and the new native-modesetting driver didn't work on the Intel 830 chipset that the laptop has. But with a little snooping around in the BIOS and some help from keithp and anholt it's now working - yay! And we're one step closer to being able to use the native modesetting intel driver by default in Fedora.

Tuesday, February 6, 2007

Pushing the Envelope

I saw this review of beryl linked from and this quote really irked me:

As nice as Compiz is, Beryl is the group that is really pushing the envelope of what a next generation desktop should be like.

I can't just let that slide, especially since David Reveman (who wrote most of the code in compiz and beryl) just submitted a patch to the X server and related libraries to implement proper input event remapping. If anything is pushing the envelope for the next generation desktop, that is it. The patch lets you pass a triangular mesh to the X server that the server uses for translating input event coordinates on the transformed (say, minimized or wobbling) window. The server will transform the position in the triangle mesh into normal orthogonal coordinates as expected by the application. There have been other ideas floating around for solving this problem, but since you have to break the window into triangles to render it anyway, the input triangle mesh won't restrict the types of deformations you can do.

This work will allow every compositing manager to do input redirection the right way, not just, say, metisse. In general, David solves the root of the problem and gets the solution upstream, improving the platform for everybody, instead of papering over the problem with cheap hacks. This is why we ship compiz as the default compositing manager in Fedora, not beryl. Of course, beryl is available from extras and can be installed by saying yum install beryl-gnome.

On a related note, Fedora 6 now has compiz 0.3.6 in testing, which supports Xinerama, metacity decorations, and more. For the more adventurous, we've updated to 7.2 in rawhide. Thanks to Ian Romanick, in this release the X server now has protocol support for the GL_ARB_fragment_program extension, which means that compiz on AIGLX can use fragment shaders. This enables the ever so useful 'rain' plugin, but more seriously, it allows the blurred translucency work that's currently happening in compiz git to work on AIGLX.

Thursday, January 25, 2007

Bugzilla git mirror

Daniel was updating the bugzilla at and pinged me about setting up a git mirror for the bugzilla CVS repository so he could keep track of the customization changes. This makes a lot of sense since bugzilla is probably one of the most customized projects - at any rate, I haven't seen a deployment that hasn't had some tweaking, either bug-buddy integration, work-flow features, changes to make it work with the rest of the site infrastructure or just plain branding.

Whether or not the local changes are to be upstreamed, it makes a lot of sense to track them using git, since git was pretty much designed to handle the bunch-of-local-changes use case. And the git mirror is live, that is, it's updated twice a day, so if you plan to upstream your local changes, it's easy to get the latest upstream work and diff against that.

The git repo is browsable here and can be cloned using the command

git clone git://

Might be useful.

Monday, January 22, 2007


Leading up to FUDCon, I figured I should expand a bit on the "executive summary" from the previous post. Plymouth is the code name for the next generation RHGB (Red Hat Graphical Boot) effort, and as I hinted in the previous post, the key idea is to put the X server in the initramfs. I'll try to summarize the discussions that have led to this approach below.

A number of distros have an fbdev based graphical boot solution, and while these generally work reasonably well, the problems outweigh the benefits, even in the short term. First of all, we don't want to rely on and maintain two different code paths for mode setting. Mode setting is difficult, yet boring, so it's not like there's a lot of developer manpower in this area. All in all it makes sense to choose one code path and make sure that works.

Second, the fbdev drivers are a point of contention - on one hand they seem to work very well for embedded uses where you have a small PDA type of device with just one screen; on the other hand, the fbdev abstraction just doesn't scale to the requirements of a modern, multihead, monitor-hotplugged desktop. A more flexible kernel-based mode setting might just happen and that is great, regardless of whose idea it was to begin with. But that is the longer-term solution, and until that work materializes we need something better than the current RHGB, and ideally something that pulls in the right direction.

Since X has to work for desktop usage, it only makes sense to bet the farm on the X modesetting code. This is what RHGB does; however, RHGB loses a lot of its punch when it's only started halfway through the boot sequence. Also, all the flickering as we go from grub's 640x480 graphical menu back to text mode, into the RHGB X server and then into the real X server doesn't exactly make for a nice experience. Which leads us to the idea of starting X as early as possible - in the initramfs.

Short of doing modesetting in the kernel, this is the earliest we can initialize the video card. Once the X server is running, the idea is to use this server throughout graphical boot, login screen and user session. This allows for controlled transitions between these phases as opposed to crazy flickering while your monitor goes in and out of sync a few times. Of course, a fair bit of integration work is required across the initramfs tool, init scripts, GDM and gnome-session (or KDM and ksession) for this to work. Another quirk is that starting the X server from the initramfs prevents us from unmounting it, which keeps its 5M of unswappable memory around. We've been discussing shutting down the X server while keeping the mode set and then starting the real X server from the real root file system in a way such that it just "inherits" the mode already set.

Using the mayflower initramfs creator from David's livecd-tools with this patch, I was able to create an initramfs that included the X server, the DDX driver and the necessary modules in just under 5M. Booting with this ram disk lets me get from grub to X in under 5 seconds on a typical desktop machine. As a proof of concept, that's pretty encouraging, but it's just the first tiny step towards implementing this idea.

Thursday, January 18, 2007


The last weekend of February I will be in Brussels for FOSDEM, giving a talk about AIGLX. I will probably recycle some of my talk from XDevConf 2006, but a lot has happened since then - with the memory manager work from the Tungsten guys and anholt's damage 1.1 work, we're a lot closer to redirected direct rendering.

And just now I signed up for FUDCon Boston 2007, where ajax and I will be doing a hacking/brainstorming session on graphical boot (aka "Death to RHGB"). Executive summary: X in the initrd!

First post!

Hah, I have a work blog. People have been nudging me to blog about my work on occasion and I've wanted to do so several times. My standing excuse has been that I don't want to clutter up my personal blog with work trivia and vice versa, and I can't be bothered to figure out how to tag my posts. But it turns out it's as easy as just starting a separate blog. Gah.

So, what's going on work-wise these days. The last few days I've been severely side-tracked by the Adobe Reader 7.0.9 security errata, but hopefully that'll be over soon. Then it's back to FireWire, which is what I've been spending pretty much 100% of my time on the last couple of months. At Red Hat we'd like to ship and support FireWire, but the stability of the Linux FireWire stack has been hit and miss. Stefan Richter, the upstream maintainer, has been doing a great job of raising the quality level and keeping it stable lately, but longer term it's a dead end.

It turns out that this is fixable by writing a new stack. It's not as crazy as it sounds, a full-featured FireWire stack with a reasonable level of backwards compatibility is doable in less that 10k lines of code. I announced the first version of the stack back in December on LKML and we've been discussing the features and user space interfaces on the linux1394-devel list quite a bit. There's been a lot of great feedback and ideas from the Linux FireWire application developers out there, and people seem generally interested and positive. A storage-driver-only version of the stack is in the latest mm kernel and we're now shipping that patch in Fedora rawhide. Good times.