Wednesday, July 07, 2010

Parallels

I keep seeing parallels and overlapping functionality between various programs. systemd overlaps with cron (that part is envisioned, not yet implemented, I think) and with the session manager. I think package management would integrate usefully with systemd. I've been reading up on cfengine; it also overlaps with cron and the package manager.

I want to bring together many of the good ideas I've found on the interwebs into one well-integrated system. I'd thought to start from the kernel level and build up, but now it occurs to me that I can start with something akin to OpenQRM and work down through the package manager and init system to the kernel.

Tuesday, July 06, 2010

I have found myself managing a small handful of servers that need to provide a slightly larger handful of services. From time to time I need to add a service without adding a server. While at the outset the idea seemed simple, in practice it turns out to be somewhat problematic: it can be difficult to recall exactly what each server is doing, and to imagine all the problems that might crop up if you change something. Virtualization offers an appealing approach to managing this. Unfortunately, only the very newest of the servers I'm managing supports virtualization, and it has already been put to use.

There are a number of other issues that come up. It's a little silly maintaining the same user account separately across all the servers, and it's somewhat inflexible; a Kerberos system would be nice here, but that stresses the allocation of resources further. Additionally, making good use of hard drives and drive bays across these servers is non-trivial. A storage virtualization system would be nice - perhaps a cluster filesystem that spans all the drives in the network, using and providing storage as needed with extreme flexibility.

cfengine offers an interesting approach to dealing with services whose requirements have been forgotten and that you're afraid to touch. cfengine lets you specify 'promises' that your systems are supposed to keep. It sounds as though this is a perfect solution to the problem. I still need to investigate further - I'm working off my interpretation of a small quantity of documentation.
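
Since I haven't written any real cfengine policy yet, take this with salt: a minimal Python sketch (not cfengine syntax) of the promise idea as I understand it - declare the state a machine should be in, check it, and repair it if it has drifted. The service name, file path, and the use of the 'service' command here are made up purely for illustration.

    # A toy, convergence-style "promise" checker, loosely inspired by my
    # reading of the cfengine docs. Not cfengine syntax; just the idea.
    import subprocess

    def promise_service_running(name):
        """Promise: the named service is running; try to repair it if not."""
        if subprocess.call(["pgrep", "-x", name],
                           stdout=subprocess.DEVNULL) == 0:
            return "kept"
        subprocess.call(["service", name, "start"])  # attempt a repair
        return "repaired"

    def promise_line_in_file(path, line):
        """Promise: the given config file contains the given line."""
        try:
            with open(path) as f:
                if line in (l.rstrip("\n") for l in f):
                    return "kept"
        except FileNotFoundError:
            pass
        with open(path, "a") as f:
            f.write(line + "\n")
        return "repaired"

    if __name__ == "__main__":
        print(promise_service_running("sshd"))        # hypothetical service
        print(promise_line_in_file("/tmp/demo.conf",  # made-up path
                                   "PermitRootLogin no"))

The appeal is that running the checks is idempotent: if the promises are already kept, nothing happens, which is exactly what I want for servers whose requirements I've half forgotten.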

OpenQRM offers exactly what I've always wanted for managing my few servers - dhcpd + tftpd + nfs/iscsi/aoe + lvm, all in one convenient bundle. Unfortunately it doesn't deliver. Besides having an unfortunately complicated interface (to be fair, it could be much, much worse), I can't for the life of me get the stupid thing to do anything useful.

I'm still sorting out some ideas, and I think attempting an implementation will be the best way to figure this out, but it seems to me that there is significant similarity between the functionality of cfengine and OpenQRM. I just want managing my servers to be easy. Why can't it be easy?

Tuesday, June 15, 2010

Calling Conventions

In *n?x the C calling convention reigns supreme. It allows libraries and programs to be written in different languages - to a certain extent. OpenVMS extends the idea of a calling convention with its calling standard[1]. The OpenVMS approach specifies more detail than the C one, allowing better interoperability between languages.

Another issue, which might be related, is that of bindings. Many libraries are written in compiled languages, and bindings must be written before they can be used from a scripting language. This strikes me as less than satisfactory - I would like access to a new library for free, no bindings needed. Unfortunately, I think there is a problem with this idea - namely, scripting languages generally have higher-level constructs available than the lower-level languages the libraries are written in. This means that, to provide an interface appropriate for the scripting language - one that takes advantage of its idioms and syntax - additional information is needed. The alternative is a standard for libraries to follow that recognizes and encodes information for all desired idioms. This strikes me as absurd, since it requires an update to the whole system whenever a language introduces an idiom that isn't yet recognized and coded for.
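
For what it's worth, Python's ctypes already gives a taste of both sides of this. Because libc exposes the C calling convention, I can call into it with no hand-written bindings at all, but what comes back is C-flavoured rather than Pythonic - I still have to declare the types myself. A rough sketch (the exact library name found varies by platform):

    import ctypes
    import ctypes.util

    # Load the C library through its C ABI; nobody wrote bindings for this.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # strlen takes a char* and returns a size_t. The calling convention gets
    # us in the door, but says nothing about types or idioms, so I spell
    # them out by hand.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello"))  # 5 - and note the bytes literal, not a str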

What I think I want is a system where a common set of functionality is available to any language on the system, and any language can be used to add functionality to that set. Perhaps the executable, or the process, provides this ideal in *n?x.
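
In a small way that ideal already exists at the process level: any language can produce an executable, and any language can run one. A trivial Python sketch, using wc purely as an example of a program treated as a binding-free unit of functionality:

    import subprocess

    # Any program on $PATH, written in any language, presents the same
    # interface: argv in, bytes in and out, an exit status back.
    result = subprocess.run(["wc", "-c"], input=b"hello world",
                            capture_output=True)
    print(int(result.stdout.split()[0]))  # 11

Of course the price is coarse granularity and stringly-typed data, which is probably why this doesn't feel like a real answer.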

[1] OpenVMS Calling Standard

Taking Out The Trash

Most (all?) dynamic programming languages these days use a garbage collector. Even some statically typed, compiled languages use one. This is generally considered a good thing; Google can help convince you of the benefit.

Unfortunately, garbage collectors have a couple of downsides. One is that they tend to conflict with parallelism, a growing problem with today's abundance of multicore processors. The main disadvantage, however, is the pause that occurs when the garbage collector stops the program to clean the heap. This means that real-time processing (audio, video, games) can't be done in today's high-level languages. The typical solution is to rely on an external library written in a language without garbage collection, typically C or C++.
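
To make the pause concrete, here's a rough experiment in Python: build a heap full of cyclic garbage, then time a full collection. The numbers will vary wildly from machine to machine; the point is only that the cost grows with the heap, which is exactly what a real-time loop can't absorb.

    import gc
    import time

    # Build a pile of small reference cycles so the collector has real work.
    junk = []
    for _ in range(200000):
        a, b = [], []
        a.append(b)
        b.append(a)
        junk.append(a)
    junk = None  # the cycles are now garbage only the GC can reclaim

    start = time.time()
    gc.collect()  # force a full collection and measure the pause
    print("full collection took %.1f ms" % ((time.time() - start) * 1000.0))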

Perhaps all we need is better garbage collectors. What I think, however, is that we need some means of designating which portions of code a garbage collector may interfere with. Honestly I have no idea quite what I'm proposing, but I like dynamic languages like Ruby and JavaScript far too much to decide that I simply have to rely on a different language for real-time code.
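
Python's gc module already gestures at the kind of control I mean: you can hold the cyclic collector off around a latency-sensitive region and collect afterwards, at a moment of your choosing. (Reference counting still runs underneath, so this is only a partial answer - it's the shape of the idea, not a solution.)

    import gc

    def realtime_section(frames):
        """Run the per-frame work with the cyclic collector held off."""
        gc.disable()  # no cycle-collection pauses inside this region
        try:
            for frame in frames:
                process(frame)  # hypothetical audio/video/game work
        finally:
            gc.enable()
            gc.collect()  # pay the cleanup cost when we can afford it

    def process(frame):
        pass  # stand-in for real work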