Half-baked idea: Autodetect dependencies

For more half-baked ideas, see the ideas tag.

Large “coordination” libraries like libvirt unify a lot of disparate features through one API, and as a result they depend on many other libraries and external programs. Libvirt directly links to 20 libraries and requires countless other external programs.

We want users to be able to compile libvirt even when they don’t have the full set of libraries (you don’t need, say, PolKit; libvirt will still work with reduced functionality). The trouble is that you end up with code which looks like this:

switch (cred[i].type) {
case VIR_CRED_EXTERNAL:
    if (STRNEQ(cred[i].challenge, "PolicyKit"))
        return -1;

#if defined(POLKIT_AUTH)
    if (virConnectAuthGainPolkit(cred[i].prompt) < 0)
        return -1;
#else
    /* Ignore & carry on.  Although we can't auth
     * directly, the user may have authenticated
     * themselves already outside the context of
     * libvirt. */
#endif
    break;
This code is fragile because (a) it’s hard to reason about all the pathways and (b) it’s combinatorially difficult to test all the different permutations of available libraries. This fragility leads to bug reports and possibly worse.

Before I get to the half-baked idea, I’ll throw in another thought: at the moment we do most of this detection at compile time using a long configure script. It might be better to do it at run time. You could imagine how this could work if you were a very patient programmer who liked writing tedious boilerplate:

libaudit = dlopen ("libaudit.so", RTLD_LAZY);
if (libaudit) {
  int (*audit_add_watch) (struct audit_rule_data **rulep,
                          const char *path);
  audit_add_watch = dlsym (libaudit, "audit_add_watch");
  if (audit_add_watch)
    r = audit_add_watch (rule, path);
  else
    goto no_func;
} else {
  /* no libaudit, do something else */
}

The half-baked idea is this: write the code as if all the functions exist, then transform the code into the runtime/dlsym version above. In the first iteration, for each libvirt API entry point we compute the set of all optional libraries/functions that are required to execute that entry point, and we generate checks like this:

virFoo ()
  // The following checks are generated automatically:
  if (!libaudit)
    return error ("virFoo: you need to install libaudit");
  if (!libaudit_audit_add_watch)
    return error ("virFoo: wrong version of libaudit, "
                  "requires audit_add_watch function");
  // Here we run the programmer's code:

The first iteration is very conservative. In the second iteration of the project we’d allow the programmer to write fallback code, so that partial API functionality is available even if not all the libraries are. But how to do that and avoid the #ifdef problem?

I think you should be allowed to write alternate functions:

authenticate ()
  return polkit_context_is_caller_authorized (pkcontext, ...);

authenticate ()
  return 1;

(Remember this is not C, but some sort of C with transformations applied to it).

Our C transformation chooses the “best” function to call at runtime, where best is simply the one which has the most libraries available. In the above case, the first version of authenticate is chosen if the PolKit library is available, the second version if not.




5 responses to “Half-baked idea: Autodetect dependencies”

  1. DBUSMan

    IANAE, but isn’t this what COM and friends are for?

  2. Frank Ch. Eigler

    Richard, how does this help the combinatorial testing problem?

    • rich

      Insightful question! (As I would have expected from the author of systemtap …)

      Firstly, here are a few recent bugs of this type that I was thinking about when I wrote that: libguestfs bug #1, libvirt bug #1, libvirt bug #2. These bugs all happened (in shipped releases) because we don’t test by compiling with all permutations of optional libraries.

      Now I think what you’re asking (correct me if I’m wrong) is whether, if an optional but important dependency was missing, the library would still fail, because all the APIs would get disabled. That would be just as bad as a compile-time failure, because the library would be useless.

      I think (but haven’t proven) that an automated analysis of the API should be able to tell you if some dependency is so vital that it would cause important APIs to be disabled. E.g. virConnectOpen is so vital to the libvirt API that if a dependency it needs is missing, we should flag that.

      I think (also not proven) that writing a root cause analysis would be easier. If we build the call graph, we could point to the specific function that is causing the essential APIs to be unavailable.

  3. John Summerfied

    I think a part of the problem is binaries are distributed pre-linked.

    For years I have wondered why this is so; I came to Linux from IBM mainframes, where programs are often, if not always, linked at install time.

    In such an environment it is possible to supply, and link in, stubs that do some kind of nothing (maybe like the “dunno” response in Postfix configuration) where The RealThing isn’t present or is not wanted.

    Linking (and maybe relinking as required) could help enormously with products such as the GNOME desktop environment, where much of your icecream is my gall. I don’t like Evolution, but removing it is next to impossible. I don’t want NIS; why should any part of it be present?

    In the mainframe environment, executables can be re-edited by the linkage editor. This allows one to replace selected libraries used to create the executable.

    The linkage editor can rename symbols. If I don’t want nislib I can rename all the references to its symbols and link in a “do nothing” function instead.

    Using the rename feature, I can front-end or back-end functions without access to source code. Suppose sqlite has no security in place, but I wanted some. In the mainframe environment I came from, I could relink sqlite, change the name of its sqlite3_open function to something else, and provide my own sqlite3_open function that does the checks I require and, if appropriate, then calls the renamed function. To back-end it, it would call the renamed function first and check or edit the results.

    Potentially, I could provide all the sqlite functions and transform them to use PostgreSQL or MySQL, and relink existing sqlite apps with my transformation. Without source.
