Tag Archives: rpm

Fedora / RISC-V stage4 autobuilder is up and running

Bootstrapping Fedora on the new RISC-V architecture continues apace.

I have now written a small autobuilder which picks up new builds from the Fedora Koji build system and attempts to build them in the clean “stage4” environment.

Getting latest packages from Koji ...
Running: 0 (max: 16) Waiting to start: 7
uboot-tools-2016.09.01-1.fc25.src.rpm                       |  11 MB  00:10     
uboot-tools-2016.09.01-1.fc25 build starting
tuned-2.7.1-2.fc25.src.rpm                                  | 136 kB  00:00     
tuned-2.7.1-2.fc25 build starting
rubygem-jgrep-1.4.1-1.fc25.src.rpm                          |  24 kB  00:00     
rubygem-jgrep-1.4.1-1.fc25 build starting
qpid-dispatch-0.6.1-3.fc25.src.rpm                          | 1.3 MB  00:01     
qpid-dispatch-0.6.1-3.fc25 build starting
python-qpid-1.35.0-1.fc25.src.rpm                           | 235 kB  00:01     
python-qpid-1.35.0-1.fc25 build starting
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25.src.rpm  |  53 MB  00:54     
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25 build starting
NetworkManager-strongswan-1.4.0-1.fc25.src.rpm              | 290 kB  00:00     
NetworkManager-strongswan-1.4.0-1.fc25 build starting
MISSING DEPS: NetworkManager-strongswan-1.4.0-1.fc25 (see
logs/NetworkManager-strongswan/1.4.0-1.fc25/root.log)
   ... etc ...
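
The autobuilder itself is little more than a polling loop. Here is a minimal sketch of the idea, assuming the standard koji command line client; build-in-stage4.sh is a hypothetical wrapper that runs rpmbuild inside the stage4 chroot:

#!/bin/bash -
# Sketch: poll Koji, fetch new source RPMs, try each in stage4.
mkdir -p done
while true; do
    # NVRs of the latest builds tagged into f25.
    for nvr in $(koji list-tagged --latest --quiet f25 | awk '{print $1}'); do
        [ -e "done/$nvr" ] && continue           # already attempted
        koji download-build --arch=src "$nvr"    # fetches $nvr.src.rpm
        ./build-in-stage4.sh "$nvr.src.rpm" && touch "done/$nvr"
    done
    sleep 600    # poll every ten minutes
done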

Given that we don’t have GCC in the stage4 environment yet, almost all of them currently fail due to missing dependencies, but we’re hoping to correct that soon. In the meantime a few packages that have no C dependencies can actually compile. This way we’ll gradually build up the number of packages available for Fedora/RISC-V, and the process will accelerate rapidly once we’ve got GCC.

You can browse the built packages and build logs here: https://fedorapeople.org/groups/risc-v/

First successful rpmbuild on RISC-V

I’m very slowly bootstrapping Fedora to run on RISC-V, and today I managed to get rpmbuild to work, so that’s a sort of milestone:

...
Provides: config(setup) = 2.10.4-1.fc24 setup = 2.10.4-1.fc24
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Conflicts: bash <= 2.0.4-21 filesystem < 3 initscripts < 4.26
Checking for unpackaged file(s): /usr/lib/rpm/check-files /rpmbuild/BUILDROOT/setup-2.10.4-1.fc24.%{_arch}
warning: Could not canonicalize hostname: ucbvax
Wrote: /rpmbuild/RPMS/noarch/setup-2.10.4-1.fc24.noarch.rpm
Executing(%clean): /bin/sh -e /usr/var/tmp/rpm-tmp.0iJnms
+ umask 022
+ cd //rpmbuild/BUILD
+ cd setup-2.10.4
+ rm -rf '/rpmbuild/BUILDROOT/setup-2.10.4-1.fc24.%{_arch}'
+ exit 0
Executing(--clean): /bin/sh -e /usr/var/tmp/rpm-tmp.Vdj45n
+ umask 022
+ cd //rpmbuild/BUILD
+ rm -rf setup-2.10.4
+ exit 0

Unfortunately, because I haven’t got GCC working in the bootstrap environment yet, I’m limited in the packages I can build, so I’m starting off with some low-dependency noarch packages. In reality we won’t need to recompile noarch packages at all, since they can be copied from the other arch builders, but rebuilding them is a good test of rpmbuild.
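
For the record, rebuilding one of these noarch source RPMs is a single command; for example, with the setup package from the log above:

rpmbuild --rebuild setup-2.10.4-1.fc24.src.rpm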

How to rebuild libguestfs from source on RHEL or CentOS 7

Three people have asked me about this, so here goes. You will need a RHEL or CentOS 7.1 machine (perhaps a VM), and you may need to grab extra packages from this preview repository. The preview repo will go away when we release 7.2, but then again 7.2 should contain all the packages you need.

You’ll need to install rpm-build. You could also install mock (from EPEL), but in fact you don’t need mock to build libguestfs and it may be easier and faster without.

Please don’t build libguestfs as root. It’s not necessary to build (any) packages as root, and doing so can even be dangerous.

Grab the source RPM. The latest at time of writing is libguestfs-1.28.1-1.55.el7.src.rpm. When 7.2 comes out, you’ll be able to get the source RPM using this command:

yumdownloader --source libguestfs

I find it helpful to build RPMs in my home directory, and also to disable the libguestfs tests. To do that, I have a ~/.rpmmacros file that contains:

%_topdir	%(echo $HOME)/rpmbuild
%_smp_mflags	-j5
%libguestfs_runtests   0

You may wish to adjust %_smp_mflags. A good value to choose is 1 + the number of cores on your machine.
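
If you’d rather compute that automatically, something like this should work (nproc is in coreutils):

echo "%_smp_mflags -j$(($(nproc) + 1))" >> ~/.rpmmacros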

I’ll assume at this point that the reason you want to rebuild libguestfs is to apply a patch (otherwise why aren’t you using the binaries we supply?), so first let’s unpack the source tree. Note I am running this command as non-root:

rpm -i libguestfs-1.28.1-1.55.el7.src.rpm

If you set up ~/.rpmmacros as above then the sources should be unpacked under ~/rpmbuild/SPECS and ~/rpmbuild/SOURCES.

Take a look at least at the libguestfs.spec file. You may wish to modify it now to add any patches you need (add the patch files to the SOURCES/ subdirectory). You might also want to modify the Release: tag so that your package doesn’t conflict with the official package.
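
For example, to carry a hypothetical fix-foo.patch, the additions to libguestfs.spec would look something like the following (the patch name and number are my invention, and the exact patch-application mechanism in the spec may differ):

# In the header, next to the existing Patch lines:
Patch1001: fix-foo.patch

# In %prep, where the other patches are applied:
%patch1001 -p1

# And bump Release so your build sorts above the official one, e.g.:
Release: 1.55.1%{?dist}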

You might also need to install build dependencies. This command should be run as root since it needs to install packages, and also note that you may need packages from the repo linked above.

yum-builddep libguestfs.spec

Now you can rebuild libguestfs (non-root!):

rpmbuild -ba libguestfs.spec

With the tests disabled, on decent hardware, that should take about 10 minutes.

The final binary packages will end up in ~/rpmbuild/RPMS/ and can be installed as normal:

yum localupdate x86_64/*.rpm noarch/*.rpm

You might see errors during the build phase. If they aren’t fatal, you can ignore them, but if the build fails then post the complete log to our mailing list (you don’t need to subscribe) so we can help you out.

Analysis of the size of libguestfs dependencies

In libguestfs ≥ 1.26 we are going to start splitting the package up into smaller dependencies. The full libguestfs package has a lot of dependencies because it has to be able to process many obscure filesystems, so the question is how best to split them up. We could split off, say, XFS support into a subpackage, but how do we know whether that will save any space?

Given the set of dependencies, we want to know the incremental cost of adding another dependency.

We can get an exact measure of this by using supermin to build a chroot containing the set of dependencies, and a second chroot containing the set of dependencies + the additional package. Then we simply compare the sizes of the two chroots. The advantage of using supermin is that the exact same script [see end of posting] will work for Fedora and Debian/Ubuntu since supermin hides the complexity of dealing with the different package managers through its package manager abstraction.

The results of this, using the libguestfs appliance dependencies, on Fedora 20, sorted by dependency size, with my comments added:

  1. gdisk adds 25420 KB

    This is a surprising result in first place, since gdisk is a fairly small, unassuming C++ program (only ~11KLoC). My initial thought was it must be something to do with being written in C++, but I tested that and it’s not true. The real problem is that gdisk depends on libicu (a Unicode library) which adds 24.6 MB to the appliance. [Note: this issue has been fixed in Rawhide.]

  2. lvm2 adds 19432 KB

    The default disk layout of many Linux distros uses LVM so this and similar dependencies have to stay in base libguestfs.

  3. binutils adds 16604 KB

    This is a sorry tale. The one file we use from binutils is /usr/bin/strings (33KB). Unfortunately this single binary pulls in a huge dependency (even worse, it’s a development package, and this causes problems on production systems). I don’t really understand why strings is included in binutils.

  4. gfs2-utils adds 9648 KB
  5. zfs-fuse adds 5208 KB

    Split off in the proposed reorganization.

  6. ntfsprogs adds 4572 KB
  7. e2fsprogs adds 4312 KB

    Most Linux distros use ext4, and we want to support Windows out of the box, so these are included in base libguestfs.

  8. xfsprogs adds 3532 KB

    Split off in the proposed reorganization.

  9. iproute adds 3180 KB

    We use /sbin/ip to set up the network card inside the appliance. It’s a shame this “better” replacement for ifconfig is so large.

  10. tar adds 2896 KB
  11. btrfs-progs adds 2800 KB
  12. openssh-clients adds 2428 KB
  13. parted adds 2420 KB
  14. jfsutils adds 1668 KB
  15. genisoimage adds 1644 KB
  16. syslinux-extlinux adds 1420 KB
  17. augeas-libs adds 1404 KB
  18. iputils adds 1128 KB
  19. reiserfs-utils adds 1076 KB
  20. mdadm adds 1032 KB
  21. strace adds 976 KB
  22. lsof adds 972 KB
  23. vim-minimal adds 912 KB
  24. rsync adds 812 KB
  25. libldm adds 616 KB
  26. psmisc adds 592 KB
  27. nilfs-utils adds 520 KB
  28. hfsplus-tools adds 480 KB

The test script used to produce these results:

#!/bin/bash -

# NB: For this program to work, you must have the following
# packages (or as many as possible) installed locally.
pkgs='acl attr augeas-libs bash binutils bsdmainutils btrfs-progs
bzip2 coreutils cpio cryptsetup cryptsetup-luks diffutils dosfstools
e2fsprogs extlinux file findutils gawk gdisk genisoimage gfs2-utils
grep grub grub-pc gzip hfsplus hfsplus-tools hivex iproute iputils
jfsutils kernel kmod less libaugeas0 libcap libcap2 libhivex0 libldm
libpcre3 libselinux libsystemd-id128-0 libsystemd-journal0 libxml2
libyajl2 linux-image lsof lsscsi lvm2 lzop mdadm module-init-tools
mtools nilfs-utils ntfs-3g ntfsprogs openssh-clients parted pcre
procps procps-ng psmisc reiserfs-utils reiserfsprogs rsync scrub sed
strace syslinux syslinux-extlinux systemd sysvinit tar udev ufsutils
util-linux util-linux-ng vim-minimal vim-tiny xfsprogs xz xz-utils
yajl zerofree zfs-fuse'

# These are the packages (from the above list) that we want to test.
testpkgs="$pkgs"

# Helper function to construct an appliance and see how big it is.
function appliance_size
{
    set -e
    supermin --prepare -o /tmp/supermin.d "$@" >&/dev/null
    supermin --build -f chroot -o /tmp/appliance.d \
      /tmp/supermin.d >&/dev/null
    du -s /tmp/appliance.d | awk '{print $1}'
}

# Construct entire appliance to see how big that would be.
totalsize=`appliance_size $pkgs`

# Remove each package from the list in turn, and find out
# how much extra that package contributes.
for p in $testpkgs; do
    opkgs=
    for o in $pkgs; do
        if [ $o != $p ]; then opkgs="$opkgs $o"; fi
    done
    size=`appliance_size $opkgs`
    extra=$(($totalsize - $size))

    echo $p adds $extra KB
done

Clean up your spec files!

Modern RPMs don’t need any of the following. You can just delete them (see the example after the list):

  • %clean
  • %defattr
  • rm -rf $RPM_BUILD_ROOT or rm -rf %{buildroot}
  • BuildRoot
  • Group (thanks bochecha)
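
For instance, a spec file section like this (foo is a made-up package):

%install
rm -rf $RPM_BUILD_ROOT
make install DESTDIR=$RPM_BUILD_ROOT

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%{_bindir}/foo

can be trimmed down to:

%install
make install DESTDIR=$RPM_BUILD_ROOT

%files
%{_bindir}/foo

Modern rpmbuild cleans the buildroot itself, and %defattr’s defaults apply automatically.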

Nice RPM / git patch management trick

As far as I know, this trick was invented by Peter Jones. Edit: Or it could be ajax?

Parted in Fedora uses a clever method to manage patches with git and “git am”.

%prep
%setup -q
# Create a git repo within the expanded tarball.
git init
git config user.email "..."
git config user.name "..."
git add .
git commit -a -q -m "%{version} baseline."
# Apply all the patches on top.
git am %{patches}

The background is that there is a git repo somewhere else which stores the unpacked baseline parted tarball, plus patches (stored as commits) on top.

I assume that Peter exports the commits using git format-patch. At build time these are applied on top of the tarball using git am.
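
Regenerating the patch files from such a repo is then a one-liner, something like this (assuming the baseline commit is tagged baseline; the tag name is my invention):

git format-patch -N --no-signature -o ~/rpmbuild/SOURCES baseline..HEAD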

There are two clear advantages:

  • No need to have lots of duplicate %patch lines in the spec file.
  • git-am restores permissions and empty files properly, which regular patch does not do.

With libguestfs in RHEL 6 we have roughly 80 patches, so managing them is very tedious, and this method will greatly simplify things.

Half-baked idea: “Try this patch” tool for RPMs

For more half-baked ideas, see my ideas tag.

RPM has some nice features for easily rebuilding packages. You can, for example, easily structure a source tarball so that an end user can build RPMs from it in a single step, and you can also easily rebuild an RPM from a source RPM. (See my recent notes on how to do all that here).

However, for a lot of end users even these simple commands are too complex, and applying a patch to an RPM is harder still.

Here’s the idea: it’s a “try this patch” graphical tool. It takes a patch from a pastebin or email, and tries to apply it to an installed package. It downloads the source, attempts to apply the patch, rebuilds a new binary RPM, and installs it. (Of course it may not be possible to apply the patch, in which case it should either give the user a very simple message about what went wrong, or help more advanced users to manually fix rejects).

With this tool I could in confidence ask a user: “try this patch and tell me if it works”.

All the user has to do is drag the patch file into the “try this patch” tool, and it will do the rest. If the patch doesn’t fix the problem, the tool lets the user “yum downgrade” to the previous version.
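
The non-graphical core of the tool could be a thin wrapper over existing commands. A rough sketch, with no error handling, which naively assumes the patch has already been wired into the spec file:

#!/bin/bash -
# try-this-patch.sh PACKAGE PATCH (sketch only)
pkg="$1"; patch="$2"
yumdownloader --source "$pkg"       # fetch the source RPM
rpm -i "$pkg"-*.src.rpm             # unpack into ~/rpmbuild
cp "$patch" ~/rpmbuild/SOURCES/
# A real tool would edit the spec here to add PatchN / %patchN lines
# and bump Release before rebuilding.
rpmbuild -ba ~/rpmbuild/SPECS/"$pkg".spec
# ...then install the result as root, e.g.:
#   yum -y localupdate ~/rpmbuild/RPMS/*/"$pkg"-*.rpm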

See also: A “view source” button for Fedora

Tip: Install RPMs in a guest

This script lets you install RPMs in a Fedora or RHEL guest (Update: an offline guest, in case that is not clear). It works by installing a “firstboot”-type script that actually does the install (avoiding various pitfalls of installing RPMs directly from libguestfs).

You use it like this:

# ./install-rpms.sh F14x64 xbill-2.1-2.fc11.x86_64.rpm
Uploading /etc/init.d/installrpms (firstboot script) ...
Uploading xbill-2.1-2.fc11.x86_64.rpm to /var/lib/installrpms/xbill-2.1-2.fc11.x86_64.rpm ...

Multiple RPMs can be given on the command line, and it can be used incrementally.

This is not quite a complete, usable solution yet. What is really needed is a way to determine the dependencies between RPMs, and to determine what needs to be updated in a guest. These are jobs that can probably be done through the yum API.
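
Until then, you can at least check an RPM’s dependencies by hand before uploading it:

rpm -qp --requires xbill-2.1-2.fc11.x86_64.rpm

Here is the script: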

#!/bin/bash -
#
# Install RPMs at next boot on a RHEL or Fedora VM.  This uploads the
# RPMs to the VM and creates a 'firstboot' script to install them when
# the VM boots next time.  You can use this script incrementally.
# Each time it runs, it adds further RPMs.  The RPM list is cleared
# when the VM boots.
#
# For more information, see
# https://rwmj.wordpress.com/   http://virt-tools.org/
#
# Usage:
#   install-rpms.sh GuestName *.rpm

# Parse command line.
if [ $# -lt 2 ]; then
    echo "install-rpms.sh GuestName *.rpm"
    exit 1
fi

set -e
guest="$1"; shift

# Create a temporary working directory.
tmpdir=$(mktemp -d)
trap "rm -rf '$tmpdir'" EXIT INT QUIT TERM

# Start up guestfish in remote control mode.
unset GUESTFISH_PID
eval `guestfish --listen -d "$guest" -i`
if [ -z "$GUESTFISH_PID" ]; then exit 1; fi
trap "guestfish --remote exit" EXIT INT QUIT TERM


# Check guest uses RPM for package management.
root=`guestfish --remote -- inspect-get-roots`
pkgfmt=`guestfish --remote -- inspect-get-package-format "$root"`
if [ "$pkgfmt" != "rpm" ]; then
    echo "$0: $guest: guest does not use RPM (package format = $pkgfmt)"
    exit 1
fi

# Upload/overwrite firstboot RPM installer script.
# This should run early (before network starts).
cat > $tmpdir/install.rc <<'EOF'
#!/bin/sh
#
# chkconfig: 345 15 85
# description: Install RPMs at next boot
#
### BEGIN INIT INFO
# Short-Description: Install RPMs at next boot
# Description: Install RPMs at next boot
### END INIT INFO

. /etc/rc.d/init.d/functions

[ -d /var/lib/installrpms ] || exit 0

start ()
{
    if [ `ls -1 /var/lib/installrpms/*.rpm 2>/dev/null | wc -l` -gt 0 ]; then
        echo -n $"Installing new packages: "
        yum -y install /var/lib/installrpms/*.rpm >> /var/log/installrpms.log 2>&1
        if [ $? -eq 0 ]; then
            success;
            rm /var/lib/installrpms/*.rpm
        else
            failure
        fi
    fi
}

case "$1" in
  start)
        start
        ;;
  stop)
        # nothing
        ;;
  *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
EOF
echo "Uploading /etc/init.d/installrpms (firstboot script) ..."
guestfish --remote -- upload $tmpdir/install.rc /etc/init.d/installrpms

# Set the script to run at boot.
guestfish --remote -- chmod 0755 /etc/init.d/installrpms
guestfish --remote -- ln-sf /etc/init.d/installrpms /etc/rc2.d/S15installrpms
guestfish --remote -- ln-sf /etc/init.d/installrpms /etc/rc3.d/S15installrpms
guestfish --remote -- ln-sf /etc/init.d/installrpms /etc/rc5.d/S15installrpms

# Make the RPMs directory.
guestfish --remote -- mkdir-p /var/lib/installrpms
guestfish --remote -- chmod 0755 /var/lib/installrpms

# Upload the RPMs.
for f in "$@"; do
    b="$(basename $f)"
    echo "Uploading $f to /var/lib/installrpms/$b ..."
    guestfish --remote -- upload "$f" /var/lib/installrpms/"$b"
done

Don’t forget your Epochs

I had a puzzler today. The RPM spec file contained:

BuildRequires: qemu-kvm < 0.12.2.0

The version of qemu-kvm was 0.12.1.2-… and you would think that 0.12.1.2 < 0.12.2.0. It should build, right? But rpmbuild consistently refused to find that qemu-kvm package.

My first thought was that RPM somehow needs to have a ≥ relation in order to find a package at all. I tried:

BuildRequires: qemu-kvm >= 0.12.1.0
BuildRequires: qemu-kvm < 0.12.2.0

but this failed in an even stranger way. It found the right package, but then rejected it:

DEBUG util.py:256:  2:qemu-kvm-0.12.1.2-2.91.el6.x86_64
DEBUG util.py:256:  No Package Found for qemu-kvm < 0.12.2.0

But the clue to the answer is right there. qemu-kvm has an Epoch of 2 (hence the package name given is 2:qemu-kvm-0.12.1.2-…).

The fix is to write:

BuildRequires: qemu-kvm >= 2:0.12.1.0
BuildRequires: qemu-kvm < 2:0.12.2.0

But note a huge, hidden gotcha in RPMs with Epoch. You write:

BuildRequires: qemu-kvm >= 0.12

and it’s effectively meaningless. Any qemu-kvm version (with epoch 2) will match this, even version 0.11. In fact I could have written:

BuildRequires: qemu-kvm >= 0.95

and that would still have pulled in 2:qemu-kvm-0.12.1.2.
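
Incidentally, you can check how RPM compares any two epoch:version-release strings using rpmdev-vercmp from the rpmdevtools package:

rpmdev-vercmp 2:0.11-1 0.12-1

This reports 2:0.11-1 as the newer of the two, because a missing epoch is treated as 0 and the epoch comparison always wins.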

Half-baked ideas: verify the integrity of VMs

For more half-baked ideas, see my ideas tag.

This LWN article on OSSEC reminded me of an idea I had. We need a tool that can verify your VMs are not corrupted and don’t contain a rootkit. You can currently run a simple rpm -V command or one of the tools listed in that LWN article, but the problem is that you have to run those commands inside the VM, thus relying on the VM itself not to have been corrupted. (You can also reboot the VM into a known-good state, e.g. from a rescue ISO, but then you get downtime.)

Obviously the answer is to examine the VM from the host, using libguestfs to grab the checksums directly from the filesystem.
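
For example, for a libvirt guest named Guest (the name is illustrative), guestfish can compute a checksum straight from the disk image:

guestfish --ro -d Guest -i checksum sha256 /sbin/ldconfig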

You can get the checksums easily this way, but what do you compare them against and how do you know the checksums are good?

You would need to ask the distribution for a list of known-good checksums for the packages they publish. In fact you can do this reasonably easily. That information is available in the raw RPMs, or Red Hat’s RHN, and I’m quite sure you can get it from the Debian repos too. Windows? I don’t know specifically, but I guess either Microsoft publish this or you could derive it in some way.

So now if you are presented with some file from the VM and its checksum, like:

e2ed3c7d6d429716173fbd2d831d6e2855f1d20209da1238f75d1892a3074af5 /sbin/ldconfig

in theory we can verify this file was distributed in the signed package glibc-2.11.1-4.x86_64.rpm built on Thu 18 Mar 2010 04:51:51 PM GMT by a Fedora builder. (Using virt-inspector you can work out that the VM is a Fedora instance).

There’s still a subtle problem with this which I can’t work out. What happens if the attacker doesn’t directly install a rootkit, but instead replaces a binary with a version that has a known vulnerability, say a known root exploit in a package which the distro had previously shipped and which was later found to be vulnerable? This would allow the attacker to revisit the machine, acquire root, and run any software they want entirely in memory (libguestfs only sees the disk). For that you’d need not just a big database of all the files that distributions have shipped, but a list of files that have been obsoleted for security reasons.

There’s also a second problem. Although we can enumerate the goodness (all files are files that the distribution has shipped), we don’t know what to do with all the other files: user files, configuration files, /tmp, or files that just shouldn’t be there. To detect rootkits amongst those files, it’s starting to look like we’d have to enumerate badness, and we don’t want to go there.
