In 1993 …

In 1993, I spent £600 on 16 MB of RAM.

At the time, I needed it because my 386 DX didn’t run SLS Linux very well in 4 MB of RAM. In particular, emacs under X11 was very slow. When the 16 MB upgrade took the machine to 20 MB (I had one of those motherboards which could take both sorts of RAM together), X11 and emacs became really fast. I was even able to use a background image on my desktop, and font-lock mode in emacs.

I wish I still had that RAM, not just because it cost more than my (then) net worth to purchase, but for the practical reason that my HP LaserJet 5 printer could use those ancient 72 pin SIMMs.

19 years later, I just got 16 GB of RAM for £150.

I should be happy, but the sad part of this is Linux now needs gigabytes of RAM to run properly.


10 responses to “In 1993 …”

  1. I totally share your sentiment.

  2. cghvghgvngv

    Please look at the software, which has made some progress. Without better hardware that would not have been possible.

  3. Juan Quintela

    Fully agree. My old laptop has 4GB of RAM, and it feels really slow :-(

  4. daengbo

    GNOME 3 on Fedora 16, sure. Running SliTaz or TinyCore? No. (I don’t use those day to day, but they are very useful in tight hardware situations.)

  5. I run Ubuntu server on my laptop, with a minimal X install and a very simple window manager (dwm). That machine has 4 GB of RAM, and at any given moment, with at minimum Apache, SSH, and perhaps Firefox or Chromium running, I believe I’m consuming about 1.5 GB of RAM. Keep in mind that this is a GENERIC kernel, and a lot of memory is probably consumed by features and drivers which are not applicable to my hardware anyway. I think if I configured my kernel down to the bare minimum needed to drive all the hardware, I could get that number down by quite a bit. I don’t have any hard facts here.

    But I think part of the “problem” is Linux trying to break into the desktop market. And I think it does this quite well in some respects. For example, you couldn’t run Windows off a flash drive and just plug it into a new machine and go. But you often can with most popular Linux distros…

  6. pixelbeat1

    This reminds me that devs should test with constrained RAM more often. This is quite easy with ulimit, with qemu’s -m option, or by booting the kernel with mem=…M.
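
    For example, something like this should work (a rough sketch; the 256 MB figure and the disk image name are just illustrative):
    ulimit -v 262144          # cap this shell’s virtual memory at 256 MB (ulimit -v takes KB)
    qemu-kvm -m 256 disk.img  # boot a guest with only 256 MB of RAM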

    Also beneficial is testing with increased network latency like:
    tc qdisc add dev lo root handle 1:0 netem delay 20msec
    And to restore:
    tc qdisc del dev lo root
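
    (With the qdisc in place, ping -c 3 localhost should show round trips of roughly 40 ms, since the 20 ms delay applies to each direction through lo.)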

    It would also be cool to limit the CPU frequency.
    Lots of common CPUs provide dynamic control for “underclocking”.
    For example on my sandy bridge I can minimize the available speed with:
    cpupower frequency-set -f 1
    And to restore:
    cpupower frequency-set -g ondemand
    However, the min on my CPU is 800 MHz.
    There are other hacks like cpulimit (which repeatedly sends SIGSTOP, SIGCONT to a process), or better solutions like CFS bandwidth control:

    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/scheduler/sched-bwc.txt;hb=HEAD
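
    A minimal sketch using the cgroup (v1) interface, assuming the cpu controller is mounted at /sys/fs/cgroup/cpu (the group name and the 10% quota are made up for illustration):
    mkdir /sys/fs/cgroup/cpu/slowgroup
    echo 100000 > /sys/fs/cgroup/cpu/slowgroup/cpu.cfs_period_us   # 100 ms period
    echo 10000 > /sys/fs/cgroup/cpu/slowgroup/cpu.cfs_quota_us     # 10 ms per period, about 10% of one CPU
    echo $$ > /sys/fs/cgroup/cpu/slowgroup/tasks                   # move the current shell into the group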

    I’ve yet to try the above, but it seems like short-running operations would avoid throttling, and it’s only done in the scheduler. I wonder, could qemu at a lower level auto-insert NOPs based on the native frequency to bring the effective MIPS down?

    Also it would be useful to throttle file I/O. There only seem to be out-of-tree kernel options for this at present?

    http://lwn.net/Articles/265944/

    http://lwn.net/Articles/332934/

    Oh I see new libvirt support for I/O throttling:

    http://www.redhat.com/archives/libvir-list/2011-November/msg01391.html

    That maps to “-drive bps=…” in newer qemu.
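
    For example (a sketch; the image name and the 1 MB/s limit are made up for illustration):
    qemu-kvm -m 1024 -drive file=disk.img,bps=1048576   # throttle this disk to about 1 MB/s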

  7. wR

    A(n) (in)famous person once said you shouldn’t ever need more than 640K of RAM to do anything. While I don’t necessarily agree that more RAM is never needed, dev requests for ’128 gigs more RAM’ do still provoke a slight … gag reflex.
    I agree that software has come a long way in the past two decades. But as someone who has seen both the admin and developer sides of the coin, I do feel that we still have a long way to go in making our software efficient. Cheap RAM cannot be a solution in itself. :D

  8. “I should be happy, but the sad part of this is Linux now needs gigabytes of RAM to run properly.”

    No it doesn’t. Set up an install with a VT, fluxbox and emacs and it’ll easily use under half a gig.

    My mailserver VM is currently using 235MB of RAM. My webserver VM, 340MB. My IRC proxy VM, 110MB. All on F17.

    • rich

      I’ve got 64 MB Debian VMs. But clearly I ain’t talking about mail servers, I’m talking about running Firefox and friends:

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
      32615 rjones    20   0 2359m 1.9g 3128 S  0.0 24.5   1:20.95 gvfs-udisks2-vo
      28327 qemu      20   0 6472m 1.0g 4928 S  1.7 13.6  53:47.42 qemu-kvm
       7351 rjones    20   0 2173m 962m  28m S  3.3 12.4   1054:51 firefox
      32057 rjones    20   0 1673m  96m  53m S  0.3  1.2   4:48.64 vlc
       1505 root      20   0  220m  63m  30m S  1.3  0.8 206:26.48 Xorg
       2567 rjones    20   0  660m  47m  13m S  0.0  0.6   0:01.82 evince
       2274 rjones    20   0  593m  33m 9732 S  0.7  0.4  36:03.02 Terminal

      (The gvfs-udisks2-volume-monitor figure is due to a memory leak which has since been fixed.)
