Christian

Kildau

Network & Systems Architect


Welcome to my little blog. I'm mostly a techie over here, blogging about networking and system administration topics, but there will also be some travel reports from time to time…

How to Create your own ‘DynDNS’ Service

February 27, 2011, Christian Kildau, 2 Comments

First off: this is not DynDNS as you might know it from dyndns.org, and you can't use clients like ddclient. I'm using TSIG keys (generated with dnssec-keygen) and 'nsupdate'. You'll need to be familiar with BIND and some shell scripting… Also, I only got this working on *nix and have no intention of trying it on Windows.

Let’s start with what you have to do on your client:

$ dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom host1.dyn.example.org


Now copy Khost1.dyn.example.org.+157+39064.private to your server's config dir (on Debian: /etc/bind); for HMAC keys the .key and .private files hold the same secret. Define the key as follows:

key host1.dyn.example.org. {
        algorithm HMAC-MD5;
        secret "<put key from Khost1.dyn.example.org.+157+39064.private here>";
};
zone "dyn.example.org" {
        type master;
        file "master/dyn.example.org";
        allow-update { key host1.dyn.example.org.; };
};
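The <put key …> placeholder is the base64 secret on the Key: line of the .private file. It can be pulled out with a one-liner like this (the file contents below are faked for the demo; never publish a real secret):

```shell
# The .private file generated by dnssec-keygen carries the HMAC secret
# on its "Key:" line. Extract it for pasting into named.conf.
# (File contents faked for this demo.)
cat > Khost1.dyn.example.org.+157+39064.private <<'EOF'
Private-key-format: v1.2
Algorithm: 157 (HMAC_MD5)
Key: c3VwZXJzZWNyZXQ=
EOF
secret=$(awk '/^Key:/ { print $2 }' Khost1.dyn.example.org.+157+39064.private)
echo "$secret"   # → c3VwZXJzZWNyZXQ=
```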


This allows everyone holding the Khost1.dyn.example.org.+157+39064.private key to update the zone 'dyn.example.org'. Feel free to find out how to do privilege separation on your own. Back to your client: since we can't use ddclient or similar clients, I wrote my own small script:

#!/bin/sh
# Update the dynamic record only when the pppoe0 address has
# changed since the last run.
dir=$(dirname "$0")
old_ip=$(cat "$dir/ip_cur.txt" 2>/dev/null)
new_ip=$(ifconfig pppoe0 | grep -E 'inet.[0-9]' |
        grep -v '127.0.0.1' | awk '{ print $2 }')

if [ "$old_ip" != "$new_ip" ]; then
        echo "$new_ip" >> "$dir/ip_log.txt"
        printf 'server <yourserver>\nzone dyn.example.org\nupdate delete host1.dyn.example.org. A\nupdate add host1.dyn.example.org. 60 A %s\nsend\n' \
                "$new_ip" > "$dir/ip_nsupdate_instructions.txt"
        nsupdate -k "$dir/Khost1.dyn.example.org.+157+39064.private" \
                "$dir/ip_nsupdate_instructions.txt" || exit 1
        echo "$new_ip" > "$dir/ip_cur.txt"
fi


My script gets the current IP address of pppoe0, compares it to the one from its previous run and executes 'nsupdate' if they differ. I write the update commands to a file first so the last set of instructions is easy to inspect. If 'nsupdate' fails (due to connection issues or the like), the script exits without touching ip_cur.txt, so it will retry on the next run. On success it writes the current IP into ip_cur.txt, so 'nsupdate' only runs on an actual address change and not every time you run the script. Add my script to crontab to run it once a minute or so…
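For what it's worth, nsupdate will also read its commands from standard input, so the same instructions can be composed with a heredoc instead of a temporary file. A sketch, with <yourserver> left as a placeholder and an example address filled in:

```shell
# The same instructions as in the script above, composed with a heredoc.
# <yourserver> is a placeholder; 203.0.113.7 is just an example address.
new_ip=203.0.113.7
cmds=$(cat <<EOF
server <yourserver>
zone dyn.example.org
update delete host1.dyn.example.org. A
update add host1.dyn.example.org. 60 A $new_ip
send
EOF
)
printf '%s\n' "$cmds"          # inspect what would be sent
# To actually send it (not run here):
# printf '%s\n' "$cmds" | nsupdate -k Khost1.dyn.example.org.+157+39064.private
```

Piping the block into `nsupdate -k <keyfile>` should be equivalent to pointing it at the instructions file.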

* * * * * ip_update.sh

How to Set up a ‘hidden primary’ DNS

February 27, 2011, Christian Kildau, 0 Comments

I just had to guide a friend of mine through the setup of a 'hidden primary' (or 'hidden master') via email, so I thought I'd also post a quick summary here to keep my blog alive.

First off: a 'hidden primary' setup uses one server for all zone-file changes that isn't listed anywhere and doesn't get any queries from clients, plus two or more 'slaves' that do the actual work. Have a look at this example zone file:

$ORIGIN unixhosts.org.
unixhosts.org. IN SOA amy.unixhosts.org. hostmaster.unixhosts.org. (
                                         201102111       ; serial
                                         3h              ; refresh
                                         1m              ; retry
                                         1w              ; expire
                                         1m)             ; minimum

 IN NS ns.inwx.de.
 IN NS ns2.inwx.de.
 IN NS ns3.inwx.de.

The host amy.unixhosts.org is my 'hidden primary'. As you can see, it's not listed as an NS, so it won't get queries from actual client resolvers. ns.inwx.de, ns2.inwx.de and ns3.inwx.de are my name-servers for this zone, configured as slaves.

The ‘hidden primary’ config looks like:

zone "unixhosts.org" {
        type master;
        file "master/unixhosts.org";
        allow-transfer { unixhosts; inwx; };
        also-notify { 10.0.1.1; 10.0.2.1; 10.0.3.1; };
};

A 'slave' config, by contrast, looks like:

zone "unixhosts.org" {
        type slave;
        file "slave/unixhosts.org";
        masters { 10.0.0.1; };
        allow-transfer { clients; };
};
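The unixhosts, inwx and clients names in the allow-transfer statements are ACLs that must be defined earlier in named.conf (before they are referenced). A minimal sketch; the address ranges are placeholders, not my real ones:

```
acl "unixhosts" { 10.0.0.0/24; };
acl "inwx"      { 10.0.1.1; 10.0.2.1; 10.0.3.1; };
acl "clients"   { 192.0.2.0/24; };
```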

If your infrastructure isn't large enough to take responsibility for three public DNS servers, you might want to have a look at InterNetworX. I've been running their servers as 'slaves' for a few months now. Their support team is great and I haven't had a single issue!

Trying Xen 4.0 on Debian 6.0 aka Squeeze

February 25, 2011, Christian Kildau, 1 Comment

I have a rather mixed history with all these virtualization techniques… I started with ranting about Xen and Ubuntu here on the blog, migrated to KVM and Ubuntu, and am now considering moving back to Xen… on Debian.

Recently I needed to install Xen on one of the machines in our lab at work. KVM was not an option, because the system (a dual-Xeon with HT) didn't have hardware virtualization support. When I last used it, Xen 3 was a pain in the ass with its patched, old kernel, and fully-virtualized guests didn't perform well. But Xen 4 now has support in the upstream kernel, so I thought I'd give it a try… Installation went fine using aptitude and everything got set up right, but there seems to be a bug with VGA output: I didn't get a login prompt or any init-script output until I removed 'quiet' from the kernel's bootloader options. This seems to be hypervisor-related, as it does work with the Xen kernel but without the hypervisor beneath it. So, if all you get is something like

ERROR: Unable to locate IOAPIC for GSI 9

try removing quiet from your bootloader configuration…
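On Squeeze with GRUB 2, that means dropping quiet from the default kernel command line and regenerating the config. A sketch, assuming the stock grub-pc setup:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=""   # was "quiet"
```

Then run update-grub and reboot for the change to take effect.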

MacBook (Pro) and the OCZ Vertex 2

February 18, 2011, Christian Kildau, 2 Comments

Well… I know it’s very silent over here… not just lately. :/ Anyhow…

I recently upgraded my 2010 MacBook Pro with a 120 GB SSD; I had already installed an OCZ SSD in my Mac Mini a couple of months ago… Everything runs fast and smooth and the MBP's battery runtime is now even more awesome, BUT… the hell! Hibernate is broken! It's a known bug: Mac OS X just kernel panics on wake-up! OCZ has been promising a fix for six months now, or maybe even longer, I don't know. There is a thread over at the OCZ forums, but it's closed by the ops… lol!

I wasn't aware of this issue until I ran into it myself. Maybe this post keeps someone from buying the OCZ; it might be worth waiting for the Intel G3 SSDs. But hey… I now get about 10 h of runtime with my MBP, so I shouldn't need Hibernate anyway.

UPDATE: I'm also having an issue where the Vertex 2 isn't recognized when my MBP goes to sleep and I wake it up again right away. A reboot doesn't fix it; it just doesn't boot. Powering it off for 5 minutes does fix it! Weird…

Mac Mini and the OCZ Vertex 2

October 3, 2010, Christian Kildau, 1 Comment

Last week I upgraded my early 2009 Mac Mini (Core2Duo, 8 GB RAM, 320 GB 7200 rpm HDD) with an SSD. I do heavy multitasking, and my HDD was just slowing me down. I did some research and narrowed it down to the Intel X25-M G2 and the OCZ Vertex 2 (a SandForce-based SSD, which works quite differently from the Intel one). I went with the OCZ Vertex 2 E 60 GB… I am no expert in this SSD stuff, but what I've figured out so far is that Mac OS X 10.6 (Snow Leopard) doesn't support TRIM, which should not be a big problem with current SSDs under Mac OS X from what I've read and noticed so far.

The installation itself was as “easy” as I described earlier.

I then booted from the Snow Leopard installation DVD, selected Restore from Time Machine Backup, and 30 min later my system was back up. As a side note: you can restore from a bigger HDD to a smaller SSD with no problems (as long as your data fits, of course), but you will need to format the SSD using Disk Utility first. Otherwise the Restore Wizard will tell you that your backup doesn't fit, even though it formats the disk again later on anyway…

(I did re-install yesterday because of some weird issues with graphics performance, but that had nothing to do with the SSD or the restore process. I had been getting annoying lags when scrolling in Lightroom or Firefox, for example, ever since the graphics driver update for Snow Leopard a few months ago…)

Everything works smoothly. Boot time has dropped from 1 min 15 s to about 30 s, OpenOffice now starts in 2 s instead of 20 s, and Lightroom switches quickly between its modules… My average write speed is around 90 MB/s, average read speed around 150 MB/s, both for a single stream; it should be much higher with multiple streams combined. But what's more important is the very low access time.

After about one week the Vertex 2 looks like a good teammate for the Mini.

P.S. It looks like Sunday is a good day for scheduling my posts from now on.

How to Fix “The file server has closed down” issues in Mac OS and netatalk

August 23, 2010, Christian Kildau, 4 Comments

Netatalk versions older than 2.1.3 had some issues with the TCP/IP stack on Linux, which resulted in errors like the "The file server has closed down" message in the Finder.

Luckily they seem to have fixed this in 2.1.3, as the ChangeLog states: "fix a serious error in networking IO code".

So the solution is as easy as upgrading. I am running Ubuntu, but two months after netatalk 2.1.3 was released, they don't even have it in unstable. Lucky you, Gentoo users! I needed to fix this quickly as it started to disrupt my workflow, and sadly I currently don't have the time to dig into the packaging system of Debian or Ubuntu, so I looked up Debian's configure options and just compiled from source:

cp -a /etc/netatalk/ ~
sudo aptitude purge netatalk
sudo apt-get build-dep netatalk
wget -O netatalk-2.1.3.tar.bz2 http://sourceforge.net/projects/netatalk/files/netatalk/2.1.3/netatalk-2.1.3.tar.bz2/download
tar xjvf netatalk-2.1.3.tar.bz2
cd netatalk-2.1.3
./configure --with-shadow --enable-fhs --enable-tcp-wrappers --enable-timelord --enable-overwrite --with-pkgconfdir=/etc/netatalk --enable-krb4-uam --enable-krbV-uam --with-cnid-dbd-txn --with-libgcrypt-dir --with-cracklib=/var/cache/cracklib/cracklib_dict --enable-debian --disable-srvloc --enable-zeroconf --with-ssl-dir --enable-pgp-uam --prefix=/usr/local/netatalk/
make
sudo make install
sudo mv ~/netatalk /etc/
sudo /etc/init.d/netatalk start

This saves a copy of your running netatalk configuration to your home directory, removes netatalk, installs the build dependencies, downloads netatalk 2.1.3 from SourceForge, extracts, configures, builds and installs it, restores the configuration and starts it as usual.

I've been running netatalk 2.1.3 for a week now and the error seems to be gone.

If you know how to easily create a Debian package, feel free to post in the comments.

Empty Trash issues in Snow Leopard

May 30, 2010, Christian Kildau, 0 Comments

I recently had many issues with Mac OS X 10.6's Trash. The problem is that when you empty the Trash in Snow Leopard, Finder sometimes can't erase all items because some of them are still in use. The funny part is that most of the time it's Finder itself that is still using the items! I haven't found a solution so far, but there are at least two workarounds which don't require logging out or rebooting.

The first one is to force a relaunch of Finder via Apple -> Force Quit and then try it again.

If that doesn’t help it gets more complicated. You will need to open Terminal.app or any other terminal emulator.

Then type lsof | grep <yourfile> ('ps auxw' only lists processes; 'lsof' lists open files, which is what the output below shows). It will look something like:

$ lsof | grep Scan.pdf
Finder 169 chrisk  13r REG 14,2 286 1417024 /Users/chrisk/.Trash/Scan.pdf/..namedfork/rsrc

This might look complicated, but it's actually simple: the first column is the name of the application that is using your file, the second column is that application's PID (process ID), the third column shows the username, and the last column shows the path to your file in the Trash folder.
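If you want to script this, the PID is just the second whitespace-separated column. A throwaway one-liner, using the example line from above pasted as a string:

```shell
# Extract the PID (second column) from the example line above.
line='Finder 169 chrisk  13r REG 14,2 286 1417024 /Users/chrisk/.Trash/Scan.pdf'
pid=$(echo "$line" | awk '{ print $2 }')
echo "$pid"   # → 169
```

That 169 is the same PID used in the kill example that follows.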

Now use 'kill' and the application's PID to terminate it.

$ kill 169

You should be able to empty your trash again.

How to Set up a Fast VNC Alternative for Remote Desktop to a Mac using NoMachine

March 24, 2010, Christian Kildau, 8 Comments

I am a very happy Mac OS user with a Mac mini and a MacBook Pro coming soon, but one thing I really miss on Mac OS X is a fast and standards-based remote desktop solution. The VNC server built into Mac OS X isn't really compatible with all clients, and I still haven't figured out whether it's possible to run it with a different resolution and color depth than the real screen!

But I recently re-discovered a solution I got to know back in my Linux desktop days: NoMachine. You'll need a server running a recent Linux distribution or OpenSolaris, which will act as a kind of proxy. The setup is a bit complex, but it does work well. I'll show you how to do it on Ubuntu Lucid.

First go to http://www.nomachine.com/select-package.php?os=linux&id=1, select your architecture and download all three files: client, node and server.

Then install them in the following order, fix the missing dependencies and install a VNC viewer:

sudo dpkg -i nxclient_3.4.0-7_x86_64.deb
sudo dpkg -i nxnode_3.4.0-11_x86_64.deb
sudo dpkg -i nxserver_3.4.0-12_x86_64.deb
sudo aptitude -f install
sudo aptitude install xvnc4viewer vnc4-common

I hope you already have PasswordAuthentication no in your sshd_config to disable password authentication and only allow key-based authentication; it's really advisable. You'll need to tweak nxserver a bit to get it working with key-based auth. Edit /usr/NX/etc/server.cfg to…

EnablePasswordDB = "1"


…edit the following line in /usr/NX/etc/node.cfg to enable VNC…

CommandStartRFB = "/usr/bin/vncviewer -fullscreen"


…create a key for your key-based authentication and restart nxserver.

sudo /usr/NX/bin/nxserver --keygen
sudo service nxserver restart


Your new key is placed at /usr/NX/share/keys/default.id_dsa.key. Copy it to the device you want to connect from using scp or similar tools. Now all you need to do is enable the users you want in nxserver:

sudo /usr/NX/bin/nxserver --useradd <user>


This enables the user in NX’s database and copies the previously generated key to the user’s authorized_keys file.

Now just enable VNC on your Mac: go to "System Preferences", select "Sharing" and enable "Screen Sharing".

Now you'll need to configure your client.

Nginx or Apache?

March 21, 2010, Christian Kildau, 0 Comments

I recently discovered nginx when I was thinking about replacing apache2 as the reverse-proxy that adds SSL and authentication to my internal webserver. I finally chose nginx, and it's now running on my freshly installed OpenBSD 4.7 gateway. I chose it for its straightforward configuration syntax and its much smaller codebase, which means it should be faster and have fewer security flaws. The documentation is great, too. Plus, nginx seems to be the rising star on the horizon of webservers: many large sites are already running it as their reverse-proxy/load-balancer according to this article.

For me, nginx runs much faster than apache2: where apache2 gave about 14 MB/s for a single download session, nginx gives me 23 MB/s (it's a slow Intel Atom machine). Here's my configuration. But since the nginx docs are that good, you don't need any how-tos; just RTFM:

user _nginx;
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;
    access_log off;
    error_log off;
    server {
        listen 443;
        ssl on;
        server_name ext.example.org;
        ssl_certificate     ext.example.org.crt;
        ssl_certificate_key ext.example.org.key;
        ssl_session_timeout 5m;
        ssl_protocols       SSLv3 TLSv1;
        
        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://int.example.org;
            auth_basic "int.example.org";
            auth_basic_user_file /etc/nginx/htpasswd;
        }
    }
}


I just love this thing. Maybe I’ll replace apache2 on my internal webserver, too.

How to Setup KVM on Ubuntu Lucid

March 19, 2010, Christian Kildau, 9 Comments

More than a year ago I wrote an article about Xen on Ubuntu Intrepid with the intention of blaming Ubuntu, and I clearly said that I wouldn't use Ubuntu anymore. That article turned out to be the most popular one on my blog, maybe because the Ubuntu community links directly to it. Then, last summer, I wrote an article about alternatives to Xen, but I decided to wait and stay with Xen on my home server in the meantime. (Please keep in mind, I only use this for my private setups!) Last week I upgraded my server's hardware and also wanted to re-install it.

Xen still hasn't made it into the vanilla kernel; it might make it into 2.6.34 or .35, but even if it does, I think it's not going to be close to production-ready. Plus, most distributions release their next version in the coming weeks/months and are already frozen, so they definitely will not ship with Xen. Well, the only real alternative is KVM. I didn't like the idea of using KVM for a long time, but since almost every distribution now features KVM as its virtualization technique, I went with it. I also went with Ubuntu again (yeah, blame me!). Why? Because their next release has long-term support, and I won't have the time to upgrade it in the next 12-18 months. And what shall I say… I like it. Installation was kinda tricky on a software RAID 0, but I was installing a development release one week before the first beta… and in the end it did work.

The server runs KVM now, fast and stable, with four virtual machines on it. Installation of the guests using virt-install and/or ubuntu-vm-builder was much easier and ended up with working VMs out of the box, whereas xen-create-image produced an unusable image on Intrepid, because the default console never showed up without tweaks. libvirt is also nice if you need it, but I really want to point out that you can run KVM without libvirt, just with the 'kvm' command!

I tagged this article 'How-To', but there are already many good KVM guides out there, so I won't write yet another one. I'll just post a few hints to get KVM running with bridged networking using libvirt.

First of all, I removed /etc/libvirt/qemu/networks/default.xml to disable the dnsmasq features of libvirt. Then I created an LVM volume group where I wanted to place my machines, but you can also use simple image files on your filesystem. The next thing I did was set up a bridge in /etc/network/interfaces:

auto br1
iface br1 inet dhcp
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0


You can now simply create your virtual machine with this command:

virt-install -n $hostname -r 512 -c /home/shared/apps/os/ubuntu/lucid-server-amd64.iso --disk path=/dev/virtdisks/bender --network bridge=br1 --vnc --vnclisten=0.0.0.0 --noautoconsole --os-type linux --os-variant ubuntuLucid --accelerate


Now connect to your host using VNC and install as usual. Another way is to use 'ubuntu-vm-builder', but I simply didn't try it… Make sure you limit VNC access to localhost in /etc/libvirt/qemu/$hostname.xml after installation if your network is insecure.

To make your domain autostart on boot use:

virsh autostart $hostname


This creates a symlink to the domain's XML configuration in /etc/libvirt/qemu/autostart/.

It's as simple as that, and way easier than patching a kernel for Xen and all these things. I would have really loved to see Xen in the vanilla kernel a year ago or so, but it didn't happen, and KVM works well enough for me by now… plus you get the benefit of working power management.

Take care.
