No Huddle Offense

"Individual commitment to a group effort-that is what makes a team work, a company work, a society work, a civilization work."

OpenIndiana zones & DTrace

July 6th, 2011

Let’s assume we want to create a Solaris zone called foo on an OpenIndiana box. This post will walk you through all the steps necessary to bootstrap and configure the zone, so it is ready to use without any user interaction. Also briefly discussed is how to limit the resources a zone can consume.

Eight steps are included in this mini tutorial:

  1. Step 1 – Create the ZFS dataset for your zones
  2. Step 2 – Configure the zone
  3. Step 3 – Sign into the zone
  4. Step 4 – Delete and unconfigure the zone
  5. Step 5 – Limit memory
  6. Step 6 – Use resource pools
  7. Step 7 – Use the fair-share scheduler
  8. Step 8 – Some DTrace fun

Step 1 – Create the ZFS dataset for your zones

First a dataset is created and mounted at /zones; compression and deduplication are activated for it. The zone’s home is created as a child dataset so a quota can be set on it, giving the zone a space limit of 10 GB.

zfs create -o compression=on rpool/export/zones
zfs set mountpoint=/zones rpool/export/zones
zfs set dedup=on rpool/export/zones

# create the zone's home as its own dataset so it can carry a quota
zfs create rpool/export/zones/foo
chmod 700 /zones/foo

zfs set quota=10g rpool/export/zones/foo
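
To double-check the settings, zfs get can list all three properties for the zone’s dataset (a quick verification, not part of the original walkthrough):

zfs get compression,dedup,quota rpool/export/zones/foo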

Step 2 – Configure the zone

The zone is configured so that it has the IP 192.168.0.160 (nope – DHCP doesn’t work here :-)) and uses the physical device rum2. Otherwise the configuration is pretty straightforward. (TODO: use Crossbow)

zonecfg -z foo "create; set zonepath=/zones/foo; set autoboot=true; \
 add net; set address=192.168.0.160/24; set defrouter=192.168.0.1; set physical=rum2; end; \
 verify; commit"
zoneadm -z foo verify
zoneadm -z foo install
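
At this point the zone is installed but not yet booted. zoneadm can list all zones and their current state:

zoneadm list -cv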

To ensure that everything is ready to use when we boot the zone, without any additional setup, a file called sysidcfg is placed in the zone’s /etc. It makes sure that all necessary parameters, like the root password, the network, and the keyboard layout, are configured automatically at boot. The host’s resolv.conf is also copied into the zone (this might not be necessary if you have a properly set up DNS server; then you can configure that server in the sysidcfg file. Mine does not know the hostname foo, which is why I do it this way), and a DNS-aware nsswitch template is put in place so the zone will actually use resolv.conf. Finally the zone is booted…

echo "
name_service=NONE
network_interface=PRIMARY {hostname=foo
                           default_route=192.168.0.1
                           ip_address=192.168.0.160
                           netmask=255.255.255.0
                           protocol_ipv6=no}
root_password=aajfMKNH1hTm2
security_policy=NONE
terminal=xterms
timezone=CET
timeserver=localhost
keyboard=German
nfs4_domain=dynamic
" &> /zones/foo/root/etc/sysidcfg

cp /etc/resolv.conf /zones/foo/root/etc/
# overwrite the nsswitch.files template; sysidcfg installs it as nsswitch.conf
cp /zones/foo/root/etc/nsswitch.dns /zones/foo/root/etc/nsswitch.files

zoneadm -z foo boot

To create a password hash you can use the power of Python – the old way of copying passwords from /etc/shadow doesn’t work on newer Solaris boxes, since the value of CRYPT_DEFAULT is set to 5 in the file /etc/security/policy.conf. The resulting DES hash can be pasted as the value of root_password in the sysidcfg file:

python -c "import crypt; print crypt.crypt('password', 'aa')"

Step 3 – Sign into the zone

Now zlogin or ssh can be used to access the zone. Note that the commands mpstat and prtconf will show that the zone has the same hardware configuration as the host box (zfs list, however, will show that disk space is already limited). In the next steps we want to limit CPU and memory as well…
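
For example (assuming the zone has finished booting):

zlogin foo          # interactive root shell inside the zone
zlogin foo mpstat   # or run a single command non-interactively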

Step 4 – Delete and unconfigure the zone

First we will delete the zone foo again:

zoneadm -z foo halt           # stop the zone
zoneadm -z foo uninstall -F   # remove its files; -F skips the confirmation prompt
zonecfg -z foo delete -F      # remove its configuration

Step 5 – Limit memory

Following the steps above, just change the configuration of the zone and add the capped-memory option. In this example it limits the memory available to the zone: running prtconf will show that the zone only has 512 MB of RAM, while mpstat will still show all CPUs of your host box.

zonecfg -z foo "create; set zonepath=/zones/foo; set autoboot=true; \
 add net; set address=192.168.0.160/24; set defrouter=192.168.0.1; set physical=rum2; end; \
 add capped-memory; set physical=512m; set swap=512m; end; \
 verify; commit"

Step 6 – Use resource pools

Using resource pools it is possible to give a zone a pool which has only one CPU assigned. Use the poolcfg command to configure a pool called pool1 (the pools facility has to be enabled first):

pooladm -e   # enable the resource pools facility
pooladm -s   # save the active configuration to /etc/pooladm.conf

poolcfg -c 'create pset pool1_set (uint pset.min=1; uint pset.max=1)'
poolcfg -c 'create pool pool1'
poolcfg -c 'associate pool pool1 (pset pool1_set)'

pooladm -c   # activate the configuration from /etc/pooladm.conf

To restore the old pool configuration, run ‘pooladm -x’ and ‘pooladm -s’. Running pooladm without arguments displays the currently active configuration.

Now just configure the zone and associate it with the pool:

zonecfg -z foo "create; set zonepath=/zones/foo; set autoboot=true; \
 set pool=pool1; \
 add net; set address=192.168.0.160/24; set defrouter=192.168.0.1; set physical=rum2; end; \
 add capped-memory; set physical=512m; set swap=512m; end; \
 verify; commit"

Running mpstat and prtconf in the zone will now show only one CPU and 512 MB of RAM.

Step 7 – Use the fair-share scheduler

If you have several zones running in one pool, you will want the pool to use the fair-share scheduler (FSS), so that a more important zone gets more shares:

poolcfg -c 'modify pool pool_default (string pool.scheduler="FSS")'
pooladm -c
priocntl -s -c FSS -i class TS   # move all time-sharing processes into the FSS class
priocntl -s -c FSS -i pid 1      # move init as well
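
Whether processes really ended up in the new class can be checked with ps, which knows a class output column; dispadmin can make FSS the default across reboots:

ps -e -o class,pid,comm | grep FSS | head   # processes now in the FSS class
dispadmin -d FSS                            # set FSS as the default scheduling class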

And during the zone configuration, define the rctl option. This example gives the zone two CPU shares:

zonecfg -z foo "create; set zonepath=/zones/foo; set autoboot=true; \
 add net; set address=192.168.0.160/24; set defrouter=192.168.0.1; set physical=rum2; end; \
 add capped-memory; set physical=512m; set swap=512m; end; \
 add rctl; set name=zone.cpu-shares; add value (priv=privileged,limit=2,action=none); end; \
 verify; commit"

Step 8 – Some DTrace fun

DTrace can ‘look’ into the zones. For example, to let DTrace show the files which are opened by processes within the zone foo, simply add the predicate ‘zonename == "foo"’:

pfexec dtrace -n 'syscall::open*:entry / zonename == "foo" / \
  { printf("%s %s",execname,copyinstr(arg0)); }'

I was researching this stuff to create a Python module to configure and bootstrap zones, so I can monitor the zones and their previously created SLAs.

Install and Autoconfigure an OpenSolaris zone with ZFS dedup

June 7th, 2010

This is a simple script which will set up an OpenSolaris zone. After installation it is configured automatically using the sysidcfg file, and after running the script you will be logged in automatically. I use this script (slightly modified) to set up a complete Platform LSF test cluster…

It features the following setup:

#!/usr/bin/bash
zfs create rpool/export/zones
zfs set mountpoint=/zones rpool/export/zones
zfs set dedup=on rpool/export/zones

mkdir /zones/lsf_zone
chmod 700 /zones/lsf_zone

zonecfg -z lsf_zone "create; set zonepath=/zones/lsf_zone; set autoboot=false; add net; set address=192.168.0.160/24
; set defrouter=192.168.0.1; set physical=iwh0; end; verify; commit"

zoneadm -z lsf_zone verify
zoneadm -z lsf_zone install

zoneadm -z lsf_zone ready   # mounts the zone's root so files can be placed in it
touch /zones/lsf_zone/root/etc/sysidcfg

echo "name_service=NONE
system_locale=C
timeserver=localhost
timezone=CET
terminal=xterm
security_policy=NONE
nfs4_domain=dynamic
network_interface=primary {dhcp protocol_ipv6=no}" &> /zones/lsf_zone/root/etc/sysidcfg

zoneadm -z lsf_zone boot
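
The automatic login at the end is not part of the listing above; a plain zlogin as the last line of the script does the trick:

zlogin lsf_zone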