
Eucalyptus Administrator's Guide (1.5.2)

This guide is meant for people interested in installing Eucalyptus on their resources: anything from a laptop to a set of clusters. (If you are trying to use an existing Eucalyptus installation, you may be more interested in the User's Guide.)

Installing Eucalyptus

[Image: Eucalyptus architecture diagram (architecture.png)]

A Eucalyptus cloud setup consists of three components: the cloud controller (CLC), the cluster controller (CC), and node controller(s) (NCs). The cloud controller is a Java program that offers a Web-services interface as well as a Web interface to the outside world. In addition to handling incoming requests, the cloud controller performs high-level resource scheduling and system accounting, and implements the S3-compatible bucket-based storage (Walrus) and EBS-style block-based storage. The cluster controller (for cluster-level scheduling and network control) and the node controller (for hypervisor control) are written in C and deployed as Web services inside Apache.

Communication among these three types of components goes over SOAP with WS-Security. There is one cluster controller per cluster, running on the head node, and one node controller on each compute node. So, if you are installing Eucalyptus on one cluster, one cloud controller and one cluster controller should be deployed on the head node, and one node controller should be deployed on each compute node.

Before you proceed with the installation, be sure to take a look at the list of Eucalyptus's prerequisites. If you are upgrading from a previous version of Eucalyptus, please follow the instructions in the Upgrade Document.

Eucalyptus can be installed from source or using a set of packages (RPM and DEB). The former method is more general and should work on practically any Linux system; the latter works on the distributions we support (as of 1.5.2: Ubuntu 9.04, Debian squeeze/lenny, CentOS 5.3, and openSUSE 11). Furthermore, we have special advice for those using Rocks-based clusters.

If you run into any problems, be sure to check the troubleshooting guide for solutions to commonly encountered problems.

Eucalyptus Prerequisites

What follows is a comprehensive list of dependencies that must be satisfied before building Eucalyptus or running it. While we provide distribution-specific installation instructions that help satisfy these dependencies, this list should be useful if you are installing or building Eucalyptus on an unsupported distribution.

1. For compiling from source

   * C compilers
   * Java Development Kit (JDK) version 1.6 or above
   * Apache ant 1.6.5 or above
   * libc development files
   * pthreads development files
   * libvirt development files
   * Axis2C and rampart development files (included with Eucalyptus)
   * Curl development files
   * openssl development files
   * Optional: zlib development files

2. For running Eucalyptus

There are a few different Eucalyptus components that run on either a cluster 'front-end' or a cluster 'node', and the run-time dependencies differ between the two. One physical machine can play the role of both the front-end and a node.

Front-end run-time dependencies

   * Java 6 is needed by the Eucalyptus components running on the front end. Note that GNU Compiler for Java (gcj), included by default with some Linux distributions, is not sufficient. Make sure that your JAVA_HOME environment variable is set to the location of your JDK.
   * Apache ant is needed to run the Cloud Controller.
   * Perl is used by helper scripts
   * The head node must run a server on port 25 that can deliver or relay email messages to cloud users' email addresses. This can be Sendmail, Exim, or Postfix, or even something simpler, given that this server does not have to receive incoming mail. Many Linux distributions satisfy this requirement out of the box. To test whether you have a properly functioning mail relay for localhost, try to send email to yourself from the terminal using "mail" (see the example after this list).
   * Dependencies for network support differ depending on the mode used (see Eucalyptus Networking for details). For full functionality satisfy all of them:
         o For all modes:
               + iproute and iptables packages (ip and iptables commands must work)
         o For all modes except SYSTEM:
               + DHCP Server compatible with ISC DHCP Daemon version 3.0.X (dhcp3-server)
         o For MANAGED and MANAGED-NOVLAN modes:
               + bridge-utils package (brctl command must work)
         o Additionally, for MANAGED mode:
               + vlan package (vconfig command must work)
   * For persistent dynamic block storage (aka EBS) to work, the front end will need to have the following software packages installed:
         o lvm2 package (e.g., command lvm should work)
         o aoetools package. The aoe module needs to be loaded on the front end as well as all nodes (modprobe aoe). If your kernel does not have ATA-over-Ethernet support, you will have to add that.
         o vblade package
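
To sanity-check the mail relay mentioned above, a minimal test (assuming the "mail" command from mailx is installed; substitute a real user address for the placeholder) is:

echo "relay test" | mail -s "Eucalyptus relay test" youruser@example.com

If the message arrives, the relay requirement is satisfied.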

Node run-time dependencies

   * Perl scripts are invoked by the Node Controller
   * Two hypervisors are supported:
        1. Xen (version >= 3.0.x)
               o Furthermore, xen-utils package is needed (xm command must work)
        2. KVM
   * Dependencies for network support differ depending on the mode used (see Eucalyptus Networking for details). For full functionality satisfy all of them:
         o For all modes:
               + iproute and iptables packages (ip and iptables commands must work)
         o For MANAGED and MANAGED-NOVLAN modes:
               + bridge-utils package (brctl command must work)
         o Additionally, for MANAGED mode:
               + vlan package (vconfig command must work)
   * libvirt package (potentially with libvirtd, depending on hypervisor configuration)

All Eucalyptus components

   * You must be root to install and start Eucalyptus components (by default they will run under a different user after start). This document assumes that all commands will be executed as root.

Attention Rocks users: Eucalyptus 1.5.2 can be installed on a Rocks-based cluster of version 5 or higher. To satisfy the prerequisites, please install Java on the front-end and the xen roll in each of your virtual machine containers. The JDK installed by the java roll of the current version of Rocks is unfortunately insufficient, so you will need to install JDK 1.6.0 "manually". For our testing we used Sun's JDK, which can be found at http://java.sun.com/javase/downloads/index.jsp.

3. Distribution-specific examples

What follows is a superset of all packages necessary for building and running Eucalyptus on each supported distribution:

   * For openSUSE 11.1, run the following command to install all required dependency packages:

yast2 -i bzr python-paramiko make gcc ant apache2 apache2-devel \
  java-1_6_0-openjdk java-1_6_0-openjdk-devel libvirt-devel libcurl-devel \
  vlan dhcp-server bridge-utils ant-contrib ant-nodeps curl libvirt

   * For Ubuntu 9.04, run the following command to install all required dependency packages:

apt-get install bzr gcc make apache2-threaded-dev ant openjdk-6-jdk \
  libvirt-dev libcurl4-dev dhcp3-server vblade apache2 unzip curl vlan \
  bridge-utils libvirt-bin kvm

   * For CentOS 5.3, run the following command to install all required dependency packages:

yum install -y java-1.6.0-openjdk-devel ant ant-nodeps libvirt-devel \
  curl-devel httpd httpd-devel apr-devel openssl-devel dhcp

   * For Debian, run the following command to install all required dependency packages:

apt-get install gcc make apache2-threaded-dev ant openjdk-6-jdk \
  libvirt-dev libcurl4-dev dhcp3-server vblade apache2 unzip curl vlan \
  bridge-utils libvirt-bin kvm sudo

Please consult the distribution-specific pages for detailed installation instructions.

4. For interacting with Eucalyptus

To interact with Eucalyptus, you need to install EC2-compatible command-line tools. The instructions in the Eucalyptus documentation rely on the euca2ools command-line tools distributed by the Eucalyptus Team. Many other third-party tools can also be used for some of the tasks, as described on the ecosystem page.

Installation from distribution-specific binary packages

Choose a Linux distribution:

Installing Eucalyptus on CentOS 5.3

Eucalyptus can be installed on CentOS 5.3 from source, or by using binary RPM packages. This document details the steps required to install Eucalyptus from RPMs.

Download RPMs

Eucalyptus binary installation is broken up into several packages: the cloud controller (-cloud package), the cluster controller (-cc package), and the node controller (-nc package). To simplify installation, everything is bundled into a single "tarball" for convenience, wherein we include copies of third-party packages that Eucalyptus depends on (Rampart, Axis2C, many Java libraries).

Download the appropriate tarball from http://open.eucalyptus.com/downloads

   * For 32-bit machines, get eucalyptus-1.5.2-centos-i386.tar.gz
   * For 64-bit machines, get eucalyptus-1.5.2-centos-x86_64.tar.gz

Untar the bundle in a temporary location:

tar zxvf eucalyptus-1.5.2-*.tar.gz
cd eucalyptus-1.5.2-*

All other dependencies are present in the standard repositories.

Prerequisites

If you start with a standard CentOS installation, you will satisfy all Eucalyptus prerequisites with the following steps:

  1. Front-end, node, and client machine system clocks are synchronized (e.g., using NTP).
     yum install -y ntp
     ntpdate pool.ntp.org
  2. Each node has a complete, configured Xen installation that allows controlling the hypervisor via HTTP from localhost.
     yum install -y xen
     sed --in-place 's/#(xend-http-server no)/(xend-http-server yes)/' /etc/xen/xend-config.sxp
     sed --in-place 's/#(xend-address localhost)/(xend-address localhost)/' /etc/xen/xend-config.sxp
     /etc/init.d/xend restart

You will also need a kernel with xen support enabled.

  3. Front-end has Java, Apache ant, and a DHCP server (the DHCP server need not be configured, as Eucalyptus will invoke the binary with its own configuration)
     yum install -y java-1.6.0-openjdk ant ant-nodeps dhcp bridge-utils
  4. Firewall rules must permit the Eucalyptus components to communicate with one another, and clients to communicate with Eucalyptus. On the front-end, ports 8443, 8773, and 8774 must be open; on the node, port 8775 must be open. If you are planning on using Elastic IPs and/or Security Groups, you may want to consider disabling the firewall and using the Eucalyptus facilities for enabling custom firewall rules (see Network configuration for more information). On both the front-end and the nodes:

For example, from a text console:

   * run system-config-securitylevel
   * select Security Level: Disabled
   * select OK

From an X terminal:

   * run system-config-securitylevel
   * select 'Disabled' for 'Firewall'
   * select the 'SELinux' tab
   * select either 'Permissive' or 'Disabled' for SELinux Setting
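
If you prefer to keep the firewall enabled instead, iptables rules along the following lines (a sketch; tighten the source addresses to match your site policy) open the required ports:

# on the front-end
iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
iptables -I INPUT -p tcp --dport 8773 -j ACCEPT
iptables -I INPUT -p tcp --dport 8774 -j ACCEPT
service iptables save

# on each node
iptables -I INPUT -p tcp --dport 8775 -j ACCEPT
service iptables save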

Install RPMs on the front end

In the examples below we use x86_64, which should be replaced with i386 or i586 on 32-bit architectures.

Install the third-party dependency RPMs on the front end:

cd eucalyptus-1.5.2-rpm-deps-x86_64
rpm -Uvh aoetools-21-1.el4.x86_64.rpm \
    euca-axis2c-1.5.0-2.x86_64.rpm \
    euca-rampartc-1.2.0-1.x86_64.rpm \
    vblade-14-1mdv2008.1.x86_64.rpm
cd ..

Install the -cloud and -cc RPMs on the front end:

rpm -Uvh eucalyptus-1.5.2-1.x86_64.rpm \
    eucalyptus-cloud-1.5.2-1.x86_64.rpm \
    eucalyptus-gl-1.5.2-1.x86_64.rpm \
    eucalyptus-cc-1.5.2-1.x86_64.rpm

Install RPMs on the nodes

Install the dependency packages on each compute node:

cd eucalyptus-1.5.2-rpm-deps-x86_64
rpm -Uvh aoetools-21-1.el4.x86_64.rpm \
    euca-axis2c-1.5.0-2.x86_64.rpm \
    euca-rampartc-1.2.0-1.x86_64.rpm \
    euca-libvirt-1.5-1.x86_64.rpm \
    vblade-14-1mdv2008.1.x86_64.rpm
cd ..

Install the node controller RPM with dependencies on each compute node:

rpm -Uvh eucalyptus-1.5.2-1.x86_64.rpm \
    eucalyptus-gl-1.5.2-1.x86_64.rpm \
    eucalyptus-nc-1.5.2-1.x86_64.rpm

Post-Install Steps

The last step in the installation is to make sure that the user 'eucalyptus', which is created at RPM installation time, is configured to interact with the hypervisor through libvirt on all of your compute nodes. The easiest way to check this is to run the following command on each node:

su eucalyptus -c "virsh list"

The output of that command may include error messages (failed to connect to xend), but as long as it includes a listing of all domains (at least Domain-0), the configuration is in order.
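
For reference, a correctly configured node produces a listing along these lines (IDs and names will differ on your system):

 Id Name                 State
----------------------------------
  0 Domain-0             running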

Start up your Eucalyptus services as follows:

On the front-end:

/etc/init.d/eucalyptus-cloud start
/etc/init.d/eucalyptus-cc start

On the node:

/etc/init.d/eucalyptus-nc start

At this point you should be ready to go through the first-time configuration, as described in the configuration section.

Other Configuration Dependencies

If you are planning on compiling Eucalyptus, or running Eucalyptus with Elastic IPs, Security Groups and/or using Eucalyptus' ability to isolate VM networks using VLAN tagging, you'll have to install some extra dependencies. The complete list follows:

yum install -y java-1.6.0-openjdk-devel ant ant-nodeps libvirt-devel \
  curl-devel httpd httpd-devel apr-devel openssl-devel dhcp

Installing Eucalyptus (1.5.2) on Debian Lenny (5.0.1)

Eucalyptus can be installed on Debian Lenny using binary DEB packages.

Download DEBs

Eucalyptus binary installation is broken up into several packages: the cloud controller (-cloud package), the cluster controller (-cc package), and the node controller (-nc package). To simplify installation, everything is bundled into a single "tarball" for convenience, wherein we include copies of third-party packages that Eucalyptus depends on (Rampart, Axis2C, and Java libraries).

Download the appropriate tarball from http://open.eucalyptus.com/downloads

   * For 32-bit machines, get eucalyptus-1.5.2-lenny-i386.tar.gz
   * For 64-bit machines, get eucalyptus-1.5.2-lenny-amd64.tar.gz 

Untar the bundle in a temporary location and add the directory to your sources.list

tar zxvf eucalyptus-1.5.2-*.tar.gz
cd eucalyptus-1.5.2-*
su
echo deb file://${PWD} ./ >> /etc/apt/sources.list
apt-get update

NOTE: After installation feel free to remove the entry from sources.list.

Prerequisites

If you start with a standard Debian Lenny installation, you will satisfy all Eucalyptus prerequisites with the following steps:

  1. Front-end, node, and client machine system clocks are synchronized (e.g., using NTP).
  2. Firewall rules must permit the Eucalyptus components to communicate with one another, and clients to communicate with Eucalyptus.
         * NOTE: on the front-end, ports 8443, 8773, and 8774 must be open. On the node, port 8775 must be open.
  3. Ensure that locales are properly configured (use dpkg-reconfigure locales).

Node Controllers: eucalyptus-nc

  4. Each node has a complete, configured Xen installation.
         * RECOMMENDED: verify your Xen installation by manually bringing up a VM and testing that it has network connectivity using bridged networking.
         * EXAMPLE: /etc/xend/xend-config.sxp
           (xend-http-server yes)
           (xend-unix-server yes)
           (xend-unix-path /var/lib/xend/xend-socket)
           (xend-address localhost)
           (network-script network-bridge)
           (vif-script vif-bridge)
           (dom0-min-mem 196)
           (dom0-cpus 0)
           (vncpasswd )

Cloud Controller: eucalyptus-cloud

  5. Fix cacerts for openjdk-6-jdk (missing from the package).
         * Add non-free to your apt sources file /etc/apt/sources.list, for example:
           su -
           echo deb http://debian.osuosl.org/debian lenny non-free >> /etc/apt/sources.list
           apt-get update
         * Install sun-java6-jre and create link for cacerts
           su -
           apt-get install ca-certificates sun-java6-jre
           mkdir -p /etc/ssl/certs/java/
           ln -sf /etc/java-6-sun/security/cacerts /etc/ssl/certs/java/cacerts

Install DEBs on the front end

On the front end, where the cloud controller and cluster controller will run, install the -cloud and -cc DEBs:

aptitude install eucalyptus-cc eucalyptus-cloud

Install DEBs on the nodes

On the compute nodes, install the node controller DEB:

aptitude install eucalyptus-nc

Installing Eucalyptus on openSUSE 11

Eucalyptus can be installed on openSUSE 11 from source, or by using binary RPM packages. This document details the steps required to install Eucalyptus from RPMs.

Download RPMs

Eucalyptus binary installation is broken up into several packages: the cloud controller (-cloud package), the cluster controller (-cc package), and the node controller (-nc package). To simplify installation, everything is bundled into a single "tarball" for convenience, wherein we include copies of third-party packages that Eucalyptus depends on (Rampart, Axis2C, many Java libraries).

Download the appropriate tarball from http://open.eucalyptus.com/downloads

   * For 32-bit machines, get eucalyptus-1.5.2-opensuse-i386.tar.gz
   * For 64-bit machines, get eucalyptus-1.5.2-opensuse-x86_64.tar.gz

Untar the bundle in a temporary location:

tar zxvf eucalyptus-1.5.2-*.tar.gz
cd eucalyptus-1.5.2-*

All other dependencies are present in the standard repositories.

Prerequisites

If you start with a standard openSUSE installation, you will satisfy all Eucalyptus prerequisites with the following steps:

  1. Front-end, node, and client machine system clocks are synchronized (e.g., using NTP).
     sntp -P no -r pool.ntp.org
     yast2 -i ntp
     /etc/init.d/ntp restart
  2. Install all dependency packages that are required for Eucalyptus to run:
     # On the front-end:
     yast2 -i ant apache2 apache2-devel java-1_6_0-openjdk java-1_6_0-openjdk-devel libvirt libvirt-devel curl libcurl-devel vlan dhcp-server bridge-utils ant-contrib ant-nodeps
     # On the node:
     yast2 -i xen libvirt libcurl-devel vlan apache2
  3. Each node has a complete, configured Xen installation.
         * RECOMMENDED: verify your Xen installation by manually bringing up a VM and testing that it has network connectivity using bridged networking.
         * To set up bridged networking on your node, use the 'yast2' command and go through the following steps:
               o Network Devices
               o Network Settings
               o Select 'OK' to get past information box
               o Traditional Method with ifup
               o Overview
               o Add
               o Device Type: Bridge
               o Next
               o Bridged Devices: select eth0 (or the name of your primary interface)
               o Next
               o Continue
               o Ok
         * make sure that the libvirt daemon (libvirtd) is running and configured properly
               o /etc/init.d/libvirtd start
               o virsh list
  4. Firewall rules must permit the Eucalyptus components to communicate with one another, and clients to communicate with Eucalyptus.
         * NOTE: on the front-end, ports 8443, 8773, 8774 must be open. On the node, port 8775 must be open
         * If you are planning on using Elastic IPs and/or Security Groups, you may want to consider disabling the firewall and use Eucalyptus facilities for enabling custom firewall rules (see Network configuration for more information).
               o yast2 firewall startup manual
               o /etc/init.d/SuSEfirewall2_init stop
               o reboot

Install RPMs on the front end

In the examples below we use x86_64, which should be replaced with i586 on 32-bit architectures.

Install the third-party dependency RPMs:

cd eucalyptus-1.5.2-rpm-deps-x86_64
rpm -Uvh aoetools-25-2.49.x86_64.rpm \
    euca-axis2c-1.5.0-2.x86_64.rpm \
    euca-rampartc-1.2.0-1.x86_64.rpm \
    vblade-15-2.49.x86_64.rpm
cd ..

On the front end, where the cloud controller and cluster controller will run, install the -cloud and -cc RPMs:

rpm -Uvh eucalyptus-1.5.2-1.x86_64.rpm \
    eucalyptus-cloud-1.5.2-1.x86_64.rpm \
    eucalyptus-gl-1.5.2-1.x86_64.rpm \
    eucalyptus-cc-1.5.2-1.x86_64.rpm

Install RPMs on the nodes

Install the dependency packages on each node:

cd eucalyptus-1.5.2-rpm-deps-x86_64
rpm -Uvh aoetools-25-2.49.x86_64.rpm \
    euca-axis2c-1.5.0-2.x86_64.rpm \
    euca-rampartc-1.2.0-1.x86_64.rpm \
    vblade-15-2.49.x86_64.rpm
cd ..

On the compute nodes, install the node controller RPM with dependencies:

rpm -Uvh eucalyptus-1.5.2-1.x86_64.rpm \
    eucalyptus-gl-1.5.2-1.x86_64.rpm \
    eucalyptus-nc-1.5.2-1.x86_64.rpm

Post-Install Steps

The last step in the installation is to make sure that the user 'eucalyptus', which is created at RPM installation time, has rights to interact with the hypervisor through libvirt on all of your nodes. The easiest way to set this up is to run the following command on each node:

su eucalyptus -c '(sleep 1; echo foobar; echo always) | virsh list'

where you substitute your root password for 'foobar'. Alternatively, you can manually log in to each node, become the user 'eucalyptus' using 'su', run 'virsh list', enter your root password and finally enter 'always'.

Start up your Eucalyptus services as follows:

On the front-end:

/etc/init.d/eucalyptus-cloud start
/etc/init.d/eucalyptus-cc start

On the node:

/etc/init.d/eucalyptus-nc start

Other Configuration Dependencies

If you are planning on compiling Eucalyptus, or running Eucalyptus with Elastic IPs, Security Groups and/or using Eucalyptus' ability to isolate VM networks using VLAN tagging, you'll have to install some extra dependencies. The complete list follows:

yast2 -i bzr python-paramiko make gcc ant apache2 apache2-devel \
  java-1_6_0-openjdk java-1_6_0-openjdk-devel libvirt libvirt-devel curl \
  libcurl-devel vlan dhcp-server bridge-utils ant-contrib ant-nodeps

Installing Eucalyptus (1.5.2) on Debian squeeze/sid

Eucalyptus can be installed on Debian squeeze or sid using binary DEB packages. Squeeze has not been released yet, so things can change quickly and without warning.

Download DEBs

Eucalyptus binary installation is broken up into several packages: the cloud controller (-cloud package), the cluster controller (-cc package), and the node controller (-nc package). To simplify installation, everything is bundled into a single "tarball" for convenience, wherein we include copies of third-party packages that Eucalyptus depends on (Rampart, Axis2C, and Java libraries).

Download the appropriate tarball from http://open.eucalyptus.com/downloads

   * For 32-bit machines, get eucalyptus-1.5.2-squeeze-i386.tar.gz
   * For 64-bit machines, get eucalyptus-1.5.2-squeeze-amd64.tar.gz 

Untar the bundle in a temporary location and add the directory to your sources.list

tar zxvf eucalyptus-1.5.2-*.tar.gz
cd eucalyptus-1.5.2-*
su
echo deb file://${PWD} ./ >> /etc/apt/sources.list
apt-get update

NOTE: After installation feel free to remove the entry from sources.list.

Prerequisites

If you start with a standard Debian Squeeze installation, you will satisfy all Eucalyptus prerequisites with the following steps:

  1. Front-end, node, and client machine system clocks are synchronized (e.g., using NTP).
  2. Firewall rules must permit the Eucalyptus components to communicate with one another, and clients to communicate with Eucalyptus.
         * NOTE: on the front-end, ports 8443, 8773, and 8774 must be open. On the node, port 8775 must be open.
  3. Ensure that locales are properly configured (use dpkg-reconfigure locales) 

Node Controllers: eucalyptus-nc

  4. Each node has a complete, configured Xen installation.
         * RECOMMENDED: verify your Xen installation by manually bringing up a VM and testing that it has network connectivity using bridged networking.
         * EXAMPLE: /etc/xend/xend-config.sxp
           (xend-http-server yes)
           (xend-unix-server yes)
           (xend-unix-path /var/lib/xend/xend-socket)
           (xend-address localhost)
           (network-script network-bridge)
           (vif-script vif-bridge)
           (dom0-min-mem 196)
           (dom0-cpus 0)
           (vncpasswd )

Install DEBs on the front end

On the front end, where the cloud controller and cluster controller will run, install the -cloud and -cc DEBs:

aptitude install eucalyptus-cc eucalyptus-cloud

Install DEBs on the nodes

On the compute nodes, install the node controller DEB:

aptitude install eucalyptus-nc

Installing Eucalyptus (1.5.2) on Ubuntu Jaunty (9.04)

Eucalyptus can be installed on Ubuntu Jaunty using binary DEB packages.

Download DEBs

Eucalyptus binary installation is broken up into several packages: the cloud controller (-cloud package), the cluster controller (-cc package), and the node controller (-nc package). To simplify installation, everything is bundled into a single "tarball" for convenience.

Download the appropriate tarball from http://open.eucalyptus.com/downloads

   * For 32-bit machines, get eucalyptus-1.5.2-ubuntu-i386.tar.gz
   * For 64-bit machines, get eucalyptus-1.5.2-ubuntu-amd64.tar.gz 

Untar the bundle in a temporary location and add the directory to your sources.list

tar zxvf eucalyptus-1.5.2-*.tar.gz
cd eucalyptus-1.5.2-*
sudo -s
echo deb file://${PWD} ./ >> /etc/apt/sources.list
apt-get update

NOTE: After installation feel free to remove the entry from sources.list.

Prerequisites

If you start with a standard Ubuntu Jaunty installation, you will satisfy all Eucalyptus prerequisites with the following steps:

  1. Front-end, node, and client machine system clocks are synchronized (e.g., using NTP).
     ntpdate-debian -s
     apt-get install openntpd
  2. Firewall rules must permit the Eucalyptus components to communicate with one another, and clients to communicate with Eucalyptus.
         * NOTE: on the front-end, ports 8443, 8773, and 8774 must be open. On the node, port 8775 must be open.
  3. Your node machine(s) must be configured with a bridge as the primary interface, if running in SYSTEM networking mode (default). You must first uninstall or disable Network Manager (default with Ubuntu Desktop), then follow the procedure below (example): 

sudo apt-get install bridge-utils
sudo vi /etc/network/interfaces


Comment out any entry for your existing interfaces (eth0/eth1 etc) and add a bridge entry with your interfaces attached. For example, to have your bridge come up with all physical ethernet devices added to it, and have DHCP assign an address to the bridge:

auto br0
iface br0 inet dhcp
     bridge_ports all

For a static configuration with just eth0 attached (substitute your actual network parameters):

auto br0
iface br0 inet static
     address 192.168.12.20
     netmask 255.255.255.0
     network 192.168.12.0
     broadcast 192.168.12.255
     gateway 192.168.12.1
     dns-nameservers 192.168.12.1
     dns-search foobar foobar.com
     bridge_ports eth0

Finally, restart the network, either with '/etc/init.d/networking restart' or by rebooting the machine.

Install DEBs on the front end

On the front end, where the cloud controller and cluster controller will run, install the -cloud and -cc DEBs:

aptitude install eucalyptus-cc eucalyptus-cloud eucalyptus-common

Install DEBs on the nodes

On the compute nodes, install the node controller DEB:

aptitude install eucalyptus-nc eucalyptus-common

Eucalyptus (1.5.2) on a Rocks cluster

If you want to install Eucalyptus on a Rocks cluster, you can now follow the regular CentOS RPM instructions. The Cloud Controller and the Cluster Controller will need to be installed on your Rocks front-end, and the Node Controller will need to be installed on any of your Rocks nodes that have been configured as 'VM Containers'.

Eucalyptus will not run on a Rocks virtual cluster!

If you have previously installed Eucalyptus on your Rocks cluster, you should disable the old Eucalyptus rolls:

rocks disable roll eucalyptus


rebuild the Rocks distribution:

cd /home/install
rocks-dist dist

and finally reinstall the nodes as 'VM Containers'. You can then follow the CentOS RPM instructions to install and configure Eucalyptus.

Keep in mind that the java roll in Rocks V (and V.I) includes JDK version 1.5, which is not sufficient to run Eucalyptus. You have to install the 1.6 JDK. For our testing we used Sun's JDK, which can be found at http://java.sun.com/javase/downloads/index.jsp.

Upgrading to Eucalyptus 1.5.2 from 1.5.1, 1.4, and the Ubuntu Jaunty package

These instructions are for those who would like to upgrade to Eucalyptus 1.5.2 from a source-based or package-based 1.4, 1.5.1, or Ubuntu Jaunty installation. If you are still running 1.3, please follow the instructions for upgrading to 1.4 before following these instructions.

As a precaution, these instructions involve making a backup of the key state. The backup is made automatically during the RPM upgrade, but for DEB-based and source-based upgrades you need to create the backup manually. The last section of this document explains how to roll back to the previous installation using the backup.

Commands below assume that the $EUCALYPTUS variable points to the root of your current installation. After backing up the current installation, the same location will be reused for the 1.5.2 installation. If you want to install 1.5.2 somewhere else, adjust the commands accordingly.

Also, we assume that you set the $OLD_VAR variable as follows:

   * if upgrading from 1.5.1 or Ubuntu Jaunty, it should be var/lib/eucalyptus
   * if upgrading from 1.4, it should be var/eucalyptus
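
For example, for a package-based 1.5.1 installation rooted at /opt/eucalyptus (the paths here are illustrative):

export EUCALYPTUS=/opt/eucalyptus
export OLD_VAR=var/lib/eucalyptus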

1. Clean up Eucalyptus running state

   * Note the value of the "Walrus path" listed under the "Configuration" tab of the Web interface. (That is where all uploaded images and user buckets are located.)
   * Terminate all Eucalyptus instances
     euca-terminate-instances ... # (as admin)
   * Shut down Eucalyptus on all nodes:
     $EUCALYPTUS/etc/init.d/eucalyptus-nc stop
     $EUCALYPTUS/etc/init.d/eucalyptus-cc stop
     $EUCALYPTUS/etc/init.d/eucalyptus-cloud stop
     or, in 1.4:
     $EUCALYPTUS/etc/init.d/eucalyptus stop
   * Check for errant Eucalyptus processes on all nodes and kill them
     ps aux | grep euca
     kill -9 ...
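
A compact alternative for the last step (a sketch that assumes pkill is available and that no unrelated processes match the pattern):

pkill -9 -f eucalyptus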

2. Back up the current installation

If you are upgrading using RPMs (on CentOS or openSUSE), you can skip this section because the backup will be created for you automatically and placed in /root/eucalyptus-pre-1.5.2-rollback.tar.

If you are upgrading a source-based or DEB-based installation (Debian or Ubuntu), you may want to back up the database and the keys in case the upgrade does not go smoothly.

cd $EUCALYPTUS
rm -f $OLD_VAR/db/eucalyptus.lck
tar cvf /root/eucalyptus-pre-1.5.2-rollback.tar etc/eucalyptus $OLD_VAR/db $OLD_VAR/keys/*.p*
cp etc/eucalyptus/eucalyptus.conf /root/eucalyptus-pre-1.5.2-configuration

If you'd like to clean up a bit, you could remove the following two directories:

rm -rf log/eucalyptus run/eucalyptus

Since the "Walrus path" ($EUCALYPTUS/$OLD_VAR/bukkits by default) potentially contains a lot of data and that data is unlikely to be affected by the upgrade, we do not recommend backing it up.

3. Install Eucalyptus 1.5.2

   * If upgrading a source-based installation, follow the steps in the Source Code Installation section of the Administrator's Guide and, afterwards, return here.
   * If upgrading using binary packages, follow the steps in the installation instruction for a specific distribution:
         o CentOS 5.3
         o OpenSUSE 11
         o Debian Lenny 5.0
         o Debian Squeeze/sid
         o Ubuntu Jaunty 9.04
   * and, afterwards, return here.

4. Update the configuration

Between version 1.4 and 1.5 the configuration file acquired new parameters. So, if you are performing an upgrade from 1.4, you will need to reconcile the differences between the old and the new configuration file. You can perform this manually or you can start with an automatic conversion:

$EUCALYPTUS/usr/sbin/euca_conf -upgrade-conf /root/eucalyptus-pre-1.5.2-configuration $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf

If you upgraded from an older binary package, the command above was executed automatically during the upgrade.

Whether you simply copy over your previous configuration or perform the conversion, it is highly recommended to look over the new configuration file to ensure its correctness. Specifically, check the following variables:

   * *_PORT
   * NODES
   * INSTANCE_PATH
   * VNET_*
   * HYPERVISOR (should be "xen" if you are upgrading from 1.4).
   * EUCA_USER should be set according to the type of installation that you performed in step 3 above (running as root is easier to configure, but it requires one to use a specially compiled Apache).

Network parameters from 1.4 and 1.5.1 should continue to work. To learn about the new network mode (MANAGED-NOVLAN), see the Network Configuration section of the Administrator's Guide.

Ensure that the new configuration file on each compute node is also valid, either by performing the conversion on each node or by propagating a valid compute-node-specific file to all nodes.
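
For example, to push a converted file to every node (the hostnames here are placeholders):

for x in node01 node02 ; do
    scp $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf $x:$EUCALYPTUS/etc/eucalyptus/
done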

5. Restart Eucalyptus and verify the upgrade

   * Start NCs, the CC, and the CLC. Each one of those services has its own startup script in 1.5.x, so perform some combination of these on the appropriate machines:
     $EUCALYPTUS/etc/init.d/eucalyptus-nc start
     $EUCALYPTUS/etc/init.d/eucalyptus-cc start
     $EUCALYPTUS/etc/init.d/eucalyptus-cloud start
   * In a Web browser, load https://headnode:8443/ and log in as before. Verify that the user accounts and the images are there.
   * Important: The official default Walrus path as of 1.5 is $EUCALYPTUS/var/lib/eucalyptus. Whether you use the default location or not, be sure to verify that the Walrus Path setting under the Configuration tab of the Web interface matches the actual location of the buckets before running any instances or using buckets.
   * Verify that the nodes are back up and that they can run your old instances (if not, see the Troubleshooting section.)
     euca-describe-availability-zones verbose

6-a. Clean up old disk state

Once you are confident that the new installation is working, delete the old state on disk.

rm /root/eucalyptus-pre-1.5.2-rollback.tar

As the upgrade may have moved some directories (to comply with the FHS), you may have to remove some unnecessary directories by hand:

rm -rf $EUCALYPTUS/var/eucalyptus

6-b. Rolling back to an earlier installation

   * Stop Eucalyptus 1.5.2 processes, if any, on all nodes
   * If using binary packages, remove all Eucalyptus-related packages using the command appropriate for your distribution (e.g., rpm -e or apt-get remove). Depending on the failure, you might have to use the --nopreun option to rpm.
   * Download and install the old version of Eucalyptus on all nodes as discussed in the Administrator's Guide for the appropriate version (old guides are linked to from the Documentation page).
   * Copy the old state saved during the upgrade process.
     cd $EUCALYPTUS
     tar xf /root/eucalyptus-pre-1.5.2-rollback.tar
   * If installing from RPMs, copy the old configuration file:
     cp etc/eucalyptus/eucalyptus.conf.old etc/eucalyptus/eucalyptus.conf
   * Start Eucalyptus, as before

Eucalyptus Configuration (1.5.2)

This document describes the steps for configuring Eucalyptus after the software has been installed on all nodes (either from source or using binary packages). The instructions below assume that you have the $EUCALYPTUS variable set. For RPM-based installations, $EUCALYPTUS is /opt/eucalyptus/, while for DEB-based installations, $EUCALYPTUS is '/'. So set it appropriately:

export EUCALYPTUS=....
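
For example, on an RPM-based installation:

export EUCALYPTUS=/opt/eucalyptus

and on a DEB-based installation:

export EUCALYPTUS=/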

Eucalyptus comes with the euca_conf script for setting up the configuration file located in '$EUCALYPTUS/etc/eucalyptus/eucalyptus.conf'. Alternatively, you can change that file in a text editor instead of using 'euca_conf'.

1. Running Eucalyptus

A Eucalyptus installation consists of three types of components: the cloud controller (CLC), the cluster controller (CC), and node controller(s) (NCs). In the following instructions we assume that the CLC and CC are co-located on a machine that we will refer to as the front end, and that NCs run on compute nodes. The instructions will also work if one physical machine fulfills the role of both the front end and a compute node.

First, make sure that you have all of the runtime dependencies of Eucalyptus installed, based on your chosen set of configuration parameters. If there is a problem with runtime dependencies (for instance, if Eucalyptus cannot find/interact with them), all errors will be reported in log files located in $EUCALYPTUS/var/log/eucalyptus.

Unless the services are already started (for example, if you installed from deb packages), use the init-scripts to start each component on the appropriate host. Most likely, on the front-end you would run:

$EUCALYPTUS/etc/init.d/eucalyptus-cloud start
$EUCALYPTUS/etc/init.d/eucalyptus-cc start

And on each of the compute nodes you would run:

$EUCALYPTUS/etc/init.d/eucalyptus-nc start

To stop them, call the script with stop instead of start.

2. First-time Configuration

Make sure that the cloud controller is running (e.g., ps aux | grep euca should show a Java process) before going through the following steps. Also ensure that the $EUCALYPTUS variable is set as described above.

a. Front-end Configuration

To connect the Eucalyptus components together, you will need to register the Cluster with the Cloud, and register each Node with the Cluster. On the front-end, do:

$EUCALYPTUS/usr/sbin/euca_conf -addcluster <clustername> <clusterhost>

where <clustername> is the name you would like to attach to your Cluster, and <clusterhost> is the hostname or IP of the machine where the Cluster Controller is running.

Also on the front-end, add the hostnames on which you plan to run node controllers one-by-one (this involves connecting to the node via SSH to propagate the cryptographic keys, so you may be prompted for a password):

$EUCALYPTUS/usr/sbin/euca_conf -addnode "<nodehost1> ... <nodehostN>"

where <nodehostX> is the hostname or IP of your node. Note that the above command requires that you have set up passwordless ssh access between the front-end and the node either as the 'root' user or as the 'eucalyptus' user. If you do not, just skip entering the password when prompted by the command (keep hitting enter), and instructions will be displayed on how to proceed.
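
One common way to set up passwordless access as root (a sketch; nodehost1 is a placeholder) is to run the following on the front-end:

ssh-keygen -t rsa          # accept the defaults, empty passphrase
ssh-copy-id root@nodehost1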

Alternatively, you can add nodes all at once with the -nodes option, which requires you to explicitly propagate cryptographic keys afterwards:

$EUCALYPTUS/usr/sbin/euca_conf -nodes "<nodehost1> ... <nodehostN>"
$EUCALYPTUS/usr/sbin/euca_conf -synckeys

OPTIONAL: Eucalyptus provides some options when it comes to configuring your VM virtual network. By default, we enable the simplest but least featureful networking mode, called SYSTEM in the eucalyptus.conf file: Eucalyptus will assume that you already have a DHCP server configured to serve IP addresses to VMs that start on cluster nodes. Please consult the brief explanation in the comments of the configuration file and the Eucalyptus Networking document if you wish to try other modes that enable more features (security groups, elastic IPs, etc.).

b. Compute-node Configuration

If you installed from binary packages you can now skip to step 'c' since the compute nodes should be appropriately configured. If you later decide to diverge from the default configuration, you might want to revisit these steps.

On each compute node, create a local directory where VM images are placed temporarily while VMs are running (images will be cached under the same path, too). Then instruct the nodes to run the node controller, choose which hypervisor to use (xen or kvm), and specify the path for VM images. This path is used to store temporary VM images, and it is important that it is empty, as everything in it will be removed.

for x in hostname1 hostname2 ... hostnameN ; do
    ssh $x "$EUCALYPTUS/usr/sbin/euca_conf -hypervisor kvm -instances /usr/local/instances"
done

Unless you've already done so, make sure that the user you have decided to run Eucalyptus as (user 'eucalyptus' by default) has the ability to control VMs through the node controller machine's libvirt installation. A good test is to run the command virsh list as the eucalyptus user to see if that user has the appropriate rights.
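
That is, on each node:

su eucalyptus -c "virsh list"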

Finally, ensure that the networking settings in 'eucalyptus.conf' on each of your nodes are configured properly. For instance, correct values for VNET_INTERFACE and VNET_BRIDGE may differ from your front-end. See Eucalyptus Networking for more details.

c. Web-interface configuration

To configure Eucalyptus, after you have started all components, point your browser to

   https://localhost:8443

substituting localhost with the name of the host running the cloud controller. Since Eucalyptus uses a self-signed certificate, your browser is likely to prompt you to accept the certificate. On some machines it may take a few minutes after starting the Cloud Controller for the URL to become responsive the first time you run Eucalyptus. You will be prompted for a username and password, which are initially set to admin/admin. Upon logging in you will be guided through three first-time tasks:

  1. You will be forced to change the admin password.
  2. You will be asked to set the admin's email address.
  3. You will be asked to confirm the URL of the Walrus service (the storage component of Eucalyptus), which should start with the hostname or IP address of the cluster head node where you are installing the CLC.

After completing the first-time tasks, you will see the 'Configuration' tab. To use the system with the EC2 client tools, you must generate user credentials. Click the 'Credentials' tab and download your certificates via the 'Download certificates' button. You will be able to use these x509 certificates with Amazon EC2 tools and third-party tools like rightscale.com.

Create a directory, for example $HOME/.euca,

mkdir $HOME/.euca

unpack the credentials into it, and source the included 'eucarc':

. $HOME/.euca/eucarc

Note that you will have to source this file every time you intend to use the EC2 command-line tools, or you may add it to your local default environment.

Eucalyptus Network Configuration (1.5.2)

   * Bridge naming convention
   * System mode
   * Static mode
   * Managed mode
         o Requirements
         o Configuration
         o Troubleshooting 
   * Managed NOVLAN mode
         o Requirements
         o Configuration 

Eucalyptus versions 1.5 and higher include a highly configurable VM networking subsystem that can be adapted to a variety of network environments. There are four high-level networking "modes", each with its own set of configuration parameters, features, benefits, and in some cases restrictions placed on your local network setup. The administrator must select one of these four modes before starting Eucalyptus on the front-end and nodes by modifying the 'eucalyptus.conf' configuration file on each machine running a Eucalyptus component. Brief descriptions of each mode follow:

   SYSTEM Mode - This is the simplest networking mode, but also offers the smallest number of networking features. In this mode, Eucalyptus simply assigns a random MAC address to the VM instance before booting and attaches the VM instance's ethernet device to the physical ethernet through the node's local Xen bridge. VM instances typically obtain an IP address using DHCP, the same way any non-VM machine using DHCP would obtain an address. Note that in this mode, the Eucalyptus administrator (or the administrator that manages the network to which Eucalyptus components are attached) must set up a DHCP server that has a dynamic pool of IP addresses to hand out as VMs boot. In other words, if your laptop/desktop/server gets an IP address using DHCP on the same network as the Eucalyptus nodes, then your VMs should similarly obtain addresses. This mode is most useful for users who want to try out Eucalyptus on their laptops/desktops.
   STATIC Mode - This mode offers the Eucalyptus administrator more control over VM IP address assignment. Here, the administrator configures Eucalyptus with a 'map' of MAC address/IP address pairs. When a VM is instantiated, Eucalyptus sets up a static entry within a Eucalyptus-controlled DHCP server, takes the next free MAC/IP pair, assigns it to a VM, and attaches the VM's ethernet device to the physical ethernet through the Xen bridge on the nodes (in a manner similar to SYSTEM mode). This mode is useful for administrators who have a pool of MAC/IP addresses that they wish to always assign to their VMs.
   NOTE - Running Eucalyptus in SYSTEM or STATIC mode disables some key functionality such as the definition of ingress rules between collections of VMs (termed security groups in Amazon EC2), the user-controlled, dynamic assignment of IPs to instances at boot and run-time (elastic IPs in Amazon EC2), isolation of network traffic between VMs (that is, the root user within VMs will be able to inspect and potentially interfere with network traffic from other VMs), and the availability of the meta-data service (use of the http://169.254.169.254/ URL to obtain instance specific information).
   MANAGED Mode - This mode is the most featureful of the four modes, but also carries with it the most potential constraints on the setup of the Eucalyptus administrator's network. In MANAGED mode, the Eucalyptus administrator defines a large network (usually private, unroutable) from which VM instances will draw their IP addresses. As with STATIC mode, Eucalyptus will maintain a DHCP server with static mappings for each VM instance that is created. Eucalyptus users can define a number of 'named networks', or 'security groups', to which they can apply network ingress rules that apply to any VM that runs within that 'network'. When a user runs a VM instance, they specify the name of such a network that a VM is to be a member of, and Eucalyptus selects a subset of the entire range of IPs within which other VMs in the same 'network' can reside. A user can specify ingress rules that apply to a given 'network', such as allowing ping (ICMP) or ssh (TCP, port 22) traffic to reach their VMs. This capability allows Eucalyptus to expose functionality similar to Amazon's 'security groups'. In addition, the administrator can specify a pool of public IP addresses that users may allocate, then assign to VMs either at boot or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'. Eucalyptus administrators that require security groups, elastic IPs, and VM network isolation must use this mode.
   MANAGED-NOVLAN Mode - This mode is identical to MANAGED mode in terms of features (dynamic IPs and security groups) but does not provide VM network isolation. Admins who want dynamic assignable IPs and the security groups, but are not running on a network that is 'VLAN clean' or don't care if their VMs are isolated from one another on the network should choose this mode.

Each Eucalyptus network mode has its own set of infrastructure requirements, configuration parameters, and caveats. These are described in more detail in the following sections.

Bridge names

For most of the network modes, you will need the bridge name of your system. If you use Xen 3.0 or earlier (and sometimes other versions), the bridge name is usually

xenbr0

while if you use Xen 3.2 the bridge name is

eth0

and finally, if you use kvm, you may have yet another name: most distributions suggest

br0


The

brctl show

command will list all the available bridges on your system and you can use it to check that your system is properly configured to run Eucalyptus.
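
Typical output looks something like this (the bridge id and interfaces will differ on your system):

bridge name     bridge id           STP enabled     interfaces
br0             8000.000c29369858   no              eth0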

NOTE: the bridge name

virbr0

is created by libvirt and should not be used.

For the remainder of this document, we assume that you have correctly identified the bridge and that it is called

br0

SYSTEM Mode

There is very little Eucalyptus configuration to use SYSTEM mode, as in this mode, Eucalyptus mostly stays 'out of the way' in terms of VM networking. The options in 'eucalyptus.conf' that must be configured correctly in 'SYSTEM' mode are as follows:

On the front-end:

VNET_MODE="SYSTEM"

On each node:

VNET_MODE="SYSTEM" VNET_BRIDGE

In each Eucalyptus node controller's (NC) 'eucalyptus.conf' file, make sure that the parameter 'VNET_BRIDGE' is set to the name of the bridge device that is connected to your local ethernet:

VNET_BRIDGE="br0"

Make sure that what you are specifying in this field is actually a bridge, and that it is the bridge that is connected to an ethernet network that has a DHCP server running elsewhere that is configured to hand out IP addresses dynamically. Note that your front-end machine does not need to have any bridges (this is fine, as VNET_BRIDGE is only relevant for node controllers, and will be safely ignored by the front-end components).

To test whether this mode is working properly at run-time, inspect the configured bridge on a node before and after an instance is running. You should see a new interface associated with the bridge. For example, you might see:

brctl show eth0

bridge name     bridge id           STP enabled     interfaces
eth0            8000.000c29369858   no              peth0
                                                    vif18.0

on a node controller running Xen 3.2: note that Eucalyptus has correctly attached the VM's 'eth0' interface (vif18.0) to the bridge ('eth0') that is being used to attach VMs to the local ethernet ('peth0').

In the case of kvm you may see something like:

brctl show br0

bridge name     bridge id           STP enabled     interfaces
br0             8000.00005a00083d   no              eth0
                                                    vnet0

At this point, the VM should be sending DHCP requests to the local ethernet, and the DHCP server on the network should be sending a reply.

CAVEATS - In this mode, as mentioned previously, VMs are simply started with their ethernet interfaces attached to the local ethernet without any isolation. Practically, this means that you should treat a VM the same way that you would treat a non-VM machine running on the network. Eucalyptus does its best to discover the IP address that was assigned to a running VM via a third-party DHCP server, but can be unsuccessful depending on the specifics of your network (switch types/configuration, location of the CC on the network, etc.). Practically, if Eucalyptus cannot determine the VM's IP, then the user will see '0.0.0.0' in the output of 'describe-instances' in both the private and public address fields. The best workaround for this condition is to instrument your VMs to send some network traffic to your front end on boot (after they obtain an IP address). For instance, setting up your VM to ping the front-end a few times on boot should allow Eucalyptus to discover the VM's IP.

STATIC Mode

In this mode, Eucalyptus will manage VM IP address assignment by maintaining its own DHCP server with one static entry per VM. The options in 'eucalyptus.conf' that must be configured correctly in 'STATIC' mode are as follows:

On the front-end (options annotated with a '*' may be required depending on your installation, see below for details):

VNET_MODE="STATIC" VNET_INTERFACE VNET_DHCPDAEMON

  • VNET_DHCPUSER

VNET_SUBNET VNET_NETMASK VNET_BROADCAST VNET_ROUTER VNET_DNS VNET_MACMAP

On each node:

VNET_MODE="STATIC" VNET_BRIDGE

The Eucalyptus administrator must configure the front-end's 'eucalyptus.conf' first with a valid, configured ethernet device that is attached to the same physical ethernet as the Eucalyptus nodes:

VNET_INTERFACE="eth0"

Next, the admin must ensure that there is a DHCP server binary installed on the front-end and Eucalyptus knows where it is located:

VNET_DHCPDAEMON="/usr/sbin/dhcpd3"

If your DHCP daemon binary is configured to run as 'non-root' (say, as the user 'dhcpd' as is the case in Ubuntu >= 8.10), then you must configure Eucalyptus to be aware of that user:

VNET_DHCPUSER="<dhcpusername>"

Then, the admin must input IP subnet information for that device. For example, if the front-end's 'eth0' interface has the IP address '192.168.1.254' on the '192.168.1.0/24' network, with a gateway at '192.168.1.1' and a DNS at '192.168.1.2', the values in 'eucalyptus.conf' would look like so:

VNET_SUBNET="192.168.1.0" VNET_NETMASK="255.255.255.0" VNET_BROADCAST="192.168.1.255" VNET_ROUTER="192.168.1.1" VNET_DNS="192.168.1.2"

Finally, the administrator must supply a list of static MAC/IP mappings that will be assigned, first come first served, to VM instances. Note that each IP must reside in the subnet defined above, and must not be in use by any other machine on the network.

VNET_MACMAP="AA:DD:11:CE:FF:ED=192.168.1.3 AA:DD:CE:FF:EE=192.168.1.4"

On the nodes, you must ensure that the bridge is specified:

VNET_BRIDGE="br0"

Once you have configured Eucalyptus properly, start up the node controllers and the front-end components. To test whether this mode is working properly at run-time, you can follow the last paragraph of the SYSTEM mode, in which the bridge is inspected.

Make sure that the DHCP server has been started properly on the front-end ('ps axww | grep -i dhcpd | grep -i euca'). At this point, the VM should be sending DHCP requests to the local ethernet, and the DHCP server on the front-end should be sending a reply with one of the static MAC/IP mappings the admin has defined in 'eucalyptus.conf'.

CAVEATS - In this mode, as mentioned previously, VMs are started with their ethernet interfaces attached to the local ethernet without any isolation. Practically, this means that you should treat a VM the same way that you would treat a non-VM machine running on the network. Eucalyptus does not verify that your settings are valid; thus, you must enter them correctly in order for your VMs to obtain IP addresses. Finally, we assume that the installed DHCP daemon is, or is compatible with, ISC DHCP Daemon version 3.0.X. If it is not, we recommend either installing a version that is (common in most distributions) or writing a wrapper script around your installed DHCP server and pointing Eucalyptus at it (via VNET_DHCPDAEMON in 'eucalyptus.conf').

MANAGED Mode

In this mode, Eucalyptus fully manages the local VM instance network and provides all of the networking features Eucalyptus currently supports (VM network isolation, user-controllable VM firewalls (ingress rules/security groups), and dynamic public IP assignment). The options in 'eucalyptus.conf' that must be configured correctly in 'MANAGED' mode are as follows:

On the front-end (options annotated with a '*' may be required depending on your installation, see below for details):

VNET_MODE="MANAGED" VNET_INTERFACE VNET_DHCPDAEMON

  • VNET_DHCPUSER

VNET_SUBNET VNET_NETMASK VNET_DNS VNET_ADDRSPERNET

  • VNET_PUBLICIPS

On each node:

VNET_MODE="MANAGED" VNET_INTERFACE

Be advised that this mode requires that your local network/configuration conform to certain requirements that Eucalyptus depends upon.

Requirements for MANAGED mode

Before using 'MANAGED' mode, you must confirm that:

   1. There is an available range of IP addresses that is completely unused on the network (192.168.x.x, 10.x.x.x, or another private range).
   2. Your network is 'VLAN clean', meaning that all switch ports that Eucalyptus components are connected to will allow and forward VLAN tagged packets.
   3. You are not running a firewall on the front-end (CC), or your firewall is compatible with the dynamic changes that Eucalyptus will make to the front-end's netfilter rules.

All three of these requirements must be met before MANAGED mode should be attempted. Failure to verify the above will, at the very least, result in VM instances being unavailable on the network.

For requirement 1, choose an IP range that you know is completely unused on your network. Choose a range that is as large as possible. Typical examples are:

if the network 10.0.0.0 - 10.255.255.255 is completely unused:

VNET_MODE="MANAGED" VNET_SUBNET="10.0.0.0" VNET_NETMASK="255.0.0.0" VNET_DNS="<your DNS>" VNET_ADDRSPERNET="128"

or if the network 192.168.0.0 - 192.168.255.255 is completely unused:

VNET_MODE="MANAGED" VNET_SUBNET="192.168.0.0" VNET_NETMASK="255.255.0.0" VNET_DNS="<your DNS>" VNET_ADDRSPERNET="64"


Next, the admin must verify that the local network will allow/forward VLAN tagged packets between machines running Eucalyptus components. To verify, perform the following test:

on the front-end, choose the interface that is on the local ethernet (and will be set in eucalyptus.conf as VNET_INTERFACE), and run:

vconfig add <interface> 10
ifconfig <interface>.10 192.168.1.1 up

replace '192.168.1.1' with an IP from the range you selected above.

On the node, choose the interface on the local network (will be set in eucalyptus.conf as VNET_INTERFACE), and run:

vconfig add <interface> 10
ifconfig <interface>.10 192.168.1.2 up

again, replace '192.168.1.2' with another IP in the range you selected above.

Then, try a ping between hosts. On the front-end:

ping 192.168.1.2

on the node:

ping 192.168.1.1

If this does not work, then your switch needs to be configured to forward VLAN tagged packets (if it is a managed switch, see your switch's documentation to determine how to do this).
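Once the test succeeds, you may wish to remove the temporary VLAN interfaces again on both hosts (a small cleanup sketch; substitute your interface name):

ifconfig <interface>.10 down
vconfig rem <interface>.10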

Finally, you need to carefully inspect the firewall on the front-end to make sure that it will not interfere with Eucalyptus, or vice-versa. Eucalyptus will flush the 'filter' and 'nat' tables upon boot in MANAGED mode, but provides a way for the administrator to define special rules that are loaded when Eucalyptus starts (see below for details).

Configuring MANAGED mode ¶

The Eucalyptus administrator must configure the front-end's 'eucalyptus.conf' first with a valid, configured ethernet device that is attached to the same physical ethernet as the Eucalyptus nodes:

VNET_INTERFACE="eth0"

Next, the admin must ensure that there is a DHCP server binary installed on the front-end and Eucalyptus knows where it is located:

VNET_DHCPDAEMON="/usr/sbin/dhcpd3"

If your DHCP daemon binary is configured to run as 'non-root' (say, as the user 'dhcpd' as is the case in Ubuntu >= 8.10), then you must configure Eucalyptus to be aware of that user:

VNET_DHCPUSER="<dhcpusername>"

Nodes must have VNET_INTERFACE set properly. For example, with current Xen versions, this parameter (when your node's Xen bridge is 'eth0') is typically:

VNET_INTERFACE="peth0"

while for KVM it should be something like:

VNET_INTERFACE="eth0"


Once you have verified that your network configuration meets the requirements for running in MANAGED mode, the rest of the configuration is fairly simple. For example, if the 192.168.0.0/16 network is free and unused on your network:

VNET_MODE="MANAGED" VNET_SUBNET="192.168.0.0" VNET_NETMASK="255.255.0.0" VNET_DNS="<your dns>" VNET_ADDRSPERNET="64" VNET_PUBLICIPS="<publicIPa> <publicIPb> ... <publicIPn>"

SUBNET, NETMASK, and DNS have been described previously. VNET_ADDRSPERNET is used to control how many VM instances may simultaneously be part of an individual user's named network (called a 'security group' in Amazon EC2). Choosing the right value for this parameter depends on how many IPs you have made available using VNET_SUBNET/VNET_NETMASK, how many VLANs your network supports simultaneously, and how many concurrent active user networks the administrator wishes to support.

In the above example, there are 65536 addresses available (192.168.0.0/16). If we divide by the number of addresses per network (set to 64 above), we find the maximum number of simultaneously active named networks (65536 / 64 == 1024). If your Eucalyptus installation has 100 users, then each user could have at most 10 active security groups in operation at any point in time (of course, they can define as many as they wish, but can only have sets of running VMs residing in at most 10 networks). Each security group could support up to 61 instances (64 addresses minus 3 addresses reserved for the subnet, broadcast, and router IPs). If your installation favors more VMs per network and fewer active security groups per user, the administrator may adjust the VNET_ADDRSPERNET parameter accordingly: setting it to '256' would result in each active security group supporting up to 253 VM instances, with each of 100 users able to have 2 simultaneously active security groups.
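As a quick sanity check of the arithmetic above (a sketch using this example's numbers; plain shell arithmetic, no Eucalyptus commands involved):

# 192.168.0.0/16 provides 65536 addresses, divided into networks of 64:
echo $(( 65536 / 64 ))   # 1024 simultaneously active named networks
echo $(( 64 - 3 ))       # 61 usable instance addresses per security group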

If you would like users to log in to their instances from outside the cluster/cluster front-end, you must find a set of public IP addresses that are not in use and allow Eucalyptus to assign them to VM instances at instance boot time or dynamically at run time. For each IP address you choose, your front-end must be capable of being configured with that IP address. To test, choose some free public IP addresses and perform the following test for each one:

on the front-end:

ip addr add <publicIP>/32 dev <interface>

on some external machine representative of where users will wish to log into their VM instances:

ping <publicIP>

If this works, then dynamic IP assignment to VM instances will work. Remove the assigned address with the following command:

ip addr del <publicIP>/32 dev <interface>

Once you have compiled a list of available public IP addresses, allow Eucalyptus to use them by listing the IPs in 'eucalyptus.conf':

VNET_PUBLICIPS="<publicIPa> <publicIPb> ... <publicIPn>"


CAVEATS - When Eucalyptus is running in MANAGED mode, you cannot currently run an entire Eucalyptus installation on a single machine, as this mode depends upon traffic between named networks passing through a front-end router (instead of going through the loopback device). If you wish to run Eucalyptus on a single machine (laptop), you must use SYSTEM or STATIC mode.

In MANAGED mode, Eucalyptus will flush the front-end's iptables rules for both 'filter' and 'nat'. Next, it will set the default policy for the 'FORWARD' chain in 'filter' to 'DROP'. At run time, the front-end will be adding and removing rules from 'FORWARD' as users add/remove ingress rules from their active security groups. In addition, the 'nat' table will be configured to allow VMs access to the external network using IP masquerading, and rules will be dynamically added to and removed from the 'nat' table as users assign/unassign public IPs to VMs at instance boot or run-time. If the administrator has some rules that they wish to apply to the front-end, they should perform the following procedure on the front-end, before Eucalyptus is started or while Eucalyptus is not running. WARNING: if the admin chooses to perform this operation to define special iptables rules that are loaded when Eucalyptus starts, they could inadvertently cause Eucalyptus VM networking to fail. It is suggested that you do this only if you are completely sure that it will not interfere with the operation of Eucalyptus.

<use iptables to set up your iptables rules>
iptables-save > $EUCALYPTUS/var/run/eucalyptus/net/iptables-preload
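For instance, a minimal sketch that preserves an SSH-access rule on the front-end before taking the snapshot (the rule itself is only an example; use whatever rules your site requires):

iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables-save > $EUCALYPTUS/var/run/eucalyptus/net/iptables-preload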

Troubleshooting MANAGED Mode ¶

If you start an instance and believe that it is running, but it is not available on the network, here are some things to check.

First, verify that the requirements of MANAGED mode have been met as described above (unused range of IPs, VLAN-capable network, no interfering firewall rules on the nodes or front-end). Test whether you can reach the instance from the front-end using its private address (from the range you specified). If you cannot, next inspect the interfaces on the front-end and nodes:

on front-end:

ifconfig -a

You should see an interface '<interface>.<vlan>' with an IP address that is up and running. For instance, it may be 'eth0.10'. If it is not there, check your VNET_INTERFACE parameter and inspect the Eucalyptus log files for errors.

on the node:

brctl show

You should see a number of bridges called 'eucabr<vlan>', where '<vlan>' is a number that typically starts from '10'. The output should be similar (if VNET_INTERFACE="peth0") to:

brctl show

bridge name     bridge id               STP enabled     interfaces
eucabr10        8000.000c29369858       no              peth0.10
                                                        vif18.0

If this is not the case, check your VNET_INTERFACE setting, and inspect the logfiles for details.

Back on the front-end, make sure that 'dhcpd' is running:

ps axww | grep <dhcpd>

where '<dhcpd>' is what you have set for VNET_DHCPDAEMON. Make sure that, in the output of 'ps', you see that the daemon is listening on the VLAN-tagged interface from above (<interface>.<vlan>). If it is not running, check the Eucalyptus logs for the reason why (if the command failed, you will see this information in 'cc.log'; if the daemon failed at runtime, you can inspect the reason in the daemon's output itself in 'httpd-cc_error_log').

If you can access the private IP of the instance from the front-end, but public IPs are not being forwarded properly, first confirm that the user's security group is set up properly by having them run 'euca-describe-groups <group of instance>'. '<group of instance>' is set to 'default' by default or if unspecified when the instance was started. If the group has appropriate ingress rules set, check that the rules have been implemented on the front-end:

iptables -L <username>-<groupname>

If there are no rules here, check the 'cc.log' for errors applying the table rules for more insight. Next, check the 'nat' table:

iptables -L -t nat

You should see one DNAT rule for routing traffic from a public IP to the instance IP, and one SNAT rule for setting the source IP of outgoing packets from that instance. If you do not, check 'cc.log' to determine the cause.
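To narrow the output down to a single instance, a simple filter over the 'nat' table can help (a sketch; substitute the public IP in question):

iptables -t nat -L -n | grep <publicIP>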

If all of these checks pass and the instance still is experiencing network problems, please prepare the following information and send it along to the Eucalyptus discussion board:

on front-end and one representative node, capture the output of the following commands:

netstat -rn
ifconfig -a
brctl show
iptables-save

and send us 'cc.log', 'nc.log', 'httpd-cc_error_log' and 'httpd-nc_error_log'.

MANAGED-NOVLAN Mode ¶

In this mode, Eucalyptus fully manages the local VM instance network and provides all of the networking features Eucalyptus currently supports (user-controllable VM firewalls (ingress rules/security groups), dynamic public IP assignment), but does not provide VM network isolation. The options in 'eucalyptus.conf' that must be configured correctly in 'MANAGED-NOVLAN' mode are as follows:

On the front-end (options annotated with a '*' may be required depending on your installation, see below for details):

VNET_MODE="MANAGED-NOVLAN" VNET_INTERFACE VNET_DHCPDAEMON

  • VNET_DHCPUSER

VNET_SUBNET VNET_NETMASK VNET_DNS VNET_ADDRSPERNET

  • VNET_PUBLICIPS


On each node:

VNET_MODE="MANAGED-NOVLAN" VNET_BRIDGE

Be advised that this mode requires that your local network/configuration conform to certain requirements that Eucalyptus depends upon.

Requirements for MANAGED-NOVLAN mode ¶

Before using 'MANAGED-NOVLAN' mode, you must confirm that:

   1.) there is an available range of IP addresses that is completely unused on the network (192.168.x.x, 10.x.x.x, or another private range).
   2.) you are not running a firewall on the front-end (CC) or your firewall is compatible with the dynamic changes that Eucalyptus will make to the front-end's netfilter rules.

Both of these requirements must be met before MANAGED-NOVLAN mode should be attempted. Failure to verify the above will, at least, result in VM instances being unavailable on the network.

For requirement '1', choose an IP range that you know is completely unused on your network. Choose a range that is as large as possible. Typical examples are:

if the network 10.0.0.0 - 10.255.255.255 is completely unused:

VNET_MODE="MANAGED-NOVLAN" VNET_SUBNET="10.0.0.0" VNET_NETMASK="255.0.0.0" VNET_DNS="<your DNS>" VNET_ADDRSPERNET="128"


or if the network 192.168.0.0 - 192.168.255.255 is completely unused:

VNET_MODE="MANAGED-NOVLAN" VNET_SUBNET="192.168.0.0" VNET_NETMASK="255.255.0.0" VNET_DNS="<your DNS>" VNET_ADDRSPERNET="64"

You will need to carefully inspect the firewall on the front-end to make sure that it will not interfere with Eucalyptus, or vice-versa. Eucalyptus will flush the 'filter' and 'nat' tables upon boot in MANAGED-NOVLAN mode, but provides a way for the administrator to define special rules that are loaded when Eucalyptus starts (see below for details).

Configuring MANAGED-NOVLAN mode ¶

The Eucalyptus administrator must configure the front-end's 'eucalyptus.conf' first with a valid, configured ethernet device that is attached to the same physical ethernet as the Eucalyptus nodes:

VNET_INTERFACE="eth0"


Next, the admin must ensure that there is a DHCP server binary installed on the front-end and Eucalyptus knows where it is located:

VNET_DHCPDAEMON="/usr/sbin/dhcpd3"

If your DHCP daemon binary is configured to run as 'non-root' (say, as the user 'dhcpd' as is the case in Ubuntu >= 8.10), then you must configure Eucalyptus to be aware of that user:

VNET_DHCPUSER="<dhcpusername>"

Nodes must have VNET_BRIDGE set properly:

VNET_BRIDGE="br0"

Once you have verified that your network configuration meets the requirements for running in MANAGED-NOVLAN mode, the rest of the configuration is fairly simple. For example, if the 192.168.0.0/16 network is free and unused on your network:

VNET_MODE="MANAGED-NOVLAN" VNET_SUBNET="192.168.0.0" VNET_NETMASK="255.255.0.0" VNET_DNS="<your dns>" VNET_ADDRSPERNET="64" VNET_PUBLICIPS="<publicIPa> <publicIPb> ... <publicIPn>"

SUBNET, NETMASK, and DNS have been described previously. VNET_ADDRSPERNET is used to control how many VM instances may simultaneously be part of an individual user's named network (called a 'security group' in Amazon EC2). Choosing the right value for this parameter depends on how many IPs you have made available using VNET_SUBNET/VNET_NETMASK and how many concurrent active user networks the administrator wishes to support.

In the above example, there are 65536 addresses available (192.168.0.0/16). If we divide by the number of addresses per network (set to 64 above), we find the maximum number of simultaneously active named networks (65536 / 64 == 1024). If your Eucalyptus installation has 100 users, then each user could have at most 10 active security groups in operation at any point in time (of course, they can define as many as they wish, but can only have sets of running VMs residing in at most 10 networks). Each security group could support up to 61 instances (64 addresses minus 3 addresses reserved for the subnet, broadcast, and router IPs). If your installation favors more VMs per network and fewer active security groups per user, the administrator may adjust the VNET_ADDRSPERNET parameter accordingly: setting it to '256' would result in each active security group supporting up to 253 VM instances, with each of 100 users able to have 2 simultaneously active security groups.

If you would like users to log in to their instances from outside the cluster/cluster front-end, you must find a set of public IP addresses that are not in use and allow Eucalyptus to assign them to VM instances at instance boot time or dynamically at run time. For each IP address you choose, your front-end must be capable of being configured with that IP address. To test, choose some free public IP addresses and perform the following test for each one:

on the front-end:

ip addr add <publicIP>/32 dev <interface>

on some external machine representative of where users will wish to log into their VM instances:

ping <publicIP>

If this works, then dynamic IP assignment to VM instances will work. Remove the assigned address with the following command:

ip addr del <publicIP>/32 dev <interface>

Once you have compiled a list of available public IP addresses, allow Eucalyptus to use them by listing the IPs in 'eucalyptus.conf':

VNET_PUBLICIPS="<publicIPa> <publicIPb> ... <publicIPn>"

CAVEATS - When Eucalyptus is running in MANAGED-NOVLAN mode, you cannot currently run an entire Eucalyptus installation on a single machine, as this mode depends upon traffic between named networks passing through a front-end router (instead of going through the loopback device). If you wish to run Eucalyptus on a single machine (laptop), you must use SYSTEM or STATIC mode.

In MANAGED-NOVLAN mode, Eucalyptus will flush the front-end's iptables rules for both 'filter' and 'nat'. Next, it will set the default policy for the 'FORWARD' chain in 'filter' to 'DROP'. At run time, the front-end will be adding and removing rules from 'FORWARD' as users add/remove ingress rules from their active security groups. In addition, the 'nat' table will be configured to allow VMs access to the external network using IP masquerading, and rules will be dynamically added to and removed from the 'nat' table as users assign/unassign public IPs to VMs at instance boot or run-time. If the administrator has some rules that they wish to apply to the front-end, they should perform the following procedure on the front-end, before Eucalyptus is started or while Eucalyptus is not running. WARNING: if the admin chooses to perform this operation to define special iptables rules that are loaded when Eucalyptus starts, they could inadvertently cause Eucalyptus VM networking to fail. It is suggested that you do this only if you are completely sure that it will not interfere with the operation of Eucalyptus.

<use iptables to set up your iptables rules>
iptables-save > $EUCALYPTUS/var/run/eucalyptus/net/iptables-preload

Note: If you edit a networking-related value in eucalyptus.conf, you will need to restart the CC ($EUCALYPTUS/etc/init.d/eucalyptus-cc restart) for changes to take effect.

Managing Eucalyptus Images (1.5.2)

First, be sure to source your 'eucarc' file before running the commands below. Note that all users may upload and register images (depending on access granted to them by the Eucalyptus administrator), but only the admin user may ever upload/register kernels or ramdisks.

Second, the instructions below rely on the euca2ools command-line tools distributed by the Eucalyptus Team. Please install them if you haven't done so already.

1. Adding Images

To enable a VM image as an executable entity, a user/admin must add a root disk image and a kernel/ramdisk pair (the ramdisk may be optional) to Walrus and register the uploaded data with Eucalyptus. Each component is added to Walrus and registered with Eucalyptus separately, using three commands. The following example uses the test image that we provide. Unpack it to any directory:
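For the 'ttylinux' test image used in the Examples section below, unpacking looks like this (the tarball name matches that example):

tar zxvf euca-ttylinux.tgz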

Add the kernel to Walrus, and register it with Eucalyptus (WARNING: your bucket names must not end with a slash!):

euca-bundle-image -i <kernel file> --kernel true
euca-upload-bundle -b <kernel bucket> -m /tmp/<kernel file>.manifest.xml
euca-register <kernel bucket>/<kernel file>.manifest.xml


Next, add the root filesystem image to Walrus:

euca-bundle-image -i <vm image file>
euca-upload-bundle -b <image bucket> -m /tmp/<vm image file>.manifest.xml
euca-register <image bucket>/<vm image file>.manifest.xml

Our test kernel does not require a ramdisk to boot. If the administrator would like to upload/register a kernel/ramdisk pair, the procedure is similar to the above:

euca-bundle-image -i <initrd file> --ramdisk true
euca-upload-bundle -b <initrd bucket> -m /tmp/<initrd file>.manifest.xml
euca-register <initrd bucket>/<initrd file>.manifest.xml

2. Associating kernels and ramdisks with instances

There are three ways that one can associate a kernel (and ramdisk) with a VM instance.

  1. A user may associate a specific kernel/ramdisk identifier with an image at the 'euca-bundle-image' step
     euca-bundle-image -i <vm image file> --kernel <eki-XXXXXXXX> --ramdisk <eri-XXXXXXXX>
  2. A user may choose a specific kernel/ramdisk at instance run time as an option to 'euca-run-instances'
     euca-run-instances --kernel <eki-XXXXXXXX> --ramdisk <eri-XXXXXXXX> <emi-XXXXXXXX>
  3. The administrator can set 'default' registered kernel/ramdisk identifiers that will be used if a kernel/ramdisk is unspecified by either of the above options. This is accomplished by logging in to the administrative interface (https://your.cloud.server:8443), clicking on the 'Configuration' tab, and adding an <eki-xxxxxxxx> and optionally an <eri-xxxxxxxx> as the default kernel/ramdisk to be used. 

3. Deleting Images

In order to delete an image, you must first de-register the image:

euca-deregister <emi-XXXXXXXX>

Then, you can remove the files stored in your bucket. Assuming you have sourced your 'eucarc' to set up EC2 client tools:

euca-delete-bundle -a $EC2_ACCESS_KEY -s $EC2_SECRET_KEY --url $S3_URL -b <bucket> -p <file prefix>

If you would like to remove the image and the bucket, add the '--clear' option:

euca-delete-bundle -a $EC2_ACCESS_KEY -s $EC2_SECRET_KEY --url $S3_URL -b <bucket> -p <file prefix> --clear


Examples

Following is an example using the 'ttylinux' image for Xen:

cd $EUCALYPTUS_SRC/eucalyptus-src-deps
tar zxvf euca-ttylinux.tgz


euca-bundle-image -i ttylinux/vmlinuz-2.6.16.33-xen --kernel true
euca-upload-bundle -b kernel-bucket -m /tmp/vmlinuz-2.6.16.33-xen.manifest.xml
euca-register kernel-bucket/vmlinuz-2.6.16.33-xen.manifest.xml


euca-bundle-image -i ttylinux/ttylinux.img
euca-upload-bundle -b image-bucket -m /tmp/ttylinux.img.manifest.xml
euca-register image-bucket/ttylinux.img.manifest.xml

Next is an example using the Ubuntu pre-packaged image that we provide using the included KVM compatible kernel/ramdisk (a Xen compatible kernel/ramdisk is also included). See this page to get more pre-packaged images.

tar zxvf euca-ubuntu-9.04-x86_64.tar.gz


euca-bundle-image -i euca-ubuntu-9.04-x86_64/kvm-kernel/vmlinuz-2.6.28-11-generic --kernel true
euca-upload-bundle -b ubuntu-kernel-bucket -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml
euca-register ubuntu-kernel-bucket/vmlinuz-2.6.28-11-generic.manifest.xml
(set the printed eki to $EKI)


euca-bundle-image -i euca-ubuntu-9.04-x86_64/kvm-kernel/initrd.img-2.6.28-11-generic --ramdisk true
euca-upload-bundle -b ubuntu-ramdisk-bucket -m /tmp/initrd.img-2.6.28-11-generic.manifest.xml
euca-register ubuntu-ramdisk-bucket/initrd.img-2.6.28-11-generic.manifest.xml
(set the printed eri to $ERI)
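If you prefer to script these steps, the identifiers can be captured into shell variables instead of being copied by hand; this sketch assumes 'euca-register' prints a line of the form 'IMAGE eki-XXXXXXXX' (or 'IMAGE eri-XXXXXXXX'):

EKI=$(euca-register ubuntu-kernel-bucket/vmlinuz-2.6.28-11-generic.manifest.xml | awk '{print $2}')
ERI=$(euca-register ubuntu-ramdisk-bucket/initrd.img-2.6.28-11-generic.manifest.xml | awk '{print $2}')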


euca-bundle-image -i euca-ubuntu-9.04-x86_64/ubuntu.9-04.x86-64.img --kernel $EKI --ramdisk $ERI
euca-upload-bundle -b ubuntu-image-bucket -m /tmp/ubuntu.9-04.x86-64.img.manifest.xml
euca-register ubuntu-image-bucket/ubuntu.9-04.x86-64.img.manifest.xml

Now, the newly uploaded image(s) should be ready to use (see the User's Guide for more information on using Eucalyptus).

Eucalyptus Management (1.5.2)

This part of the Administrator's Guide describes tasks that can be performed on a completed Eucalyptus installation, whether it was installed from source or from packages.

1. Image Management ¶

To use Eucalyptus, images must be added and registered with the system. We have a document detailing the steps of this process in Image Management.

2. Node Management ¶

Once you have a running Eucalyptus system, you can add and remove nodes (systems running Node Controllers) using:

$EUCALYPTUS/usr/sbin/euca_conf -addnode "<nodename1> ... <nodenameN>"

You will be asked for a password to log in to <nodenameX>; this is needed to propagate the cryptographic keys. Similarly, to remove a node:

$EUCALYPTUS/usr/sbin/euca_conf -delnode "<nodename1> ... <nodenameN>"

3. User Management ¶

3.1 User sign-up ¶

Users interested in joining the cloud should be directed to the front-end Web page (note the https prefix!):

   https://your.front.end.hostname:8443/

As soon as the administrator logs in for the first time and enters the email address to be used for application requests, thus activating the Web site for use by others, the login box of the Web site will have an "Apply for account" link underneath it. After a user fills out the application form, an email is sent to the administrator, containing two URLs, one for accepting and one for rejecting the user.

Note that there is no authentication performed on the people who fill out the form. It is up to the administrator to perform this authentication! The only "guarantee" the administrator has is that the account will not be active unless the person who requested the account (and, hence, knows the password) can read email at the submitted address. Therefore, if the administrator is willing to give the account to the person behind the email address, it is safe to approve the account. Otherwise, the administrator may use the additional information submitted (such as the telephone number, project PI, etc.) to make the decision.

Accepting or rejecting a signup request causes an email message to be sent to the user who made the request. In the case of an acceptance notification, the user will see a link for activating the account. Before activating the account, the user will have to log in with the username and password that they chose at signup.

3.2 Adding users ¶

Users can be added explicitly by the administrator: log into the Eucalyptus web interface as an administrative user, click the 'Users' tab, click the 'Add User' button, and fill out the same user form that a user would fill out if they applied themselves. The user is automatically 'approved' with this method, but the account will not be active until the user clicks the link that is sent via email, as in the sign-up method above.

3.3 Managing users ¶

If the administrator wishes to disable or delete a user, they can do so through the web interface: as an administrative user, click the 'Users' tab, and click either the 'disable' or 'delete' link, respectively.

Eucalyptus Troubleshooting (1.5.2)

Eucalyptus cloud admins are encouraged to consult the Known Bugs page before diving into the investigation of unexpected behavior.

The instructions below rely on the euca2ools command-line tools distributed by the Eucalyptus Team. Please install them if you haven't done so already.

1. Restarting ¶

Eucalyptus components can be restarted using the init scripts at any time with the 'restart' operation:

/etc/init.d/eucalyptus-cloud restart
/etc/init.d/eucalyptus-cc restart
/etc/init.d/eucalyptus-nc restart

If you need to make a change to the cluster controller or node controller configuration through modification of $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf, you will typically be required to 'stop' and then 'start' the service after the modification has been made:

/etc/init.d/eucalyptus-cc stop
/etc/init.d/eucalyptus-cc start
/etc/init.d/eucalyptus-nc stop
/etc/init.d/eucalyptus-nc start

Warning: depending on your configuration of Eucalyptus, making changes to eucalyptus.conf that drastically alter the way Eucalyptus is handling non-eucalyptus resources (network, hypervisor, etc) may require that all currently running VMs be terminated before the configuration changes can be successfully applied. In addition, if you are running in any network mode (VNET_MODE) other than SYSTEM, correct VM network connectivity is only ensured while the CC that launched the VMs is running. If the machine that hosts a CC that has previously launched VMs fails or reboots, then the VMs will lose network connectivity.

If the administrator needs to terminate running VMs for the reasons described above, they can use the client tools to terminate all instances. Optionally, the admin can manually stop all Eucalyptus components, destroy all running Xen instances using 'xm shutdown' or 'xm destroy' on the nodes, and start all Eucalyptus components to return to a clean state.

2. Diagnostics ¶

Installation/Discovering resources ¶

If something is not working right with your Eucalyptus installation, the best first step (after making sure that you have followed the installation/configuration/networking documents faithfully) is to make sure that your cloud is up and running, that all of the components are communicating properly, and that there are resources available to run instances. After you have set up and configured Eucalyptus, set up your environment properly with your admin credentials, and use the following command to see the 'status' of your cloud:

euca-describe-availability-zones verbose

You should see output similar to the following:

AVAILABILITYZONE    cluster              <hostname of your front-end>
AVAILABILITYZONE    |- vm types          free / max   cpu   ram    disk
AVAILABILITYZONE    |- m1.small          0128 / 0128   1     128     10
AVAILABILITYZONE    |- c1.medium         0128 / 0128   1     256     10
AVAILABILITYZONE    |- m1.large          0064 / 0064   2     512     10
AVAILABILITYZONE    |- m1.xlarge         0064 / 0064   2    1024     20
AVAILABILITYZONE    |- c1.xlarge         0032 / 0032   4    2048     20
AVAILABILITYZONE    |- <node-hostname-a> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
AVAILABILITYZONE    |- <node-hostname-b> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
AVAILABILITYZONE    |- <node-hostname-c> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
AVAILABILITYZONE    |- <node-hostname-d> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
AVAILABILITYZONE    |- <node-hostname-e> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
AVAILABILITYZONE    |- <node-hostname-f> certs[cc=true,nc=true] @ Sun Jan 04 15:13:30 PST 2009
...


Next, the administrator should consult the Eucalyptus logfiles. On each machine running a Eucalyptus component, the logfiles are located in:

$EUCALYPTUS/var/log/eucalyptus/
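To follow a component's log in real time while reproducing a problem, something like the following works (a sketch; substitute the logfile of the component you are debugging):

tail -f $EUCALYPTUS/var/log/eucalyptus/cc.log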

On the front-end, the Cloud Controller (CLC) logs primarily to 'cloud-output.log' and 'cloud-debug.log'. Consult these files if your client tool (ec2 API tools) output contains exception messages, or if you suspect that none of your operations are ever being executed (never see Xen activity on the nodes, network configuration activity on the front-end, etc.).

The Cluster Controller (CC) also resides on the front-end, and logs to 'cc.log' and 'httpd-cc_error_log'. Consult these logfiles in general, but especially if you suspect there is a problem with networking. 'cc.log' will contain log entries from the CC itself, and 'httpd-cc_error_log' will contain the STDERR/STDOUT from any external commands that the CC executes at runtime.

A Node Controller (NC) will run on every machine in the system that you have configured to run VM instances. The NC logs to 'nc.log' and 'httpd-nc_error_log'. Consult these files in general, but especially if you believe that there is a problem with VM instances actually running (i.e., instances appear to try to run: they get submitted, go into 'pending' state, then go directly to 'terminated', but fail to stay running).

Node Controller troubleshooting ¶

   * If nc.log reports "Failed to connect to hypervisor," xen/kvm + libvirt is not functioning correctly. 
   * If the NC cannot be contacted, make sure that you have synchronized keys to the nodes and that the keys are owned by the user that you are running the NC as (EUCA_USER in eucalyptus.conf); see the sketch below. 
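A minimal sketch of fixing key ownership on a node (the keys directory shown is an assumption; it is typically under $EUCALYPTUS/var/lib/eucalyptus/keys):

# make the node's Eucalyptus keys readable by the NC user from eucalyptus.conf
chown -R <EUCA_USER> $EUCALYPTUS/var/lib/eucalyptus/keys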

Walrus troubleshooting ¶

   * "ec2-upload-bundle" will report a "409" error when uploading to a bucket that already exists. This is a known compatibility issue when using ec2 tools with Eucalyptus. The workaround is to use ec2-delete-bundle with the "--clear" option to delete the bundle and the bucket, before uploading to a bucket with the same name, or to use a different bucket name. 

Note: If you are using Euca2ools, this is not necessary.

   * When using "ec2-upload-bundle," make sure that there is no "/" at the end of the bucket name. 

Block storage troubleshooting ¶

   * Unable to attach volumes when the front end and the NC are running on the same machine. This is a known issue with ATA over Ethernet (AoE). AoE will not export to the same machine that the server is running on. The workaround is to run the front end and the node controller on different hosts. 
   * Volume ends up in "deleted" state when created, instead of showing up as "available." Look for error messages in $EUCALYPTUS/var/log/eucalyptus/cloud-error.log. A common problem is that ATA-over-Ethernet may not be able to export the created volume (this will appear as a "Could not export..." message in cloud-error.log). Make sure that "VNET_INTERFACE" in eucalyptus.conf on the front end is correct. 
   * Failure to create volume/snapshot. Make sure you have enough loopback devices. If you are installing from packages, you will get a warning. On most distributions, the loopback driver is installed as a module. The following will increase the number of loopback devices available (see the sketch after this list for making the change persistent):
     rmmod loop ; modprobe loop max_loop=256
   * If block devices do not automatically appear in your VMs, make sure that you have the "udev" package installed. 
   * If you are running Gentoo and you get "which: no vblade in ((null)).", try compiling "su" without pam. 
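The 'modprobe' change above lasts only until the next reboot. To make it persistent, a module option can be added to modprobe's configuration (a sketch; the exact file varies by distribution, '/etc/modprobe.conf' is an assumption here):

echo "options loop max_loop=256" >> /etc/modprobe.conf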



Interesting Links