Choosing LXD as the backing cloud for Juju is an efficient way to work with Juju: it is quick to set up and does not require an account with a public cloud vendor.
Traditionally LXD runs local to the Juju client, but Juju is also capable of connecting to remote LXD hosts (see Adding a remote LXD cloud). If you do run it locally, ensure you have enough disk space for the containers.
Constraints can be used with LXD containers (v.2.4.1). However, these are not bound to the LXD cloud type (i.e. they can affect containers that are themselves backed by a Juju machine running on any cloud type). See Constraints and LXD containers for details.
Several advanced LXD features supported by Juju are explained on a separate page.
Both LXD and Juju will be needed on the host system.
Install Juju now (see the Installing Juju page).
Then follow the instructions below for installing LXD based on your chosen Ubuntu release. Note that the snap install method will soon become the preferred way to install LXD. See Using the LXD snap for how to do this.
On Trusty, install LXD from the 'trusty-backports' pocket. This will ensure a recent (and supported) version is used:
sudo apt install -t trusty-backports lxd
Note: It's been reported that the snap install works significantly better on Trusty than what's available in the Ubuntu archive.
On Xenial, install LXD from the 'xenial-backports' pocket. This will ensure a recent (and supported) version is used:
sudo apt install -t xenial-backports lxd
Note: Installing LXD in this way will update LXD if it is already present on your system.
On later releases, install LXD in the usual way:
sudo apt install lxd
In order to use LXD, the system user who will act as the Juju operator must be a member of the 'lxd' user group. Ensure that this is the case (below we assume this user is 'john'):
sudo adduser john lxd
The user will be in the 'lxd' group when they next log in. If the intended Juju operator is the current user, all that's needed is a group membership refresh:
newgrp lxd
You can confirm the active group membership for the current user by running the command:
groups
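The membership check can also be scripted. This is a minimal sketch using only the standard `id` and `grep` utilities; the messages are illustrative, not part of any Juju tooling:

```shell
# Report whether the current session's active groups include 'lxd'
if id -nG | grep -qw lxd; then
    echo "lxd group active"
else
    echo "lxd group not active; log out and back in, or run 'newgrp lxd'"
fi
```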
LXD can use various file-systems for its containers. Below we show how to implement ZFS, as it provides the best experience.
Note: ZFS is not supported on Ubuntu 14.04 LTS.
Proceed as follows:
sudo apt install zfsutils-linux
sudo mkdir /var/lib/zfs
sudo truncate -s 32G /var/lib/zfs/lxd.img
sudo zpool create lxd /var/lib/zfs/lxd.img
lxd init --auto --storage-backend zfs --storage-pool lxd
Above we allocated 32GB of space to a sparse file.
- If possible, put /var/lib/zfs on a fast storage device (e.g. SSD).
- The installed ZFS utilities can be used to query the pool (e.g. sudo zpool list -v lxd).
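The sparse file created by truncate costs almost no real disk space until the pool writes to it. A quick way to see the difference between apparent size and allocated blocks, using a throwaway file under /tmp (the path is purely for illustration):

```shell
# Create a 32G sparse file: the apparent size is 32G,
# but almost no disk blocks are actually allocated
truncate -s 32G /tmp/lxd-demo.img
ls -lh /tmp/lxd-demo.img   # apparent size
du -h /tmp/lxd-demo.img    # blocks actually allocated
rm /tmp/lxd-demo.img
```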
If you've installed LXD via the snap package then you don't need to install the ZFS tools and configure the pool manually (as shown above). All that's required is:
lxd init --auto --storage-backend zfs
Currently Juju does not support IPv6. You will therefore need to disable it at the LXD level. Assuming an LXD bridge of 'lxdbr0':
lxc network set lxdbr0 ipv6.address none
The Juju controller for LXD (the 'localhost' cloud) can now be created. Below, we call it 'lxd':
juju bootstrap localhost lxd
View the new controller machine like this:
juju machines -m controller
This example yields the following output:
Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.103.91.114  juju-b14348-0  xenial      Running
The controller's underlying container can be listed with the LXD client:
lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME          | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| juju-b14348-0 | RUNNING | 10.103.91.114 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+----------------------+------+------------+-----------+
Additional LXD resources provides more LXD-specific information.
A controller is created with two models - the 'controller' model, which should be reserved for Juju's internal operations, and a model named 'default', which can be used for deploying user workloads.
See these pages for ideas on what to do next: