Using LXD with Juju
Choosing LXD as the backing cloud for Juju is an efficient way to experiment with Juju. It is also very quick to set up. With lightweight containers acting as Juju machines, even a moderately powerful laptop can create useful models, or serve as a platform to develop your own charms. Make sure you have enough space locally for the containers.
A tutorial is available on this same topic: Getting started with Juju and LXD.
Software prerequisites
Both LXD and Juju will be needed on the host system.
Install Juju now (see the Installing Juju page).
Then follow the instructions below for installing LXD based on your chosen Ubuntu release. Note that the snap install method will soon become the preferred way to install LXD. See Using the LXD snap for how to do this.
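For reference, the snap-based install is typically a single command (assuming the snap package name is 'lxd'; see the linked page for details):

```shell
# Install LXD from the snap store (requires snapd)
sudo snap install lxd
```
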
Ubuntu 14.04 LTS
On Trusty, install LXD from the 'trusty-backports' pocket. This will ensure a recent (and supported) version is used:
sudo apt install -t trusty-backports lxd
Note: It's been reported that the snap install works significantly better on Trusty than what's available in the Ubuntu archive.
Ubuntu 16.04 LTS
On Xenial, install LXD from the 'xenial-backports' pocket. This will ensure a recent (and supported) version is used:
sudo apt install -t xenial-backports lxd
Note: Installing LXD in this way will update LXD if it is already present on your system.
Ubuntu 16.10 and greater
On these releases, install LXD in the usual way:
sudo apt install lxd
User group
In order to use LXD, the system user who will act as the Juju operator must be a member of the 'lxd' user group. Ensure that this is the case (below we assume this user is 'john'):
sudo adduser john lxd
The user will be in the 'lxd' group when they next log in. If the intended Juju operator is the current user, all that's needed is a group membership refresh:
newgrp lxd
You can confirm the active group membership for the current user by running the command:
groups
Alternate backing file-system
LXD can use various file-systems for its containers. Below we show how to set up ZFS, as it provides the best experience.
Note: ZFS is not supported on Ubuntu 14.04 LTS.
Proceed as follows:
sudo apt install zfsutils-linux
sudo mkdir /var/lib/zfs
sudo truncate -s 32G /var/lib/zfs/lxd.img
sudo zpool create lxd /var/lib/zfs/lxd.img
lxd init --auto --storage-backend zfs --storage-pool lxd
Above we allocated 32GB of space to a sparse file.
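The file is sparse: it reports an apparent size of 32GB but consumes almost no disk until blocks are actually written. You can see this for yourself (using a throwaway path, /tmp/lxd-demo.img, purely for illustration):

```shell
# Create a sparse file the same way as above (no sudo needed in /tmp)
truncate -s 32G /tmp/lxd-demo.img

# Apparent size shows 32G; actual disk usage is close to zero
ls -lh /tmp/lxd-demo.img
du -h /tmp/lxd-demo.img

# Clean up the demo file
rm /tmp/lxd-demo.img
```
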
Notes:
- If possible, put /var/lib/zfs on a fast storage device (e.g. SSD).
- The installed ZFS utilities can be used to query the pool (e.g. sudo zpool list -v lxd).
If you've installed LXD via the snap package then you don't need to install the ZFS tools or configure them manually (as shown above). All that's required is:
lxd init --auto --storage-backend zfs
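Whichever install method you used, you can confirm the resulting storage configuration afterwards with the LXD client:

```shell
# List the storage pools LXD knows about (driver, source, usage)
lxc storage list
```
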
Disabling IPv6
Currently Juju does not support IPv6. You will therefore need to disable it at the LXD level. Assuming an LXD bridge of 'lxdbr0':
lxc network set lxdbr0 ipv6.address none
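You can read the setting back to confirm it took effect (a sketch, assuming the same bridge name as above):

```shell
# Print the current value of the ipv6.address key on the bridge
lxc network get lxdbr0 ipv6.address
```
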
Creating a controller
The Juju controller for LXD (the 'localhost' cloud) can now be created. Below, we call it 'lxd':
juju bootstrap localhost lxd
View the new controller machine like this:
juju machines -m controller
This example yields the following output:
Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.103.91.114  juju-b14348-0  xenial      Running
The controller's underlying container can be listed with the LXD client:
lxc list
Output:
+---------------+---------+----------------------+------+------------+-----------+
| NAME          | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| juju-b14348-0 | RUNNING | 10.103.91.114 (eth0) |      | PERSISTENT | 0         |
+---------------+---------+----------------------+------+------------+-----------+
See more examples of Creating a controller with the localhost cloud.
Additional LXD resources
Additional LXD resources provides more LXD-specific information.
Next steps
A controller is created with two models - the 'controller' model, which should be reserved for Juju's internal operations, and a model named 'default', which can be used for deploying user workloads.
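Both models can be listed, and the 'default' model selected for workloads, with the Juju client:

```shell
# List the models on the current controller ('controller' and 'default')
juju models

# Make 'default' the current model for subsequent deploy commands
juju switch default
```
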
See these pages for ideas on what to do next: