This page makes use of information presented in the Offline mode strategies document. Please see that page for a full understanding.
There are three Juju entities to take into account when configuring Juju in a network-restricted environment. Each may require specific treatment depending on the backing cloud type and on how the local network is restricted. These are the client, the host from which juju commands are invoked; the controller, the Juju machine acting as controller; and the machines, the Juju workload machines that get created during charm deployment.
These entities interact in the following way: the client is responsible for creating the controller and then manages the controller through its API. The controller, in turn, is responsible for accessing and passing all needed resources to the machines.
The exceptions to this rule occur during controller creation: the client must itself access, and then transfer to the controller, both the 'juju-gui' charm and the Juju agent.
Recall that a Juju agent (jujud) runs on each machine (and unit) and acts as an intermediary between the machine/unit and the controller.
Below we examine what network access each of these three entities needs to various internet-based resources, in order to satisfy base requirements as well as requirements arising from local implementation decisions.
The client requires access to the backing cloud in order to create a controller. Most public clouds have a RESTful API that operates over TCP port 443. A special case is the localhost cloud, in which the client talks to the local LXD daemon.
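As a quick way to confirm that the cloud's API endpoint is reachable over port 443, a simple TCP probe can be used (the endpoint shown is illustrative, for AWS):

nc -zv ec2.us-east-1.amazonaws.com 443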
Official Ubuntu cloud images.
When using the localhost cloud, these images are required by the client in order to create a controller and by the controller when creating further machines.
The machines require access if they will themselves be hosting LXD containers.
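For instance, a machine will host a container when a unit is placed into one at deploy time (the charm and machine number are illustrative):

juju deploy mysql --to lxd:0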
Juju agents.
Where Juju agents are stored online. Access is therefore required by the client in order to pass an agent to the newly-created controller, and later by the controller itself in order to pass agents to any subsequently-created machines.
Image metadata.
Used to map Juju series to cloud images. Required by the client for all cloud providers, with the exception of the MAAS cloud, which maintains its own registry of images.
The Ubuntu package archive.
Required by the controller and the machines. Used for providing software needed to set up the controller (e.g. juju-mongodb) as well as for package management (e.g. updates). In addition, charms deployed on the machines typically call for packages to be installed.
Ubuntu security package updates.
Recommended for the controller and the machines. Note that all security updates eventually end up in the Ubuntu package archive via the '-updates' pocket.
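For reference, a machine's /etc/apt/sources.list typically contains entries along these lines (the series shown is illustrative):

deb http://archive.ubuntu.com/ubuntu bionic main universe
deb http://archive.ubuntu.com/ubuntu bionic-updates main universe
deb http://security.ubuntu.com/ubuntu bionic-security main universe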
The Charm Store. Required by the controller so that charms can be deployed on the machines. The client only requires access if the juju-gui charm is deployed on the controller (the default behaviour).
See Deploying charms offline for how to manage charms locally.
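As a minimal example, a charm that has already been fetched to a local directory can be deployed without Charm Store access (the path is illustrative):

juju deploy ./mycharm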
Some charms require auxiliary site support (for pulling down resources). Popular sites include https://ppa.launchpad.net and https://github.com. Therefore, the machines may need access to these.
The following table summarizes the above information. An X designates either a hard requirement or a requirement based on local factors (see the footnotes). Remember that when using the localhost cloud, the controller resides on the same host as the client, in the form of a LXD container.
| Resource | Client | Controller | Machines |
|---|---|---|---|
| Backing cloud | X | | |
| http://cloud-images.ubuntu.com | X [1] | X [3] | X [4] |
| Juju agents | X | X | |
| Ubuntu package archive | | X | X |
| Ubuntu security updates | | X | X |
| Charm Store | X [2] | X | |
| Auxiliary sites | | | X |
[1]: Required for the localhost cloud only.
[2]: Not needed if the --no-gui option is used with the bootstrap command. See The Juju GUI.
[3]: Required for the localhost cloud only.
[4]: Required if the machines will host LXD containers.
Note: The above table does not take into account the packaging needs (e.g. package updates) of the client host system.
The client is made aware of proxy settings via the shell that it is running under (e.g. Bash). This is done by exporting the relevant environment variables, such as http_proxy, https_proxy, and no_proxy.
Both the controller and the machines are configured at the model level, so both entities can be configured together. Keep in mind, however, that the controller is configured via the 'controller' model, while the machines are configured via whatever model will eventually contain them. This information is passed during controller creation, since the controller needs to download Ubuntu packages in order to provision itself.
Juju has several offline-related model configuration settings at its disposal, including apt-mirror, agent-metadata-url, image-metadata-url, and the proxy settings http-proxy, https-proxy, and no-proxy.
Models are configured at controller-creation time by passing either the --config option or the --model-default option to the bootstrap command. The latter is the more powerful choice since it applies to all existing and future models, whereas --config only affects the 'controller' and 'default' models.
Read Configuring models for details on how a model can be configured.
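For instance, an existing model can be given proxy settings after the fact with the model-config command (assuming the proxy URL is held in $PROXY_HTTP):

juju model-config http-proxy=$PROXY_HTTP https-proxy=$PROXY_HTTP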
To set an HTTP proxy for the client, whose URL is given by $PROXY_HTTP:
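export http_proxy=$PROXY_HTTP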
To configure all models to use an APT mirror, whose URL is given by $MIRROR_APT, while creating an AWS-based controller:
juju bootstrap --model-default apt-mirror=$MIRROR_APT aws
To have all models of a localhost cloud use local resources for both Juju agent binaries and cloud images, whose URLs are given by $LOCAL_AGENTS and $LOCAL_IMAGES, respectively:
juju bootstrap \
    --model-default agent-metadata-url=$LOCAL_AGENTS \
    --model-default image-metadata-url=$LOCAL_IMAGES \
    localhost
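Afterwards, the defaults that will apply to all models can be inspected with:

juju model-defaults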