These instructions will help you create a Kubernetes cluster on a set of Nimbus instances. This procedure takes approximately two hours of elapsed time.


Procedure

Note

Before beginning, ensure that you have a running Ubuntu instance on Nimbus, and that you can connect to it.
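
For example, the following is a minimal connectivity check. The key path, the ubuntu login name, and YOUR_INSTANCE_IP are placeholders; substitute the values for your own instance and keypair.

No Format
ssh -i ~/.ssh/YOUR_KEY ubuntu@YOUR_INSTANCE_IP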

1. Install Juju on your seed instance

1.1. Run this command to install Juju. It will take approximately 15 minutes to complete.

No Format
sudo snap install juju --classic

1.2. When the installation completes, list the clouds Juju has preconfigured. Note that Nimbus is not listed:

No Format
juju clouds --all

1.3. Create a working directory in your home directory:

No Format
mkdir ~/juju

1.4. Add configuration for the Nimbus cloud:

No Format
cat > ~/juju/nimbus-cloud.yaml <<EOF
clouds:
  nimbus:
    type: openstack
    auth-types: [userpass]
    endpoint: https://nimbus.pawsey.org.au:5000/v3
    regions:
      RegionOne:
        endpoint: https://nimbus.pawsey.org.au:5000/v3
EOF

juju add-cloud --client nimbus ~/juju/nimbus-cloud.yaml
juju clouds --all
  • There should now be a 'nimbus' entry in the listing
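
As an optional check, you can also display the stored cloud definition (assuming your Juju release supports the --client option, as used above):

No Format
juju show-cloud --client nimbus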

1.5. Add your credentials for the Nimbus cloud:

No Format
cat > ~/juju/credentials.yaml <<EOF
credentials:
  nimbus:
    CHANGE_THIS_USERNAME-nimbus:
      auth-type: userpass
      password: 'CHANGE_THIS_PASSWORD'
      project-domain-name: pawsey
      tenant-name: CHANGE_THIS_PROJECT
      user-domain-name: pawsey
      username: CHANGE_THIS_USERNAME
EOF

# store your nimbus username, project name, and password in these variables
OS_USERNAME=          # type your Nimbus username after the =
OS_PROJECT_NAME=      # type your Nimbus project name after the =
read -s OS_PASSWORD   # run this, then type your Nimbus password (it is not echoed)

# then replace them in the file
sed -i'' -e "s/CHANGE_THIS_PROJECT/$OS_PROJECT_NAME/" ~/juju/credentials.yaml
sed -i'' -e "s/CHANGE_THIS_USERNAME/$OS_USERNAME/" ~/juju/credentials.yaml
sed -i'' -e "s/CHANGE_THIS_PASSWORD/$OS_PASSWORD/" ~/juju/credentials.yaml
unset OS_PASSWORD

juju add-credential --client nimbus -f ~/juju/credentials.yaml
rm ~/juju/credentials.yaml
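
As a quick sanity check before continuing, confirm that the credential was registered with the Juju client (the password itself is never displayed):

No Format
juju credentials --client
  • There should be a nimbus entry, with a credential named after your username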

2. Prepare an image for use by Juju

2.1. Run these commands to generate the metadata for an image. In this case we have given the IMAGE_ID for the Ubuntu 18.04 (bionic) image.

No Format
IMAGE_ID=674eaa55-a335-4e2c-b750-23423983fd2a
OS_SERIES=bionic
REGION=RegionOne
OS_AUTH_URL=https://nimbus.pawsey.org.au:5000/v3
mkdir ~/juju/simplestreams
juju metadata generate-image -d ~/juju/simplestreams -i $IMAGE_ID -s $OS_SERIES -r $REGION -u $OS_AUTH_URL
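
The generate-image command writes simplestreams metadata files under the directory given with -d. To confirm they were created (exact file names can differ between Juju versions):

No Format
find ~/juju/simplestreams -type f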

3. Use Juju to bootstrap the controller instance

During this step, you need to get the network ID of your private network. This should be connected via your router to the Public external network. You can check this at the Network Topology page. To bootstrap the controller instance:

3.1. Go to the Networks page.
3.2. Click on your network, which will be named something like projectname-network.
3.3. Select and copy the ID (not Project ID), ready to paste into the NETID=... command below.
3.4. Back on the seed instance, run the following commands. The final bootstrap command will take approximately 20 minutes to process. While it is running, you can check https://nimbus.pawsey.org.au/horizon/project/instances/ to see a new controller instance being created.

No Format
NETID=YOUR_NET_ID   # paste your network ID here in place of YOUR_NET_ID
PUBEXT=dfb2cfd9-b746-410d-ab4b-f2e7d5bafacf
juju bootstrap nimbus nimbus-k8s-controller \
 --constraints "arch=amd64" \
 --metadata-source ~/juju/simplestreams \
 --model-default network=$NETID \
 --model-default external-network=$PUBEXT \
 --model-default use-floating-ip=true \
 --bootstrap-constraints "instance-type=n3.1c4r"
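
When the bootstrap completes, you can confirm that the new controller is registered and reachable:

No Format
juju controllers
juju status -m controller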

4. Create a new model for the Kubernetes deployment

Models are Juju's concept of a workspace, so it is a good idea to create one for each deployment to keep deployments encapsulated, even if you only have one. Refer to https://juju.is/docs/models for more information.

4.1. Create the model, then switch to it to ensure you are using it.

No Format
juju add-model nimbus-k8s-model
juju switch nimbus-k8s-model
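
To confirm which model is active at any point, list the models; the current one is marked with an asterisk:

No Format
juju models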


Info

From here onward, you can revert to this point in the instructions by deleting the model and recreating it:

No Format
juju destroy-model -y nimbus-k8s-model

Then go back and start again from step 4.

5. Use Juju to deploy Kubernetes and install kubectl

5.1. Install Kubernetes.

  • Installing Kubernetes takes approximately one hour.
    • If you are familiar with the screen program, this would be a good time to use it.
  • Run the juju status command (wrapped in watch below) to see installation progress.
  • Once everything is active and started, the installation is ready to use.
  • Note: This approach installs Kubernetes with default machine constraints. For further information, including how to increase those constraints, refer to https://jaas.ai/canonical-kubernetes
No Format
juju status
juju deploy charmed-kubernetes
watch -c 'juju status --color'
# Ctrl-C to stop the 'watch' command

5.2. While waiting, use another window to install kubectl, the Kubernetes command-line client. This will take approximately 10 minutes.

No Format
sudo snap install kubectl --classic
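
The cluster is not ready yet, but you can already confirm that the client installed correctly by printing its version:

No Format
kubectl version --client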

6. Configure the kubectl Kubernetes client

6.1. Use the following commands to configure the client and check that it can reach the cluster:

No Format
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
kubectl get all
kubectl cluster-info
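
If the configuration was copied correctly, kubectl can also list the worker machines that Juju deployed; each node should eventually report a Ready status:

No Format
kubectl get nodes -o wide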

7. (Optional) Run an autoscaling stress test on Kubernetes

7.1. Use the commands below to run a stress test.

7.2. Run an Ubuntu pod, install stress, create an autoscaler and run stress:

No Format
kubectl run autoscale-test --image=ubuntu:18.04 --requests=cpu=1000m --command -- sleep 1800
pod=$(kubectl get pod | awk '/autoscale-test/ {print $1}')
kubectl exec $pod -- apt-get update
kubectl exec $pod -- apt-get install -y stress
kubectl autoscale deployment autoscale-test --cpu-percent=25 --min=1 --max=15
kubectl get hpa
kubectl exec $pod -- stress --cpu 2 --timeout 600s &
watch 'kubectl get pod; echo; kubectl get hpa'
# Ctrl-C to stop the 'watch' command

7.3. Stop and delete the pod and autoscaler:

No Format
kubectl delete hpa autoscale-test
kubectl delete deploy autoscale-test
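
To confirm the cleanup, list the autoscaler, deployments, and pods again; the autoscale-test entries should disappear (pods may take a short time to terminate):

No Format
kubectl get hpa,deployments,pods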

...

Advanced Topics & Troubleshooting:

...