Some notes for running a local, multi-node Kubernetes cluster with k3d, and configuring it with nginx-ingress.
This is a useful approach for learning how the various components fit together, and it enables local testing without having to spin up a full-blown Kubernetes cluster on a cloud provider.
Configure and Run K3d
Create a config file, k3d-conf.yaml.
Notable parts:

- ports: configures the provided nginx loadbalancer to be available on localhost port 8080
- k3s extraArgs: prevents the default svc loadbalancer from being deployed to the nodes
```yaml
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: mycluster
image: rancher/k3s:v1.26.0-k3s2
agents: 3
ports:
  - port: 8080:80 # same as '--port 8080:80@loadbalancer'
    nodeFilters:
      - loadbalancer
```
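With the config in place, the cluster can be created and nginx-ingress installed. A sketch of the commands, assuming Helm is available and the config is saved as k3d-conf.yaml:

```shell
# Create the multi-node cluster from the config file
k3d cluster create --config k3d-conf.yaml

# Install nginx-ingress from its official Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
```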
Once nginx-ingress is installed, inspecting the pods and services should show something like this:
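The pods and services can be listed with kubectl, e.g.:

```shell
kubectl get pods,svc --all-namespaces
```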
The svclb-ingress-nginx-controller-* pods are created by K3s via a Service controller in reaction to the creation of the ingress-nginx-controller service.
Deploy a Workload
At this point a workload can be deployed, via a Deployment, Service and Ingress.
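As a sketch, a minimal set of manifests might look like this (the hello name and image are illustrative; the Ingress assumes the nginx ingress class):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginxdemos/hello # illustrative image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
```

With these applied, the workload should be reachable through the loadbalancer on localhost port 8080.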
Following on from my previous posts looking at Remote Containers in Visual Studio Code, I wanted to take a look at debugging.
It turns out this is straightforward; the configuration for debugging an application inside a container is exactly the same as for debugging locally, namely requiring a launch.json file configured under the .vscode directory.
The example below is for a simple nodejs application:
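A minimal launch.json for launching a Node app might look like the following (the program path is an assumption based on a src/index.js layout):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "skipFiles": ["<node_internals>/**"],
      "program": "${workspaceFolder}/src/index.js"
    }
  ]
}
```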
With this in place, simply start your application from either the 'Run' tab or the command palette (Shift + Ctrl + P, Debug), then set breakpoints as you please:
VS Code provides a full-blown debug environment, with breakpoints, code stepping, variable watches etc all supported.
A handy hint: VS Code supports IntelliSense in launch.json; pressing Ctrl + Space will bring up a list of suggestions which will generate the appropriate entries.
That's really all there is to it. Further documentation on debugging Node in VS Code can be found here, including how to attach rather than launch, and how to configure 'skip files' to avoid certain source files.
My last post detailed developing inside containers with Visual Studio, making it possible to use existing docker images as a fully fledged development environment.
This post will dive a bit deeper, looking at how we can use Docker Compose to spin up multiple containers to support scenarios where we might want to make use of additional services or APIs.
Step 1: Move to Docker Compose
First, we will update our .devcontainer configuration to use docker-compose, instead of just a straight Dockerfile. There are three components to this:

Dockerfile

In this case the Dockerfile defines the container inside which we will do our development; this remains unchanged in this simple example:
```dockerfile
ARG VARIANT="14-buster"
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
```
devcontainer.json
This file controls how Visual Studio Code will handle remote containers for development. It is slightly different to the previous example as it will refer to a docker-compose file:
```json
{
    "name": "Node.js",
    "dockerComposeFile": "./docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspace",

    // Set *default* container specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },

    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "dbaeumer.vscode-eslint"
    ],

    "remoteUser": "node"
}
```
Some of the important items to note:

- dockerComposeFile: path to the docker-compose file
- service: the name of the container from the docker-compose file which will be used as the dev container
- remoteUser: required when the container is configured with a non-root user
docker-compose.yml
Finally, the docker-compose.yml file defines the containers and services we want to spin up. To start with, this just replicates the dev container:
```yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        VARIANT: 14
    volumes:
      - ..:/workspace:cached
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Use a non-root user for all processes.
    user: node
```
With these elements in place, it should be possible to execute a container rebuild ('Remote-Containers: Rebuild Container' from the command palette):
Step 2: Add Additional Services
Now that we are using Docker Compose, we can simply update docker-compose.yml to include the additional containers we want, like this:
```yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        VARIANT: 14
    volumes:
      - ..:/workspace:cached
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the deepstack-ai container; allows "forwardPorts" in devcontainer.json to function.
    network_mode: service:deepstack-ai
    # Use a non-root user for all processes.
    user: node

  deepstack-ai:
    image: deepquestai/deepstack:latest
    volumes:
      - localstorage:/datastore
    environment:
      - VISION-DETECTION=True

volumes:
  localstorage:
```
After executing another rebuild, you should see VS Code pull down the appropriate images, and spin everything up, leaving you with both containers running and ready to use:
Summary
Docker Compose can be used with Visual Studio Code's Remote Containers to make it possible to spin up multiple containers as needed. This is useful when we want to build on existing services, such as a database or an AI API.
Visual Studio Code now offers the ability to use a docker container as a fully fledged development environment with the introduction of the Remote Containers extension.
Workspace files are made accessible from inside a container which can also host the tools relevant to the development environment, leaving VS Code acting as a remote UI to enable a 'local quality' development experience:
The obvious benefit here is the ability to very rapidly spin up a development environment through the use of pre-existing containers which already provide all required components.
Starting Up
The first thing to do is to create the config files that will tell VS Code how to configure the environment; this can be done by executing 'Add Development Container Configuration Files' (Ctrl + Shift + P):
This will create devcontainer.json and Dockerfile files under .devcontainer within the workspace.
The Dockerfile defines the container that Code will create and then connect to for use as a development environment. A bare bones Dockerfile for use with a Node app may look like this:
```dockerfile
FROM node:slim
USER node
```
devcontainer.json defines how VS Code should work with a remote container. A simple example below shows how to reference the Dockerfile:
```json
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.140.1/containers/typescript-node
{
    "name": "TriggerService",
    "build": {
        "dockerfile": "Dockerfile"
    },
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    "extensions": [
        "dbaeumer.vscode-eslint",
        "ms-vscode.vscode-typescript-tslint-plugin"
    ],
    "remoteUser": "node"
}
```
With both of these files in place, VS Code will prompt to re-open in the container environment (or use the command palette to execute 'Reopen in Container'):
Once started up, an indicator in the bottom left shows that VS Code is currently connected to a container:
Create a Simple App
At this point VS Code is now connected to the node:slim container as configured in the Dockerfile.
Because this image provides everything needed to start developing a Node application, we can start by using npm to install Express:
```shell
npm init -y
npm install express
```
Then create index.js under the src folder:
```javascript
const express = require("express");
const app = express();
const port = 8080;

// define a route handler for the default home page
app.get("/", (req, res) => {
  res.send("Hello world!");
});

// start the Express server
app.listen(port, () => {
  console.log(`server started at http://localhost:${port}`);
});
```
Next we need to update the package.json file to set the main entry point and start command:
```json
{
  "name": "test-app",
  "version": "1.0.0",
  "description": "",
  "main": "src/index.js",
  "scripts": {
    "start": "node .",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}
```
Now executing the following command from the terminal will start up the application inside the container:
```shell
npm run start
```
The key thing to note here is that we stood up this simple Node app without ever having to actually install Node on our host system; everything was pulled down via the node:slim docker image.
At this point the application is exposed on port 8080, so can be accessed at http://localhost:8080.
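For example, from another terminal:

```shell
curl http://localhost:8080
# Hello world!
```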
What's Next?
We have only covered enough here to get up and running, barely scratching the surface of what can be done with remote containers.
Next up, debugging from inside a container, and using docker compose to handle spinning up multiple containers.
Messing around with Vagrant again, this time using Ansible to automate configuration post deployment.
Ansible is billed as an automation platform which makes it easier to deploy systems and applications. It does this through a scripting framework which supports a wide range of functionality covering deployment and configuration.
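As an illustration, a playbook is a YAML file describing tasks to run against hosts; a minimal sketch (the package is illustrative, and apt assumes a Debian-based guest):

```yaml
# playbook.yml - install and start nginx on all hosts
- hosts: all
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```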
Vagrant Config
To define which Ansible playbooks should be run, the vm.provision config can be used in a Vagrantfile:
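A sketch of the Vagrantfile configuration (the playbook filename is an assumption):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```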
By default, no networking is enabled (outside of Vagrant's internal management mechanism), so one of the following network options must be configured in the Vagrantfile to make the VM accessible over the network.
Port Forwarding / NAT
The most basic network configuration forwards traffic from the host machine to the guest VM only on specific ports. By default only TCP is forwarded; config looks like this:
```ruby
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
```
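UDP ports need to be forwarded explicitly via the protocol option; for example:

```ruby
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 53, host: 5353, protocol: "udp"
end
```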
Vagrant will also detect configuration conflicts where the same port is in use multiple times, and will prevent deployment of such a config.
Private Network
Private networks provide host-only access to the guest VM; that is, the networking is not bridged, and the guest will not be accessible from outside the host machine.
Config looks like this when assigning an address via DHCP:
```ruby
Vagrant.configure("2") do |config|
  config.vm.network "private_network", type: "dhcp"
end
```
Public Network
Public networks provide access to the guest VM which is available externally to the host system. Depending on provider, this is achieved through bridging, making the guest VM as public as the host machine is.
By default, DHCP is used for assigning addresses; config looks like this:
```ruby
Vagrant.configure("2") do |config|
  config.vm.network "public_network"
end
```
Static IPs
The 'ip' config setting can be used to assign specific IPs for both private and public networks:
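For example, on a private network (the address is illustrative):

```ruby
Vagrant.configure("2") do |config|
  config.vm.network "private_network", ip: "192.168.50.4"
end
```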
Note that there appears to be a bug in Vagrant 1.9.1 that prevents static IPs being applied properly in some RHEL-based images. A workaround is to force the interface to come up by adding an additional provisioning line in the Vagrantfile:
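The extra provisioning line might look something like this (the interface name eth1 is an assumption; check the actual name with ip addr on the guest):

```ruby
config.vm.provision "shell", run: "always", inline: "ifup eth1"
```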