Incus is a hypervisor/manager for virtual machines and application/system containers.
A virtual machine (VM) is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses hardware virtualization features to stay separated from the main operating system.
A system container is also an instance of an operating system that runs on a computer alongside the main operating system. A system container, however, uses security primitives of the Linux kernel for the separation from the main operating system. A system container follows the lifecycle of a computer system: you boot it up and shut it down like a machine. You can think of system containers as software virtual machines.
An application container is a container that packages an application or service. It follows the lifecycle of the application instead of a system; that is, here you start and stop the application instead of booting and shutting down a system. Incus supports Open Container Initiative (OCI) images, such as Docker images. When Incus launches an OCI image, it uses its own runtime, not Docker’s. That is, Incus can consume images from any OCI image repository.
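For instance, here is a minimal sketch of consuming an OCI image straight from Docker Hub; the remote name oci-docker is arbitrary, and the commands assume a recent Incus release with OCI support:
# add Docker Hub as an OCI remote, then launch an application container from it
$ incus remote add oci-docker https://docker.io --protocol=oci
$ incus launch oci-docker:nginx mynginx
$ incus list mynginx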
In virtual machines and system/application containers we can attach virtual networking devices, either
- none (i.e. an instance without networking),
- one (i.e. the most common and simple case), or
- more than one.
In addition to the virtual networking devices, we can also attach real hardware networking devices. Such a device is taken away from the host and pushed into a virtual machine or system container.
You may use a combination of those networking devices in the same instance. Exploring that road is left as an exercise to the reader, though the sketch below is a starting point. In these tutorials we look at at most one networking device per instance.
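For a taste of what those look like, here is a hedged sketch that adds extra NICs to an existing instance; myinstance, eth1, eth2 and enp7s0 are illustrative names, and enp7s0 must be a spare interface on your host:
# pass a real host interface into the instance; the host loses it while the device is attached
$ incus config device add myinstance eth1 nic nictype=physical parent=enp7s0 name=eth1
# add a second virtual NIC, served by another managed bridge
$ incus config device add myinstance eth2 nic network=incusbr1 name=eth2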
There will be attempts to generalize and explain in practical terms. If I get something wrong, please correct me in the comments so that it gets fixed and we all learn something new. Note that I will be editing this content along the way, adding material, troubleshooting cases, etc.
In this post we list tutorials for the different Incus devices of type nic (network interface controller). Whatever we write in this post and the linked tutorials is covered in the Incus documentation.
The list of tutorials per networking device type:
- bridge (the default, the local network bridge): covered in this post, below
- bridged: (pending)
- macvlan: (pending)
- none
- physical
- ipvlan
- routed
The setup
When demonstrating these network configurations, we will be using an Incus VM. While learning, try things inside that Incus VM before applying them to your host or your server.
We launch an Incus VM, called tutorial, with Ubuntu 24.04 LTS, then get a shell with the default non-root account ubuntu. I am impatient and type the incus exec command repeatedly to get a shell. The VM takes a few moments to boot up, and I get interesting error messages until the VM is actually running. They are not really relevant to this tutorial, but you get educated at every opportunity.
$ incus launch images:ubuntu/24.04/cloud tutorial --vm
Launching tutorial
$ incus exec tutorial -- su -l ubuntu
Error: VM agent isn't currently running
$ incus exec tutorial -- su -l ubuntu
su: user ubuntu does not exist or the user entry does not contain all the required fields
$ incus exec tutorial -- su -l ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@tutorial:~$
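If you would rather not retype the command, a small wait loop can do the polling for you; this is just a convenience sketch that waits until the VM agent responds and cloud-init has created the ubuntu account:
# poll until the VM agent is up and the ubuntu user exists, then get the shell
$ until incus exec tutorial -- id ubuntu >/dev/null 2>&1; do sleep 2; done
$ incus exec tutorial -- su -l ubuntu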
We got a shell in the VM. Next, we install Incus, which is available in the default repositories of Ubuntu 24.04 LTS. We also install zfsutils-linux, the client utilities for using ZFS with Incus. We are advised to add our non-root account to the incus-admin group in order to have access to Incus; without that, we would have to use sudo all the time. When you add a user to a group, you need to log out and then log in again for the change to take effect. And this is what we do (unless you know about newgrp).
ubuntu@tutorial:~$ sudo apt install -y incus zfsutils-linux
...
Creating group 'incus' with GID 989.
Creating group 'incus-admin' with GID 988.
Created symlink /etc/systemd/system/multi-user.target.wants/incus-startup.service → /usr/lib/systemd/system/incus-startup.service.
Created symlink /etc/systemd/system/sockets.target.wants/incus-user.socket → /usr/lib/systemd/system/incus-user.socket.
Created symlink /etc/systemd/system/sockets.target.wants/incus.socket → /usr/lib/systemd/system/incus.socket.
incus.service is a disabled or a static unit, not starting it.
incus-user.service is a disabled or a static unit, not starting it.
Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.
...
ubuntu@tutorial:~$ sudo usermod -a -G incus-admin ubuntu
ubuntu@tutorial:~$ logout
$ incus exec tutorial -- su -l ubuntu
ubuntu@tutorial:~$
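As an aside, newgrp avoids the logout/login dance; it starts a subshell in which the new group membership is already active:
# start a subshell with incus-admin membership in effect immediately
ubuntu@tutorial:~$ newgrp incus-admin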
Now we initialize Incus with sudo incus admin init.
Default Incus networking
When you install and set up Incus with incus admin init, you are prompted whether you want to create a local network bridge. We press Enter at all prompts, which means that we accept all the defaults that are presented to us. The last question is whether to show the initialization configuration. If you missed it, you can get it after the fact by running incus admin init --dump (which dumps the configuration).
ubuntu@tutorial:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (zfs, dir) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 5GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
ubuntu@tutorial:~$
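That YAML preseed is also handy for non-interactive setups. A sketch, assuming you saved the dump to a file called preseed.yaml and are configuring a fresh Incus installation:
# replay the saved configuration instead of answering the prompts
$ incus admin init --preseed < preseed.yaml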
If you accept the defaults (i.e. press Enter at each prompt) or type the values explicitly, you get a local bridge named incusbr0 that is managed by Incus and gives private IPv4 and IPv6 addresses to your newly created instances.
Let’s see them in practice in your Incus installation. You have configured Incus, and Incus created a default profile, called default, for you. This profile is applied by default to all newly created instances and contains the networking configuration. In that profile there are two devices, and one of them is the networking device. In Incus the device is called eth0, and inside the instance it will also be shown as eth0. On the host, the bridge will appear with the name incusbr0. It is a networking device, hence of type nic.
ubuntu@tutorial:~$ incus profile list
+---------+-----------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+-----------------------+---------+
| default | Default Incus profile | 0 |
+---------+-----------------------+---------+
ubuntu@tutorial:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
ubuntu@tutorial:~$
incusbr0 was created by Incus. Let’s see the details through the incus network commands. We first list the network interfaces and then we show the incusbr0 network interface. incusbr0 is a managed network interface, managed by Incus: Incus takes care of the networking and provides DHCP services and access to the upstream network (i.e. the Internet). incusbr0 is a network bridge. An instance that requires network configuration from incusbr0 will get an IP address from the range 10.180.234.2-254 (the bridge itself takes 10.180.234.1). Network Address Translation (NAT) is enabled, which means there is access to the upstream network, and likely the Internet.
ubuntu@tutorial:~$ incus network list
+----------+----------+---------+-----------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | USED BY | STATE |
+----------+----------+---------+-----------------+---------+---------+
| enp5s0 | physical | NO | | 0 | |
+----------+----------+---------+-----------------+---------+---------+
| incusbr0 | bridge | YES | 10.180.234.1/24 | 1 | CREATED |
+----------+----------+---------+-----------------+---------+---------+
ubuntu@tutorial:~$ incus network show incusbr0
config:
  ipv4.address: 10.180.234.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:7:7dfe:75cf::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$
Let’s launch a container and test these out. The instance gets an IP address that is within the range of the network bridge above.
ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine
Launching myalpine
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+----------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$
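We can cross-check that address from two angles: Incus keeps track of the DHCP leases it hands out on a managed bridge, and the bridge is an ordinary Linux interface on the host:
# the DHCP leases handed out on the managed bridge
ubuntu@tutorial:~$ incus network list-leases incusbr0
# the bridge as seen by the host's own network stack
ubuntu@tutorial:~$ ip -4 addr show incusbr0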
The IP address is OK, but could it look better? It’s private anyway, and we can select anything from the 10.0.0.0/8 range (any 10.x.y.z address). Let’s change it so that the bridge uses 10.10.10.1-254 instead. We set the ipv4.address configuration key of incusbr0 (see earlier) to a new value, 10.10.10.1/24. Each dot-separated number is 8 bits in length, and /24 means that the first 3 * 8 = 24 bits, i.e. the 10.10.10 part, stay the same across the subnet. We make the change, but the instance still has the old IP address. We restart the instance, and it automatically gets a new IP address from the new range.
ubuntu@tutorial:~$ incus network set incusbr0 ipv4.address=10.10.10.1/24
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+----------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$ incus restart myalpine
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+--------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+----------+---------+--------------------+-----------+
| myalpine | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+----------+---------+--------------------+-----------+
ubuntu@tutorial:~$
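To convince ourselves that the renumbered network still works end to end, we can ping the new bridge address, which is the instance’s default gateway, from inside the instance:
# 10.10.10.1 is the bridge (and gateway) address we just configured
ubuntu@tutorial:~$ incus exec myalpine -- ping -c 1 10.10.10.1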
We have created incusbr0. Are we allowed to create another private bridge? Sure we are. We will call it incusbr1, and we will also disable IPv6 networking on it. IPv6 addresses are too wide and mess up the formatting on my blog. If you noticed, there were no IPv6 addresses in the earlier output although IPv6 was configured on incusbr0; I cheated and removed the IPv6 addresses from some command outputs.
ubuntu@tutorial:~$ incus network create incusbr1 ipv4.address=10.10.20.1/24 ipv6.address=none
Network incusbr1 created
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by: []
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$
We have created incusbr1. Can we now launch an instance onto that private bridge? We launch an instance called myalpine1 and use the incus launch parameter --network incusbr1 to specify a network other than the default network from the default Incus profile. We verify below that myalpine1 is served by incusbr1.
ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine1 --network incusbr1
Launching myalpine1
ubuntu@tutorial:~$ incus list -c ns4t
+-----------+---------+--------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+-----------+---------+--------------------+-----------+
| myalpine | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
| myalpine1 | RUNNING | 10.10.20.85 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by:
- /1.0/instances/myalpine1
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$
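A related trick worth knowing: an existing instance can be moved onto the new bridge by overriding the profile-provided device. A sketch; incus config device override copies the profile’s eth0 into the instance’s local configuration with the new value:
# point the instance's eth0 at incusbr1 instead of the profile's incusbr0
ubuntu@tutorial:~$ incus config device override myalpine eth0 network=incusbr1
ubuntu@tutorial:~$ incus restart myalpine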
Technical details
The instances that use the Incus private bridge have access to the Internet. How is this achieved? It’s achieved with either iptables or nftables rules. On recent Linux distributions you would be using nftables by default (command: nft, no relation to NFTs). To view the firewall ruleset that was created by Incus, run sudo nft list ruleset. Here is my ruleset; yours should look similar. There is one table for Incus and four chains: a postrouting (pstrt), a forward (fwd), an input (in) and an output (out) chain. More at the nftables documentation site.
ubuntu@tutorial:~$ sudo nft list ruleset
table inet incus {
  chain pstrt.incusbr0 {
    type nat hook postrouting priority srcnat; policy accept;
    ip saddr 10.57.39.0/24 ip daddr != 10.57.39.0/24 masquerade
    ip6 saddr fd42:e7b:739c:7117::/64 ip6 daddr != fd42:e7b:739c:7117::/64 masquerade
  }
  chain fwd.incusbr0 {
    type filter hook forward priority filter; policy accept;
    ip version 4 oifname "incusbr0" accept
    ip version 4 iifname "incusbr0" accept
    ip6 version 6 oifname "incusbr0" accept
    ip6 version 6 iifname "incusbr0" accept
  }
  chain in.incusbr0 {
    type filter hook input priority filter; policy accept;
    iifname "incusbr0" tcp dport 53 accept
    iifname "incusbr0" udp dport 53 accept
    iifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
    iifname "incusbr0" udp dport 67 accept
    iifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
    iifname "incusbr0" udp dport 547 accept
  }
  chain out.incusbr0 {
    type filter hook output priority filter; policy accept;
    oifname "incusbr0" tcp sport 53 accept
    oifname "incusbr0" udp sport 53 accept
    oifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
    oifname "incusbr0" udp sport 67 accept
    oifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
    oifname "incusbr0" udp sport 547 accept
  }
}
ubuntu@tutorial:~$
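If you only want the table that Incus manages rather than the full ruleset, ask nft for it by name:
# show just the Incus-managed table
ubuntu@tutorial:~$ sudo nft list table inet incus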
Future considerations
- Network isolation.