84 changes: 47 additions & 37 deletions README.md
@@ -9,7 +9,6 @@ IPMan is a Kubernetes operator that simplifies the management of IPSec connectio
- Creates and manages IPSec VPN connections between Kubernetes nodes and remote endpoints
- Handles routing configuration automatically
- Provides IP pool management for your workloads
- Enables secure communication through VPN tunnels

## Installation

@@ -40,7 +39,27 @@ IPMan requires a secret for IPSec authentication:
kubectl create secret generic ipsec-secret -n default --from-literal=example=yourpresharedkey
```

### Step 2: Create an IPSecConnection
### Step 2: Create a charon group
A charon group can contain many IPSec connections.
A typical charon group looks like this:

```yaml
apiVersion: ipman.dialo.ai/v1
kind: CharonGroup
metadata:
name: charongroup1
namespace: default
spec:
hostNetwork: true
nodeName: node1
```
Here we specify that the other side of the VPN connection points to an IP address
assigned to a host interface on one of our nodes, `node1`.

For example, we could have an `enp0s1` interface with the address `192.168.10.201` on `node1`;
the next steps assume this is the case.
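
As a quick sanity check (assuming the interface name above), you can confirm the address is present on `node1`:

```bash
# List the IPv4 addresses assigned to enp0s1; expect 192.168.10.201 here
ip -4 addr show dev enp0s1
```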

### Step 3: Create an IPSecConnection

Create an IPSecConnection Custom Resource (CR) to establish a VPN connection:

@@ -52,10 +71,10 @@ metadata:
namespace: ipman-system
spec:
name: "example"
remoteAddr: "192.168.1.2"
localAddr: "192.168.1.1"
localId: "192.168.1.1"
remoteId: "192.168.1.2"
remoteAddr: 192.168.10.204
remoteId: 192.168.10.204
localAddr: 192.168.10.201
localId: 192.168.10.201
secretRef:
name: "ipsec-secret"
namespace: default
@@ -73,19 +92,32 @@ spec:
- "10.0.1.0/24"
xfrm_ip: "10.0.2.1/24"
vxlan_ip: "10.0.2.2/24"
if_id: 102
if_id: 101
ip_pools:
primary:
- "10.0.2.3/24"
- "10.0.2.4/24"
nodeName: "your-node-name"
```
This CR looks a lot like a strongSwan configuration file, with the following added fields:
1. `secretRef`
This replaces the `secrets` section of the strongSwan config file.
Point it at the secret created in step 1, which contains the PSK.
2. `xfrm_ip` and `vxlan_ip`
These are largely arbitrary, with the exception that they have to be in the subnet defined in `local_ips`.
For most use cases you can choose them freely; just make sure they don't conflict between connections and you will be good to go.
3. `if_id`
This has to be unique within a single node, since it specifies the ID of the xfrm interface that strongSwan and the Linux kernel use to route
IPSec packets.
4. `ip_pools`
This is the list of IPs that will be handed out to pods that are supposed to be in the VPN, so again they have to be IPs defined in
`local_ips`. They are split into named pools. Here we name our pool `primary`, but you can use any name. This helps when you share multiple services
with the other side of the VPN: you may want a pool `service1` and another `service2`, and in each you would put the IPs that the other side of the VPN
expects those services to be at, as shown in the sketch below.
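
As a minimal sketch (the pool names and addresses are illustrative assumptions, not defaults), the `ip_pools` of a child sharing two services could look like this:

```yaml
ip_pools:
  # IPs the other side expects service1 to be reachable at
  service1:
    - "10.0.2.3/24"
  # IPs the other side expects service2 to be reachable at
  service2:
    - "10.0.2.4/24"
```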

### Step 4: Deploy Workloads Using the VPN Connection

#### Required Annotations for Worker Pods
To route workload traffic through the VPN tunnel, add specific annotations to your Pods or Deployments. These annotations tell IPMan to allocate IPs
from the configured pools and set up the necessary routing.

```yaml
apiVersion: apps/v1
@@ -108,33 +140,11 @@ The operator will automatically:
2. Set up routing for your workloads
3. Configure bridge FDB entries for communication

## Configuration Reference
If your app requires a specific IP to bind to and you have multiple IPs in a pool, you don't necessarily know which pod will
get which IP. To help with that, an env var named `VXLAN_IP` is set in all worker pods, so in this example the pod could
get the IP `10.0.2.3/24` from the pool and the env var would contain the value `10.0.2.3`.
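
For example, an app could bind directly to that address on startup; here is a minimal sketch (the port is a placeholder assumption):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	// VXLAN_IP is set by IPMan in worker pods; it holds the bare IP
	// (e.g. "10.0.2.3") without the subnet mask.
	ip := os.Getenv("VXLAN_IP")
	if ip == "" {
		log.Fatal("VXLAN_IP is not set; is the pod annotated for IPMan?")
	}

	// Bind to the allocated VPN IP on a placeholder port.
	ln, err := net.Listen("tcp", fmt.Sprintf("%s:8080", ip))
	if err != nil {
		log.Fatalf("failed to bind to %s: %v", ip, err)
	}
	defer ln.Close()
	log.Printf("listening on %s", ln.Addr())
}
```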

### IPSecConnection CR Fields

| Field | Description |
|-------|-------------|
| `name` | Name for the IPSec connection |
| `remoteAddr` | Remote VPN endpoint address |
| `localAddr` | Local VPN endpoint address |
| `localId` | Local identification |
| `remoteId` | Remote identification |
| `secretRef` | Reference to Kubernetes secret containing pre-shared key |
| `children` | Map of child connections (for multiple tunnels) |
| `nodeName` | Kubernetes node to establish connection from |

### Child Connection Fields

| Field | Description |
|-------|-------------|
| `name` | Name for the child connection |
| `local_ips` | List of local networks/IPs for the tunnel |
| `remote_ips` | List of remote networks/IPs for the tunnel |
| `xfrm_ip` | IP for the xfrm interface |
| `vxlan_ip` | IP for the vxlan interface |
| `if_id` | Interface ID |
| `ip_pools` | Named IP pools available for allocation |
| `extra` | Additional StrongSwan configuration options |

## Troubleshooting

26 changes: 14 additions & 12 deletions internal/controller/ipman_controller.go
@@ -91,8 +91,8 @@ type InternalError struct {
}

// Error returns a formatted error string for RequestError
func (e *InternalError) Error() string {
return fmt.Sprintf("Internal error occured in '%s' while doing '%s': %s. Please open an issue on github with this error message. Env: %+v", e.Location, e.Action, e.Err.Error(), e.Environment)
func (e InternalError) Error() string {
return fmt.Sprintf("Internal error occurred in '%s' while doing '%s': %s. Please open an issue on GitHub with this error message. Env: %+v", e.Location, e.Action, e.Err.Error(), e.Environment)
}

// GetClusterNodes returns a list of all node names in the cluster
@@ -143,7 +143,8 @@ func (r *IPSecConnectionReconciler) GetClusterPodsByType(ctx context.Context, po
}

// ExtractCharonVolumeSocketPath gets the path to the Charon socket from a pod's volume definitions
func ExtractCharonVolumeSocketPath(p *corev1.Pod) string {
func ExtractCharonVolumeSocketPath(p *corev1.Pod, ctx context.Context) string {
logger := log.FromContext(ctx)
var CharonSocketVolume *corev1.Volume
for _, c := range p.Spec.Volumes {
if c.Name == ipmanv1.CharonSocketHostVolumeName {
@@ -160,7 +161,7 @@ func ExtractCharonVolumeSocketPath(p *corev1.Pod) string {
},
Err: fmt.Errorf("CharonSocketVolume is nil"),
}
fmt.Println(e.Error())
logger.Error(e, "Error extracting Charon Volume socket path")
}
return CharonSocketVolume.HostPath.Path
}
@@ -177,10 +178,10 @@ func ExtractContainerImage(p *corev1.Pod, containerName string) string {
}

// CharonFromPod converts a Kubernetes Pod into an IpmanPod with CharonPodSpec
func CharonFromPod(p *corev1.Pod) IpmanPod[CharonPodSpec] {
func CharonFromPod(p *corev1.Pod, ctx context.Context) IpmanPod[CharonPodSpec] {
return IpmanPod[CharonPodSpec]{
Spec: CharonPodSpec{
HostPath: ExtractCharonVolumeSocketPath(p),
HostPath: ExtractCharonVolumeSocketPath(p, ctx),
HostNetwork: p.Spec.HostNetwork,
},
Annotations: p.Annotations,
@@ -199,10 +200,10 @@ func CharonFromPod(p *corev1.Pod) IpmanPod[CharonPodSpec] {
}

// RestctlFromPod converts a Kubernetes Pod into an IpmanPod with ProxyPodSpec
func RestctlFromPod(p *corev1.Pod) IpmanPod[RestctlPodSpec] {
func RestctlFromPod(p *corev1.Pod, ctx context.Context) IpmanPod[RestctlPodSpec] {
return IpmanPod[RestctlPodSpec]{
Spec: RestctlPodSpec{
HostPath: ExtractCharonVolumeSocketPath(p),
HostPath: ExtractCharonVolumeSocketPath(p, ctx),
},
Group: ipmanv1.CharonGroupRef{
Name: p.Labels[ipmanv1.LabelGroupName],
@@ -219,28 +220,29 @@ func RestctlFromPod(p *corev1.Pod) IpmanPod[RestctlPodSpec] {
}

// GetClusterPodsAs retrieves cluster pods with a specific label and transforms them into typed IpmanPod objects
func GetClusterPodsAs[S IpmanPodSpec](ctx context.Context, r *IPSecConnectionReconciler, label string, transformer func(*corev1.Pod) IpmanPod[S]) ([]IpmanPod[S], error) {
func GetClusterPodsAs[S IpmanPodSpec](ctx context.Context, r *IPSecConnectionReconciler, label string, transformer func(*corev1.Pod, context.Context) IpmanPod[S]) ([]IpmanPod[S], error) {
IpmanPods := []IpmanPod[S]{}
ps, err := r.GetClusterPodsByType(ctx, label)
if err != nil {
return nil, err
}

for _, p := range ps {
IpmanPods = append(IpmanPods, transformer(&p))
IpmanPods = append(IpmanPods, transformer(&p, ctx))
}
return IpmanPods, nil
}
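
// For reference, a hypothetical call site after this change (the "charon"
// label value is an assumption, not taken from this PR). The transformer
// now also receives the context, so pod conversions can log through the
// contextual logger instead of printing to stdout:
//
//	charonPods, err := GetClusterPodsAs(ctx, r, "charon", CharonFromPod)
//	if err != nil {
//		return nil, err
//	}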

// XfrmFromPod converts a Kubernetes Pod into an IpmanPod with XfrmPodSpec,
// extracting properties and routes from pod annotations
func (r *IPSecConnectionReconciler) XfrmFromPod(p *corev1.Pod) IpmanPod[XfrmPodSpec] {
func (r *IPSecConnectionReconciler) XfrmFromPod(p *corev1.Pod, ctx context.Context) IpmanPod[XfrmPodSpec] {
specJSON := p.Annotations[ipmanv1.AnnotationSpec]
logger := log.FromContext(ctx)

spec := &XfrmPodSpec{}
err := json.Unmarshal([]byte(specJSON), spec)
if err != nil {
fmt.Printf("Error unmarshaling XfrmPodSpec: %v\n", err)
logger.Error(err, "Error unmarshaling XfrmPodSpec")
}
result := IpmanPod[XfrmPodSpec]{
Meta: PodMeta{
Expand Down