Deployment Assistant

Watch: Setup Video Tutorial
Quick walkthrough of deploying K8 Inspector to Kubernetes
About secrets and license keys:
  • License key - Not required as an environment variable. Enter it through the in-app Setup Wizard after deployment.
  • ENCRYPTION_KEY - Required for database encryption. Must be set as an environment variable and stored securely in Kubernetes Secrets, AWS Secrets Manager, Azure Key Vault, or your preferred secrets management solution.
Kubernetes expert? Skip this wizard and deploy manually using the manifests in k8s-manifests/

How would you like to deploy K8 Inspector?

This wizard provides step-by-step guidance for deploying K8 Inspector. It shows you the commands to run - you'll execute them manually in your terminal. If you're comfortable with Kubernetes, feel free to skip this and use the manifests in k8s-manifests/ directly.

Designed for In-Cluster Deployment

K8 Inspector is designed to run as a pod inside your Kubernetes cluster. This enables features like in-cluster service account authentication, direct API access, pod exec terminals, and real-time metrics. Running locally from a desktop will work for basic exploration, but many advanced features require in-cluster deployment.

Local Smoke Test

Run K8 Inspector on your desktop for a quick validation or to explore the UI before deploying to your cluster.

  • Quick smoke test
  • Explore UI and basic features
  • Limited features (no in-cluster access)

Kubernetes Deployment (Recommended)

Deploy K8 Inspector into your Kubernetes cluster for production monitoring, team access, and full feature set.

  • In-cluster service account
  • Proper secrets management
  • Ingress / Load Balancer access

Local Quick Start

Get K8 Inspector running locally for a quick smoke test or to explore the UI before deploying to your cluster.

Limited Functionality in Local Mode

Running from your desktop provides basic cluster visibility, but many features require in-cluster deployment:

  • Pod Exec/Terminals - May not work without direct cluster network access
  • Service Account Auth - Not available outside the cluster
  • Real-time Metrics - Requires in-cluster metrics-server access
  • Network Policies - Cannot test from outside the cluster

For full functionality, deploy K8 Inspector as a pod in your cluster.

Run with Docker

# Run K8 Inspector container
docker run -d \
  --name k8inspector \
  -p 3030:3030 \
  -v ~/.kube:/root/.kube:ro \
  -e NODE_ENV=production \
  -e SESSION_SECRET=$(openssl rand -hex 32) \
  k8inspector:2.0.1

# Open in browser
open http://localhost:3030

Run with Docker Compose

# Start K8 Inspector
docker-compose up -d

# View logs
docker-compose logs -f

# Open in browser
open http://localhost:3030

Run with Node.js (Development Only)

Development Only

Direct Node.js execution is for development and debugging. Use Docker or Kubernetes for any shared/production use.

# Install dependencies
cd backend && npm install && cd ..

# Start the application
./start.sh

# Open in browser
open http://localhost:3030

Next Step: Deploy to Your Cluster

Local testing is great for exploring the UI, but for the full experience with all features working correctly, deploy K8 Inspector as a pod inside your Kubernetes cluster.

Build & Push Container Image

Build the K8 Inspector Docker image and push it to your container registry so Kubernetes can pull it.

# Build the image
docker build -t k8inspector:2.0.1 .

# Login to ECR
aws ecr get-login-password --region YOUR_REGION | \
  docker login --username AWS --password-stdin YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com

# Tag and push
docker tag k8inspector:2.0.1 YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com/k8inspector:2.0.1
docker push YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com/k8inspector:2.0.1

# Build the image
docker build -t k8inspector:2.0.1 .

# Configure Docker for GCR
gcloud auth configure-docker

# Tag and push
docker tag k8inspector:2.0.1 gcr.io/YOUR_PROJECT/k8inspector:2.0.1
docker push gcr.io/YOUR_PROJECT/k8inspector:2.0.1

# Build the image
docker build -t k8inspector:2.0.1 .

# Login to ACR
az acr login --name YOUR_REGISTRY

# Tag and push
docker tag k8inspector:2.0.1 YOUR_REGISTRY.azurecr.io/k8inspector:2.0.1
docker push YOUR_REGISTRY.azurecr.io/k8inspector:2.0.1

# Build the image
docker build -t k8inspector:2.0.1 .

# Login to your registry
docker login YOUR_REGISTRY

# Tag and push
docker tag k8inspector:2.0.1 YOUR_REGISTRY/k8inspector:2.0.1
docker push YOUR_REGISTRY/k8inspector:2.0.1

Alternative: In-Cluster Build with Kaniko

No Docker daemon required: the image is built inside your Kubernetes cluster using Kaniko. This is useful for bastion hosts, air-gapped networks, and CI runners without a Docker socket. In a real-world run, this approach shipped a build from a bastion host into an EKS cluster in under 60 seconds end-to-end.

1. Build the distribution tarball locally

# From the k8inspector repo root
./scripts/build-dist.sh slim-secure --env=lemonsqueezy
# → dist/k8inspector-2.0.1-slim-secure.tar.gz (~7 MB)

2. Stage the tarball in S3 and generate a presigned URL

BUCKET=my-build-artifacts
REGION=YOUR_REGION
KEY=builds/k8inspector-2.0.1-slim.tar.gz

aws s3 cp dist/k8inspector-2.0.1-slim-secure.tar.gz \
  s3://${BUCKET}/${KEY} --region ${REGION}

CONTEXT_URL=$(aws s3 presign s3://${BUCKET}/${KEY} \
  --region ${REGION} --expires-in 3600)
echo "$CONTEXT_URL"

3. Apply the Kaniko build Pod

The --context-sub-path=k8inspector flag is required because the dist tarball wraps everything in a top-level k8inspector/ directory.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build-k8inspector
  namespace: infra
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.20.0
    args:
    - "--context=${CONTEXT_URL}"
    - "--context-sub-path=k8inspector"
    - "--destination=YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com/k8inspector:2.0.1-slim"
    - "--dockerfile=Dockerfile"
    - "--verbosity=info"
    - "--cleanup"
    volumeMounts:
    - name: docker-auth
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-auth
    secret:
      secretName: kaniko-ecr-auth
      items:
      - {key: config.json, path: config.json}
EOF

kubectl logs -f -n infra kaniko-build-k8inspector
# → "Pushed ...@sha256:..." typically in <30 s on a warm node

4. Roll out the new image

# If you reused the same image tag:
kubectl rollout restart deployment k8inspector -n infra
kubectl rollout status deployment k8inspector -n infra --timeout=180s

# Quick health check:
kubectl run curl-test --rm -i --restart=Never \
  --image=curlimages/curl:latest -n infra -- \
  -s -w "\nHTTP=%{http_code}\n" \
  http://k8inspector.infra.svc.cluster.local:3030/health

Prerequisite: The kaniko-ecr-auth Secret must hold a config.json pointing Kaniko at your registry credentials — for ECR the simplest form is the ecr-login credential helper combined with an IRSA-annotated ServiceAccount. Full walkthrough in AWS EKS Deployment Guide.
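A minimal sketch of what that Secret can look like, assuming the credential-helper approach described above (the registry hostname is a placeholder, and IRSA on the build pod's ServiceAccount still has to grant ECR push permissions; see the AWS EKS Deployment Guide for the full setup):

# Hypothetical config.json that delegates ECR auth to the ecr-login credential helper
cat > config.json <<'EOF'
{ "credHelpers": { "YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com": "ecr-login" } }
EOF

# Store it as the Secret referenced by the Kaniko pod above
kubectl create secret generic kaniko-ecr-auth -n infra \
  --from-file=config.json=config.json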

Common post-deploy issues

Two issues hit nearly every fresh EKS install of K8 Inspector. Both are benign — the pod stays Ready and /health returns 200 — but parts of the UI will show errors until they're fixed.

1. [AnomalyDetection] nodes is forbidden

In releases before 2.0.2, the bundled ClusterRole omits the core nodes resource and the metrics.k8s.io API group. Patch it in place; the errors stop on the next 60-second polling tick, with no pod restart needed:

kubectl patch clusterrole k8inspector-admin --type=json -p='[
  {"op":"add","path":"/rules/-","value":{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch"]}},
  {"op":"add","path":"/rules/-","value":{"apiGroups":["metrics.k8s.io"],"resources":["pods","nodes"],"verbs":["get","list"]}}
]'
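To confirm the patch took effect, you can impersonate the pod's ServiceAccount; the SA name and namespace below are assumed to match the earlier examples (infra / k8inspector):

# Both should print "yes" once the new rules are in place
kubectl auth can-i list nodes \
  --as=system:serviceaccount:infra:k8inspector
kubectl auth can-i list pods.metrics.k8s.io \
  --as=system:serviceaccount:infra:k8inspector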

2. UI "EKS Direct" → AccessDenied on sts:AssumeRoleWithWebIdentity

IRSA trust-policy mismatch: the IAM role pins one sub (e.g. …:k8inspector) but the pod runs as a different SA (e.g. …:k8inspector-v2).
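One way to confirm the mismatch before changing IAM is to compare what the pod actually presents with what the role trusts; the deployment, namespace, and role names below follow the proper-fix example further down:

# Which role and token file the IRSA webhook injected into the pod
kubectl exec -n infra deploy/k8inspector -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'

# Which sub values the role's trust policy currently accepts
aws iam get-role --role-name K8InspectorRole \
  --query 'Role.AssumeRolePolicyDocument.Statement[0].Condition' --output json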

Quick workaround (no IAM change needed): in the Setup Wizard, choose "Kubeconfig File" (or "Auto-detect") instead of "AWS EKS (Direct)" — the pod's in-cluster config is already loaded on startup, so the connection succeeds via the ServiceAccount token and never touches AWS APIs.

Proper fix (multi-SA trust policy):

# 1. Find what SA your pod runs as and what role its annotation points at:
SA=$(kubectl get deploy k8inspector -n infra \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'); echo "$SA"
kubectl get sa "$SA" -n infra \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'; echo

# 2. Update the trust policy to allow both old + new SA names
#    (sub becomes a list; old deployments keep working during migration):
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Federated": "arn:aws:iam::ACCOUNT:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_ID"},
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:aud": "sts.amazonaws.com",
        "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:sub": [
          "system:serviceaccount:infra:k8inspector",
          "system:serviceaccount:infra:k8inspector-v2"
        ]
      }
    }
  }]
}
EOF

aws iam update-assume-role-policy \
  --role-name K8InspectorRole \
  --policy-document file://trust-policy.json

kubectl rollout restart deployment k8inspector -n infra

Configure Secrets

K8 Inspector requires secure secrets for session management and encryption. Use your organization's secrets management solution.

Security Best Practice

Never commit secrets to Git or store them in plain text. Use one of the following approaches:

  • Kubernetes Secrets - Basic, but base64 encoded only
  • External Secrets Operator - Syncs from AWS/GCP/Azure (see the sketch after this list)
  • Sealed Secrets - Encrypted secrets in Git
  • HashiCorp Vault - Enterprise secrets management
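As an illustration of the External Secrets Operator route, a minimal ExternalSecret might look like the sketch below. The store name and remote key paths are assumptions for this example, not values shipped with K8 Inspector; the target Secret name matches the one created in the steps that follow.

# Illustrative ExternalSecret (assumes a ClusterSecretStore named aws-secrets-manager already exists)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: k8inspector-secrets
  namespace: k8inspector
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: k8inspector-secrets        # Secret the deployment will consume
  data:
  - secretKey: SESSION_SECRET
    remoteRef:
      key: k8inspector/session-secret    # assumed path in your secrets manager
  - secretKey: ENCRYPTION_KEY
    remoteRef:
      key: k8inspector/encryption-key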

1. Create Namespace

kubectl create namespace k8inspector

2. Create Kubernetes Secrets

License Key Not Required Here

The license key is entered through the in-app Setup Wizard after deployment - you don't need to include it in environment variables. Only the encryption keys below are required.

# Generate and create secrets (LICENSE_KEY added via app Setup Wizard)
kubectl create secret generic k8inspector-secrets \
  --namespace=k8inspector \
  --from-literal=SESSION_SECRET=$(openssl rand -hex 32) \
  --from-literal=ENCRYPTION_KEY=$(openssl rand -hex 16)

3. Configure Image Pull Secret (if using private registry)

# For ECR (if not using IRSA)
kubectl create secret docker-registry ecr-credentials \
  --namespace=k8inspector \
  --docker-server=YOUR_ACCOUNT.dkr.ecr.YOUR_REGION.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password)
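Creating the secret alone doesn't make pods use it; the deployment's ServiceAccount also needs to reference it. A quick sketch, assuming the ServiceAccount is named k8inspector (check k8s-manifests/serviceaccount.yaml for the actual name):

# Attach the pull secret to the ServiceAccount so new pods can pull from the private registry
kubectl patch serviceaccount k8inspector -n k8inspector \
  -p '{"imagePullSecrets": [{"name": "ecr-credentials"}]}'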

Deploy to Kubernetes

Deploy K8 Inspector to your cluster using the included Kubernetes manifests or Helm chart.

1. Update kustomization.yaml with your image

# Edit k8s-manifests/kustomization.yaml
# Update the image to your registry:
images:
- name: k8inspector
  newName: YOUR_REGISTRY/k8inspector
  newTag: "2.0.1"

2. Deploy with Kustomize

# Apply the manifests
kubectl apply -k k8s-manifests/

# Watch deployment progress
kubectl rollout status deployment/k8inspector -n k8inspector

# Verify pods are running
kubectl get pods -n k8inspector

Install with Helm

# Install K8 Inspector
helm install k8inspector ./helm/k8inspector \
  --namespace k8inspector \
  --create-namespace \
  --set image.repository=YOUR_REGISTRY/k8inspector \
  --set image.tag=2.0.1 \
  --set secrets.existingSecret=k8inspector-secrets

# Check status
helm status k8inspector -n k8inspector

Apply manifests individually

# Apply each manifest
kubectl apply -f k8s-manifests/namespace.yaml
kubectl apply -f k8s-manifests/serviceaccount.yaml
kubectl apply -f k8s-manifests/rbac.yaml
kubectl apply -f k8s-manifests/deployment.yaml
kubectl apply -f k8s-manifests/service.yaml

# Optionally apply ingress
kubectl apply -f k8s-manifests/ingress.yaml

Access Your Deployment

Once you've completed the previous steps, use one of these methods to access K8 Inspector in your cluster.

This is a Manual Process

This wizard provides guidance and commands to copy - it does not automatically deploy anything. You'll need to run the commands from the previous steps in your terminal before proceeding.

Quick Access via Port Forward

# Port forward to local machine
kubectl port-forward -n k8inspector svc/k8inspector 3030:3030

# Open in browser
open http://localhost:3030

Port forwarding is great for quick access but won't persist. Configure Ingress for permanent access.

Configure Ingress (Recommended for Production)

# Edit k8s-manifests/ingress.yaml with your hostname
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8inspector
  namespace: k8inspector
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - k8inspector.your-domain.com
    secretName: k8inspector-tls
  rules:
  - host: k8inspector.your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8inspector
            port:
              number: 3030

Expose via Load Balancer

# Change service type to LoadBalancer
kubectl patch svc k8inspector -n k8inspector -p '{"spec": {"type": "LoadBalancer"}}'

# Get external IP (may take a few minutes)
kubectl get svc k8inspector -n k8inspector -w

License Activation

Need a license key? Visit agencio.app to purchase or start a free trial. Enter the key in Settings after logging in.

License keys are validated directly with LemonSqueezy - no API key required on your end.

Cluster Connection (In-Cluster)

When running inside your Kubernetes cluster, select "Kubeconfig File" in the Setup Wizard. This automatically detects the pod's ServiceAccount for authentication.

Use "AWS EKS (Direct)" only when connecting from outside the cluster.

Verify Your Deployment

Before accessing K8 Inspector, confirm your pods are running:

kubectl get pods -n k8inspector
kubectl logs -n k8inspector deployment/k8inspector
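If the pods are Running, you can optionally probe the /health endpoint (the same one referenced in the troubleshooting notes above) through a temporary port-forward:

# Port-forward in the background, probe /health, then clean up
kubectl port-forward -n k8inspector svc/k8inspector 3030:3030 &
PF_PID=$!
sleep 2
curl -s -o /dev/null -w "HTTP=%{http_code}\n" http://localhost:3030/health
kill $PF_PID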

Documentation

Reference guides included in this distribution package.

Quick Start Guide

Overview and quick start instructions

Troubleshooting

Common issues and solutions

User Setup Guide

First-time setup and configuration

Feature Matrix

Features by license tier

Kubernetes Deployment

Detailed K8s deployment guide

Licensing

License types and activation

Cloud-Specific Guides

  • AWS EKS
  • Azure AKS
  • Google GKE