# Cross-VPC Connectivity
Two mechanisms connect the shared-services VPC (10.0.0.0/16) to the dev VPC (10.1.0.0/16): the Tailscale mesh (primary, for servers and humans) and VPC peering (for EKS pods, which cannot run Tailscale).
## The Two Mechanisms
| Mechanism | What Uses It | How |
|---|---|---|
| Tailscale mesh | Servers, Ansible, Alloy agents, developer laptops | Subnet routers in each VPC advertise routes; WireGuard encrypts traffic |
| VPC peering (pcx-0535aabbb2629e915) | EKS runner pods, CI/CD jobs running in Kubernetes | Direct VPC routing; no Tailscale needed; combined with private Route53 zones for DNS |
## Tailscale Cross-VPC Traffic Flow
When a dev server (e.g., orchestrator-dev-cwiq-io) contacts Authentik (sso.shared.cwiq.io):
1. orchestrator-dev-cwiq-io (10.1.35.46) queries DNS for sso.shared.cwiq.io; the Route53 shared-internal zone (associated with the dev VPC) returns the NLB's private IP.
2. The Tailscale client on orchestrator-dev checks its route table: 10.0.0.0/16 → via subnet router 10.1.40.x (Tailscale IP: 100.x.x.x).
3. Traffic is encrypted with WireGuard and travels to the tailscale-shared router.
4. tailscale-shared (SNAT mode) translates the source IP from 10.1.35.46 to 10.0.12.x (the router's VPC IP) and forwards the packet to the Authentik NLB.
5. Authentik sees the source as 10.0.12.x (the subnet router CIDR); its security group allows ingress from 10.0.12.0/26.
This is why security groups for shared-services resources only need to allow 10.0.12.0/26 (the router subnet), not the entire dev VPC CIDR.
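The SNAT step is what lets security groups stay narrow. A minimal Python sketch of the idea, using only the standard library — the router's exact VPC IP (10.0.12.10 here) is hypothetical; the CIDRs come from this page:

```python
from ipaddress import ip_address, ip_network

ROUTER_SUBNET = ip_network("10.0.12.0/26")   # tailscale-shared router subnet
ROUTER_VPC_IP = ip_address("10.0.12.10")     # hypothetical router VPC IP

def snat(packet: dict) -> dict:
    """Model the subnet router's SNAT step: rewrite the source IP
    to the router's own VPC address before forwarding."""
    return {**packet, "src": ROUTER_VPC_IP}

def sg_allows(packet: dict) -> bool:
    """Model a shared-services security group: only the router
    subnet (10.0.12.0/26) needs to be allowed, not 10.1.0.0/16."""
    return packet["src"] in ROUTER_SUBNET

original = {"src": ip_address("10.1.35.46"), "dst": ip_address("10.0.10.5")}
forwarded = snat(original)

print(sg_allows(original))   # False: raw dev VPC source is not in the rule
print(sg_allows(forwarded))  # True: post-SNAT source falls in 10.0.12.0/26
```

The sketch shows why widening the rule to 10.1.0.0/16 is never necessary on the Tailscale path: every packet arriving from the dev side carries the router's source address.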
## VPC Peering Cross-VPC Traffic Flow
When an EKS runner pod executes a SonarQube scan:
1. The pod (IP 10.1.34.50, VPC CNI, no Tailscale) queries DNS for sonarqube.shared.cwiq.io; the private Route53 zone (associated with the dev VPC) returns 10.0.10.8 (a VPC private IP).
2. The packet (src=10.1.34.50, dst=10.0.10.8) matches the dev VPC route table entry 10.0.0.0/16 → pcx-0535aabbb2629e915 (VPC peering).
3. The shared-services VPC receives the packet; its route table entry 10.1.0.0/16 → pcx-0535aabbb2629e915 provides the return path.
4. SonarQube's security group allows ingress on port 9000 from 10.1.0.0/16.
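Step 2 is an ordinary longest-prefix route lookup. A simplified sketch of the dev VPC table (the real tables live in Terraform; only these two routes are modeled):

```python
from ipaddress import ip_address, ip_network

# Simplified dev VPC route table: (destination CIDR, target).
DEV_ROUTES = [
    (ip_network("10.1.0.0/16"), "local"),                   # intra-VPC
    (ip_network("10.0.0.0/16"), "pcx-0535aabbb2629e915"),   # VPC peering
]

def next_hop(dst: str) -> str:
    """Longest-prefix match over the route table, as the VPC router does."""
    matches = [(net, tgt) for net, tgt in DEV_ROUTES if ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.10.8"))   # → "pcx-0535aabbb2629e915" (SonarQube)
print(next_hop("10.1.34.50"))  # → "local" (another dev VPC address)
```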
## Decision Guide: Tailscale vs VPC Peering
| Scenario | Use | Configuration needed |
|---|---|---|
| Server SSH access | Tailscale | Tailscale client on instance |
| Alloy log/metric push | Tailscale (via Tailscale hostname) | ACL rules for ports 3100, 9009 |
| Ansible playbook execution | Tailscale | SSH via ansible-shared-cwiq-io |
| GitLab runner (Kubernetes pod) → Nexus | VPC peering + private DNS | Private Route53 record, SG allowing dev VPC CIDR |
| GitLab runner (Kubernetes pod) → SonarQube | VPC peering + private DNS | Private Route53 record 10.0.10.8 |
| GitLab runner (Kubernetes pod) → Vault | VPC peering | SG allowing dev VPC CIDR on port 8200 |
| CI deploy-dev SSH | VPC private IP (10.1.35.46) | No Tailscale; the EKS pod uses VPC peering |
| LangFuse Docker DNS resolution | Tailscale IP in Route53 | Private zone must use Tailscale IP, not VPC IP |
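The pattern in the table reduces to one question: can the workload run a Tailscale client? A toy helper illustrating that split — the workload category names are assumptions for this sketch, not identifiers from the infrastructure:

```python
def connectivity_mechanism(workload: str) -> str:
    """Toy decision helper mirroring the table above.
    EKS pods cannot run Tailscale, so anything Kubernetes-hosted
    must use VPC peering plus private Route53 DNS."""
    if workload in {"eks-pod", "k8s-ci-job"}:
        return "vpc-peering + private DNS"
    if workload in {"server", "laptop", "ansible", "alloy"}:
        return "tailscale"
    raise ValueError(f"unknown workload: {workload}")

print(connectivity_mechanism("eks-pod"))  # → "vpc-peering + private DNS"
print(connectivity_mechanism("server"))   # → "tailscale"
```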
## Important: LangFuse DNS Exception
langfuse.dev.cwiq.io points to the Tailscale IP (100.119.26.88) rather than the VPC private IP. This is intentional: Docker containers on the orchestrator dev server use the hostname to contact LangFuse. Docker bridge networks cannot reach VPC private IPs, but can reach Tailscale IPs.
This is the exception to the general rule that private Route53 zones use VPC private IPs.
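Tailscale assigns node addresses from the CGNAT range 100.64.0.0/10, so the exception is easy to spot mechanically. A quick sketch confirming the LangFuse record is a Tailscale IP rather than a dev VPC private IP:

```python
from ipaddress import ip_address, ip_network

TAILSCALE_RANGE = ip_network("100.64.0.0/10")  # Tailscale's CGNAT block
DEV_VPC = ip_network("10.1.0.0/16")

langfuse_record = ip_address("100.119.26.88")  # langfuse.dev.cwiq.io

print(langfuse_record in TAILSCALE_RANGE)  # True: a Tailscale node IP
print(langfuse_record in DEV_VPC)          # False: not a VPC private IP
```

A check like this could serve as a sanity test if the record is ever migrated: every other private-zone record should fail the first test and pass the second.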
## VPC Peering Route Tables
Both VPCs have explicit routes for the peering connection:
| VPC | Route Table Destination | Target |
|---|---|---|
| Dev (10.1.0.0/16) | 10.0.0.0/16 | pcx-0535aabbb2629e915 |
| Shared-Services (10.0.0.0/16) | 10.1.0.0/16 | pcx-0535aabbb2629e915 |
These are managed in Terraform under the VPC networking modules.
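The invariant worth preserving is symmetry: each side must route the other VPC's CIDR to the same peering connection, or return traffic is dropped. A small sketch of that check, using only the CIDRs and pcx ID from the table above:

```python
from ipaddress import ip_network

PCX = "pcx-0535aabbb2629e915"

# Peering routes from the table above, keyed by the owning VPC's CIDR.
ROUTES = {
    ip_network("10.1.0.0/16"): {ip_network("10.0.0.0/16"): PCX},  # dev
    ip_network("10.0.0.0/16"): {ip_network("10.1.0.0/16"): PCX},  # shared
}

def peering_is_symmetric(vpc_a, vpc_b) -> bool:
    """True only if both VPCs route the other's CIDR via the same pcx."""
    return ROUTES[vpc_a].get(vpc_b) == PCX and ROUTES[vpc_b].get(vpc_a) == PCX

print(peering_is_symmetric(ip_network("10.1.0.0/16"),
                           ip_network("10.0.0.0/16")))  # True
```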
## Related Pages
- Tailscale Overview — Subnet router architecture and SNAT mode
- MagicDNS — Hostname conventions: dashes vs dots
- ACL Tags — ACL rules enabling cross-VPC access
- EKS Cluster — Why EKS pods cannot use Tailscale
- Route53 DNS — Private zone association for cross-account DNS