NoRouter: IP-over-Stdio. The easiest multi-host & multi-cloud networking ever. No root privilege is required.

NoRouter (IP-over-Stdio) is the easiest multi-host & multi-cloud networking ever:

- Works with any host that can be reached over a stdio stream (e.g. `docker exec`, `kubectl exec`, `ssh`)
- No root privilege is required (e.g. `sudo`, `docker run --privileged`)

Web site: https://norouter.io/
NoRouter implements unprivileged networking by using multiple loopback addresses such as 127.0.42.101 and 127.0.42.102. The hosts in the network are connected by forwarding packets over stdio streams such as `docker exec`, `kubectl exec`, and `ssh`.

Unlike traditional port forwarders such as `docker run -p`, `kubectl port-forward`, `ssh -L`, and `ssh -R`, NoRouter provides mutual interconnectivity across multiple remote hosts.
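To illustrate the IP-over-stdio idea, here is a hypothetical Python sketch (this is not NoRouter's actual wire protocol): packets are sent as length-prefixed frames over a child process's stdin/stdout. The child here is a local subprocess, but it could equally be launched via `docker exec` or `ssh`.

```python
# Toy illustration of IP-over-stdio: length-prefixed frames over a
# child process's stdin/stdout. NOT NoRouter's actual protocol.
import struct
import subprocess
import sys

def write_frame(stream, payload: bytes) -> None:
    # 4-byte big-endian length header, then the raw payload.
    stream.write(struct.pack(">I", len(payload)) + payload)
    stream.flush()

def read_frame(stream) -> bytes:
    header = stream.read(4)
    (length,) = struct.unpack(">I", header)
    return stream.read(length)

if __name__ == "__main__":
    # The "remote" side: a child process that echoes every frame back.
    child_code = (
        "import sys, struct\n"
        "while True:\n"
        "    h = sys.stdin.buffer.read(4)\n"
        "    if not h: break\n"
        "    n, = struct.unpack('>I', h)\n"
        "    p = sys.stdin.buffer.read(n)\n"
        "    sys.stdout.buffer.write(h + p); sys.stdout.buffer.flush()\n"
    )
    child = subprocess.Popen([sys.executable, "-c", child_code],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    write_frame(child.stdin, b"pretend this is an IP packet")
    print(read_frame(child.stdout))
    child.stdin.close()
    child.wait()
```

Because the transport is just a byte stream, anything that can carry stdio can carry the network.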
NoRouter is mostly expected to be used in a dev environment for running heterogeneous multi-cloud apps.
e.g. an environment composed of local machines, containers, and VMs spread across multiple clouds.
For production environments, setting up VPNs rather than NoRouter would be the right choice.
The binaries are available at https://github.com/norouter/norouter/releases . See also Getting Started.

Install the `norouter` binary to all the hosts:

- Run `norouter show-installer` to show an installation script.
- Run `norouter show-example` to show an example manifest.
- Run `norouter <FILE>` to start NoRouter with the specified manifest YAML file.

Example 1: Run `norouter <FILE>` with the following YAML file:
```yaml
hosts:
  # localhost
  local:
    vip: "127.0.42.100"
  # Docker & Podman container (docker exec, podman exec)
  docker:
    cmd: "docker exec -i some-container norouter"
    vip: "127.0.42.101"
    ports: ["8080:127.0.0.1:80"]
    # Writing /etc/hosts is possible on most Docker and Kubernetes containers
    writeEtcHosts: true
  # Kubernetes Pod (kubectl exec)
  kube:
    cmd: "kubectl --context=some-context exec -i some-pod -- norouter"
    vip: "127.0.42.102"
    ports: ["8080:127.0.0.1:80"]
    # Writing /etc/hosts is possible on most Docker and Kubernetes containers
    writeEtcHosts: true
  # LXD container (lxc exec)
  lxd:
    cmd: "lxc exec some-container -- norouter"
    vip: "127.0.42.103"
    ports: ["8080:127.0.0.1:80"]
  # SSH
  # If your key has a passphrase, make sure to configure ssh-agent
  # so that NoRouter can log in to the remote host automatically.
  ssh:
    cmd: "ssh [email protected] -- norouter"
    vip: "127.0.42.104"
    ports: ["8080:127.0.0.1:80"]
```
In this example, 127.0.42.101:8080 on each host is forwarded to port 80 of the Docker container.
Try:

```console
$ curl http://127.0.42.101:8080
$ docker exec some-container curl http://127.0.42.101:8080
$ kubectl --context=some-context exec some-pod -- curl http://127.0.42.101:8080
$ lxc exec some-container -- curl http://127.0.42.101:8080
$ ssh [email protected] -- curl http://127.0.42.101:8080
```
Similarly, 127.0.42.102:8080 is forwarded to port 80 of the Kubernetes Pod, 127.0.42.103:8080 is forwarded to port 80 of the LXD container, and 127.0.42.104:8080 is forwarded to port 80 of some-ssh-host.example.com.
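Each `ports` entry such as `"8080:127.0.0.1:80"` exposes port 8080 on that host's virtual IP and forwards connections to 127.0.0.1:80 on the host itself. As a rough illustration of what such a forwarder does, here is a hypothetical single-connection Python sketch (not NoRouter's implementation):

```python
# Hypothetical sketch of a one-connection TCP forwarder, illustrating
# what a ports entry like "8080:127.0.0.1:80" means conceptually:
# accept on one address and relay bytes to another. Not NoRouter's code.
import socket
import threading

def serve_forward(listen_addr, connect_addr):
    """Bind listen_addr and relay the first connection to connect_addr.

    Returns the bound (ip, port), so callers may pass port 0."""
    srv = socket.create_server(listen_addr)
    bound = srv.getsockname()

    def pump(src, dst):
        # Copy bytes until src reaches EOF, then propagate the close.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
            dst.close()
        except OSError:
            pass

    def run():
        conn, _ = srv.accept()
        upstream = socket.create_connection(connect_addr)
        threading.Thread(target=pump, args=(conn, upstream),
                         daemon=True).start()
        pump(upstream, conn)
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return bound
```

A real forwarder would accept many connections concurrently; the shape of the data path is the same.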
Example 2: HTTP proxy for `docker network create` networks

This example shows the steps to use NoRouter for creating an HTTP proxy that works like a VPN router, connecting clients into `docker network create` networks. This technique also works with remote Docker, rootless Docker, Docker for Mac, and even with Podman. Read `docker` as `podman` for the usage with Podman.
First, create a Docker network named "foo", and create an nginx container named "nginx" in it:

```console
$ docker network create foo
$ docker run -d --name nginx --hostname nginx --network foo nginx:alpine
```

Then, create a "bastion" container in the same network, and install NoRouter into it:

```console
$ docker run -d --name bastion --network foo alpine sleep infinity
$ norouter show-installer | docker exec -i bastion sh
```
Launch `norouter example2.yaml` with the following YAML:
```yaml
hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
    loopback:
      disable: true
  bastion:
    cmd: "docker exec -i bastion /root/bin/norouter"
    vip: "127.0.42.101"
routes:
- via: bastion
  to: ["0.0.0.0/0", "*"]
```
The "nginx" container can now be reached from the host as follows:

```console
$ export http_proxy=http://127.0.0.1:18080
$ curl http://nginx
```
If you are using Podman, try `curl http://nginx.dns.podman` rather than `curl http://nginx`.
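curl is not the only client that can use the proxy; any program that honors the standard `http_proxy` environment variable works. A minimal Python sketch (the address 127.0.0.1:18080 is the `listen` address from the manifest above):

```python
# Any HTTP client that honors the http_proxy environment variable can
# send its traffic through NoRouter's proxy.
import os
import urllib.request

os.environ["http_proxy"] = "http://127.0.0.1:18080"

# urllib picks the proxy up from the environment automatically:
print(urllib.request.getproxies().get("http"))

# With NoRouter running, this request would be routed via the bastion:
# urllib.request.urlopen("http://nginx").read()
```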
Example 2 can also be applied to Kubernetes clusters, just by replacing `docker exec` with `kubectl exec`.

```console
$ export http_proxy=http://127.0.0.1:18080
$ curl http://nginx.default.svc.cluster.local
```
The following example provides an HTTP proxy that virtually aggregates VPCs of AWS, Azure, and GCP:
```yaml
hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
  aws_bastion:
    cmd: "ssh aws_bastion -- ~/bin/norouter"
    vip: "127.0.42.101"
  azure_bastion:
    cmd: "ssh azure_bastion -- ~/bin/norouter"
    vip: "127.0.42.102"
  gcp_bastion:
    cmd: "ssh gcp_bastion -- ~/bin/norouter"
    vip: "127.0.42.103"
routes:
- via: aws_bastion
  to:
  - "*.compute.internal"
- via: azure_bastion
  to:
  - "*.internal.cloudapp.net"
- via: gcp_bastion
  to:
  # Substitute "example-123456" with your own GCP project ID
  - "*.example-123456.internal"
```
The localhost can access all the remote hosts in these networks:

```console
$ export http_proxy=http://127.0.0.1:18080
$ curl http://ip-XXX-XXX-XX-XXX.ap-northeast-1.compute.internal
$ curl http://some-azure-host.internal.cloudapp.net
$ curl http://some-gcp-host.asia-northeast1-b.c.example-123456.internal
```
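Conceptually, each request's hostname is matched against the `to` patterns to decide which bastion proxies it. A hypothetical sketch of that lookup (not NoRouter's implementation), using shell-style wildcard matching:

```python
# Conceptual sketch of route resolution: pick the first route whose
# "to" pattern matches the requested hostname. Not NoRouter's code.
import fnmatch

# Mirrors the "routes" section of the manifest above.
ROUTES = [
    ("aws_bastion", ["*.compute.internal"]),
    ("azure_bastion", ["*.internal.cloudapp.net"]),
    ("gcp_bastion", ["*.example-123456.internal"]),
]

def route_for(hostname: str):
    """Return the bastion whose pattern matches hostname, else None."""
    for via, patterns in ROUTES:
        if any(fnmatch.fnmatch(hostname, p) for p in patterns):
            return via
    return None

print(route_for("ip-10-0-0-1.ap-northeast-1.compute.internal"))  # aws_bastion
```

A hostname that matches no route (e.g. a public site) is handled locally rather than sent to a bastion.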
To compile NoRouter from source:

```console
$ make
$ sudo make install
```

Sign off your commits with `git commit -s` and with your real name.

NoRouter is licensed under the terms of the Apache License, Version 2.0.
Copyright (C) NoRouter authors.