Networking in Kubernetes clusters is an abstracted implementation that can be configured on a per-cluster basis. Pharos supports a few different options for configuring the networking provider.
Common networking options
```yaml
network:
  provider: weave
  node_local_dns_cache: true
  service_cidr: 172.30.0.0/16
  pod_network_cidr: 172.31.0.0/16
  firewalld:
    enabled: true
  weave:
    trusted_subnets:
      - 10.10.0.0/16
```
`node_local_dns_cache` (optional)
Whether or not to run a node-local DNS cache as a daemonset on each node.
`service_cidr` (optional)
IP block to use for Service VIPs.
`pod_network_cidr` (optional)
IP block to use for pod networking.
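The service and pod ranges must not overlap with each other (or with the host networks), or services and pods will receive conflicting addresses. A quick sanity check, sketched with Python's standard `ipaddress` module using the values from the example above:

```python
import ipaddress

# Candidate ranges from cluster.yml (the values used in the example above).
service_cidr = ipaddress.ip_network("172.30.0.0/16")
pod_network_cidr = ipaddress.ip_network("172.31.0.0/16")

# The two ranges must be disjoint.
assert not service_cidr.overlaps(pod_network_cidr)
```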
Supported Network Providers
Weave is the default networking provider in Pharos clusters. Weave Net creates a virtual network that connects containers across multiple hosts and enables their automatic discovery. Weave also supports network policies.
If the cluster is deployed across multiple regions/data centers*, Weave networking is configured so that each node within a region connects to the other nodes in the same region through private interfaces/addresses. When nodes peer with nodes outside their own region, the peering uses the public addresses of the nodes. This configuration is fully dynamic and handled by an additional sidecar component in the networking deployment. Users just need to ensure the nodes have the proper region labels in place.
*) A node's region is determined by the value of its
`failure-domain.beta.kubernetes.io/region` label.
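One way to get the region label in place is via the host configuration in `cluster.yml`. A sketch, assuming the Pharos host-level `labels` option; the region value `eu-west-1` is purely illustrative:

```yaml
hosts:
  - address: 10.10.1.2
    role: master
    labels:
      failure-domain.beta.kubernetes.io/region: eu-west-1
```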
```yaml
network:
  provider: weave
  #service_cidr: 172.30.0.0/16
  #pod_network_cidr: 172.31.0.0/16
  #weave:
  #  trusted_subnets:
  #    - 10.10.0.0/16
  #  no_masq_local: true
  #  password: ./weave_passwd
  #  known_peers:
```
`password` (optional)
Path to a file that contains the shared secret for the Weave network. If not set, a shared secret is generated by Kontena Pharos.
`trusted_subnets` (optional)
An array of trusted subnets where the overlay network can be used without IPsec. By default Weave creates secure tunnels between nodes using IPsec. In environments where the node-to-node networking is secure and trusted, you can disable IPsec tunneling for better performance.
`no_masq_local` (optional)
Whether to preserve the client source IP address when accessing a Service with
`service.spec.externalTrafficPolicy=Local` set. For more information, see: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport
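For reference, a Service that opts into source-IP preservation looks roughly like the manifest below. This is a standard Kubernetes manifest, not Pharos-specific, and the names are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  type: NodePort
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: example
  ports:
    - port: 80
```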
`known_peers` (optional)
An array of known peer addresses which are used for the initial network setup. This is only needed when building multi-cluster pod networks.
A subnet that is used for pod IP address allocation. The specified subnet needs to be within
`network.pod_network_cidr`. This is usually only needed when building multi-cluster pod networks.
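When partitioning `network.pod_network_cidr` across clusters, each cluster's allocation subnet has to fall inside the shared range. A sketch of carving a /16 into per-cluster /18 blocks with Python's `ipaddress` module (the /18 split size is an arbitrary example):

```python
import ipaddress

pod_network_cidr = ipaddress.ip_network("172.31.0.0/16")

# Split the shared range into four /18 blocks, one per cluster.
cluster_subnets = list(pod_network_cidr.subnets(new_prefix=18))

for subnet in cluster_subnets:
    # Every per-cluster block must sit inside pod_network_cidr.
    assert subnet.subnet_of(pod_network_cidr)
    print(subnet)
```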
Calico creates and manages a flat layer 3 network, assigning each workload a fully routable IP address. Workloads can communicate without IP encapsulation or network address translation, for bare-metal performance, easier troubleshooting, and better interoperability. In environments that require an overlay, Calico uses IP-in-IP tunneling.
```yaml
network:
  provider: calico
  #pod_network_cidr: 172.30.0.0/16
  #service_cidr: 172.31.0.0/16
  #calico:
  #  ipip_mode: CrossSubnet
  #  mtu: 1500
  #  environment:
  #    FELIX_PROMETHEUSMETRICSENABLED: true
  #    FELIX_PROMETHEUSMETRICSPORT: 9999
```
`Always` (default) - Calico will route using IP-in-IP for all traffic originating from a Calico-enabled host to all Calico-networked containers and VMs within the IP pool.
`Never` - Never use IP-in-IP encapsulation.
`CrossSubnet` - IP-in-IP encapsulation can also be performed selectively, only for traffic crossing subnet boundaries. This provides better performance in AWS multi-AZ deployments, and in general when deploying on networks where pools of nodes with L2 connectivity are connected via a router.
For more details on IP-in-IP configuration and usability, see https://docs.projectcalico.org/v3.3/usage/configuration/ip-in-ip.
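The CrossSubnet rule can be pictured as: encapsulate only when the destination node sits outside the local L2 subnet. A rough illustration in Python (not Calico's actual implementation, just the decision logic):

```python
import ipaddress

def needs_ipip(src_ip, dst_ip, node_subnet):
    """CrossSubnet-style decision: tunnel only across subnet boundaries."""
    net = ipaddress.ip_network(node_subnet)
    same_l2 = (ipaddress.ip_address(src_ip) in net
               and ipaddress.ip_address(dst_ip) in net)
    return not same_l2

# Nodes on the same L2 segment: route natively, no encapsulation.
print(needs_ipip("10.0.1.5", "10.0.1.9", "10.0.1.0/24"))   # False
# Destination behind a router: wrap in IP-in-IP.
print(needs_ipip("10.0.1.5", "10.0.2.7", "10.0.1.0/24"))   # True
```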
Depending on the environment Pharos and Calico are being deployed into, it may be helpful or even necessary to configure the MTU of the pod networking managed by Calico. To figure out what MTU size to use, consult your networking provider's documentation. The Calico MTU defaults to a conservative 1500, which should work in most environments although it might not provide optimal performance. For further info, see also the Calico MTU docs.
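When IP-in-IP is in use, the pod MTU should account for the 20-byte encapsulation header, per Calico's MTU guidance. A small helper sketch (the function name and defaults are illustrative, not a Pharos API):

```python
# IP-in-IP encapsulation adds a 20-byte outer IP header.
IPIP_OVERHEAD = 20

def calico_mtu(host_mtu, ipip_enabled=True):
    """Suggested pod-network MTU given the host interface MTU."""
    return host_mtu - IPIP_OVERHEAD if ipip_enabled else host_mtu

print(calico_mtu(1500))                       # 1480 on standard Ethernet with IP-in-IP
print(calico_mtu(9000))                       # 8980 on a jumbo-frame network
print(calico_mtu(1500, ipip_enabled=False))   # 1500 when no encapsulation is used
```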
Whether or not Calico should apply NAT on the Kubernetes nodes to outgoing packets from pods.
Optional environment variables to set for the Calico node daemonset. For possible values, see the Calico reference docs: https://docs.projectcalico.org/v3.5/reference/
Enabling Prometheus metrics
Enabling Prometheus metrics happens via environment variables as specified in the example. This also automatically creates a "headless" service for metrics discovery.
Note: The Calico node daemonset runs in the host network namespace, so make sure the port you choose is free on all the hosts.
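The automatically created headless service generally has the shape sketched below. This is only to show the idea; Pharos creates the actual service for you, and the name and selector here are assumptions. `clusterIP: None` is what makes it headless (DNS resolves directly to pod IPs), and the port matches `FELIX_PROMETHEUSMETRICSPORT` from the example above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: calico-node-metrics   # illustrative name
  namespace: kube-system
spec:
  clusterIP: None             # headless: DNS returns pod IPs directly
  selector:
    k8s-app: calico-node      # assumed daemonset label
  ports:
    - name: metrics
      port: 9999
```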