
Why do you need Kubernetes and Docker, and what can they do?

Kubernetes is an open source platform that automates Linux container operations.
It eliminates many of the manual processes involved in deploying and scaling containerized applications.
In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.
These clusters can span hosts across public, private, or hybrid clouds.
For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling,
like real-time data streaming through Apache Kafka.

Kubernetes was originally developed and designed by engineers at Google.
Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers.
Fun Fact: This is the technology behind Google’s cloud services.

Why do you need Kubernetes?

Real production apps span multiple containers.
Those containers must be deployed across multiple server hosts. Security for containers is multilayered and can be complicated.
That's where Kubernetes can help.
Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads.
Kubernetes orchestration allows you to build application services that span multiple containers,
schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.
With Kubernetes you can take real steps towards better IT security.

Kubernetes also needs to integrate with networking, storage, security, telemetry and other services to provide a comprehensive container infrastructure.

Kubernetes fixes a lot of common problems with container proliferation by sorting containers together into “pods”.
Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services—like networking and storage—to those containers.
Other parts of Kubernetes help you load balance across these pods and ensure you have the right number of containers running to support your workloads.
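As a minimal sketch of that grouping, the hypothetical Pod below runs two containers that share the Pod’s network namespace and a scratch volume; every name and image is an illustrative assumption:

```yaml
# A hypothetical two-container Pod: both containers share the Pod's
# network namespace and a scratch volume, so they can cooperate closely.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper   # illustrative name
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```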

With the right implementation of Kubernetes — and with the help of other open source projects like Atomic Registry, Open vSwitch, heapster, OAuth, and SELinux — you can orchestrate all parts of your container infrastructure.

What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud,
is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines.
More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments.
And because Kubernetes is all about automation of operational tasks, you can do many of the same things that other application platforms or management systems let you do, but for your containers.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize the resources available to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running how you deployed them (see the sketch after this list).
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.
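As a sketch of the declarative model behind the last two points, the hypothetical Deployment below asks for three replicas of a web container with a liveness probe; Kubernetes then works continuously to keep the cluster matching that spec, restarting containers that fail the probe. All names and images are illustrative:

```yaml
# Hypothetical Deployment: declares the desired state (three replicas,
# a liveness probe); Kubernetes continuously reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web              # illustrative name
spec:
  replicas: 3                  # scale up or down by editing this field
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # illustrative image
          ports:
            - containerPort: 80
          livenessProbe:       # failing checks trigger automatic restarts
            httpGet:
              path: /
              port: 80
```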

Kubernetes & Docker Best Practices
What about Docker?

Docker helps you create and deploy software within containers.
It’s an open source collection of tools that help you “Build, Ship, and Run any App, Anywhere”.
Yes, it really is as magical as it sounds.

With Docker, you create a special file called a Dockerfile.
Dockerfiles define a build process, which, when fed to the ‘docker build’ command, produces an immutable Docker image.
You can think of this as a snapshot of your application, ready to be brought to life at any time.
When you want to start it up, just use the ‘docker run’ command to run it anywhere the Docker daemon is supported and running.
That can be your laptop, your production server in the cloud, or a Raspberry Pi. Regardless of where your image is running, it will behave the same way.
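To make that concrete, here is a minimal, hypothetical Dockerfile for a static site; the base image and paths are assumptions, not a prescribed layout:

```dockerfile
# Hypothetical minimal Dockerfile: package a static site into an nginx image.
FROM nginx:1.25
COPY ./public /usr/share/nginx/html
EXPOSE 80
```

Building it with ‘docker build -t myuser/hello-web:1.0 .’ produces the image; ‘docker run -p 8080:80 myuser/hello-web:1.0’ starts it on any machine running the Docker daemon, and ‘docker push myuser/hello-web:1.0’ publishes it to a registry (the ‘myuser’ repository name is an assumption; more on registries below).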

Docker also provides a cloud-based repository called Docker Hub.
You can think of it like GitHub for Docker images.
You can use Docker Hub to store and distribute the container images you build.

As previously mentioned, Docker and Kubernetes work at different levels.
Under the hood, Kubernetes can integrate with the Docker engine to coordinate the scheduling and execution of Docker containers on each node through the kubelet.
The Docker engine itself is responsible for running the actual container image built by running ‘docker build’.
Higher-level concerns such as service discovery, load balancing, and network policies are handled by Kubernetes as well.

When used together, both Docker and Kubernetes are great tools for developing a modern cloud architecture, but they are fundamentally different at their core.
It is important to understand the high-level differences between the technologies when building your stack.

Our CTO Nimrod Levy has compiled 9 Kubernetes Security Best Practices just for our readers.

Follow these recommendations for a more secure Kubernetes cluster.

  1. Upgrade to the Latest Version
    New security features — and not just bug fixes — are added in every quarterly update, and to take advantage of them, we recommend you run the latest stable version.
    Upgrades and support can become more difficult the farther behind you fall, so plan to upgrade at least once per quarter. Using a managed Kubernetes provider can make upgrades very easy.
  2. Enable Role-Based Access Control (RBAC)
    Control who can access the Kubernetes API and what permissions they have with Role-Based Access Control (RBAC).
    RBAC is usually enabled by default in Kubernetes 1.6 and beyond (later for some managed providers),
    but if you have upgraded since then and haven’t changed your configuration, you’ll want to double-check your settings.
    Because of the way Kubernetes authorization controllers are combined, you must both enable RBAC and disable legacy Attribute-Based Access Control (ABAC).

    If your application needs access to the Kubernetes API, create service accounts individually and give them the smallest set of permissions needed at each use site.
    This is better than granting overly broad permissions to the default account for a namespace. Most applications don’t need to access the API at all; `automountServiceAccountToken` can be set to “false” for these.
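    As a sketch, the manifests below create a dedicated service account that can only read Pods in its own namespace; every name here is an illustrative assumption:

    ```yaml
    # Dedicated ServiceAccount with token automounting disabled by default.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: report-reader        # illustrative name
      namespace: reporting
    automountServiceAccountToken: false
    ---
    # Role granting read-only access to Pods in the same namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: reporting
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    # Bind the Role to the ServiceAccount and nothing else.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: report-reader-pod-reader
      namespace: reporting
    subjects:
      - kind: ServiceAccount
        name: report-reader
        namespace: reporting
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    ```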
  3. Use Namespaces to Establish Security Boundaries
    Creating separate namespaces is an important first level of isolation between components.
    We find it’s much easier to apply security controls such as Network Policies when different types of workloads are deployed in separate namespaces.
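    For example, a hypothetical split between a general frontend and a more sensitive payments workload (names are illustrative):

    ```yaml
    # Two namespaces acting as a first isolation boundary.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: frontend
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: payments   # sensitive workloads live apart from the frontend
    ```

    Network Policies, RBAC roles, and resource quotas can then be scoped per namespace.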
  4. Separate Sensitive Workloads
    To limit the potential impact of a compromise, it’s best to run sensitive workloads on a dedicated set of machines.
    This approach reduces the risk of a sensitive application being accessed through a less-secure application that shares a container runtime or host.
    You can achieve this separation using node pools (in the cloud or on-premises) and Kubernetes namespaces, taints, tolerations, and other controls.
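    A sketch of that pattern, assuming the dedicated nodes carry a ‘dedicated=sensitive’ taint and a matching label (all names are illustrative):

    ```yaml
    # First, taint and label the dedicated nodes, e.g.:
    #   kubectl taint nodes node1 dedicated=sensitive:NoSchedule
    #   kubectl label nodes node1 dedicated=sensitive
    apiVersion: v1
    kind: Pod
    metadata:
      name: payments-api           # illustrative name
      namespace: payments
    spec:
      nodeSelector:
        dedicated: sensitive       # schedule only onto the labelled pool
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "sensitive"
          effect: "NoSchedule"     # permitted onto the tainted nodes
      containers:
        - name: api
          image: payments-api:1.0  # illustrative image
    ```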
  5. Secure Cloud Metadata Access
    Sensitive metadata, such as kubelet admin credentials, can sometimes be stolen or misused to escalate privileges in a cluster.
    For example, a recent Shopify bug bounty disclosure detailed how a user was able to escalate privileges by confusing a microservice into leaking information from the cloud provider’s metadata service.
    GKE’s metadata concealment feature changes the cluster deployment mechanism to avoid this exposure,
    and we recommend using it until it is replaced with a permanent solution.
    Similar countermeasures may be needed in other environments.
  6. Create and Define Cluster Network Policies
    Network Policies allow you to control network access into and out of your containerized applications.
    To use them, you’ll need to make sure that you have a networking provider that supports this resource;
    with some managed Kubernetes providers such as Google Kubernetes Engine (GKE),
    you’ll need to opt in. (Enabling network policies in GKE will require a brief rolling upgrade if your cluster already exists.)
    Once that’s in place, start with some basic default network policies, such as blocking traffic from other namespaces by default.
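    For instance, this illustrative policy admits ingress traffic only from Pods in the same namespace, so traffic from other namespaces is denied by default:

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-from-other-namespaces   # illustrative name
      namespace: payments
    spec:
      podSelector: {}            # applies to every Pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}    # any Pod in this namespace; others are blocked
    ```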
  7. Run a Cluster-wide Pod Security Policy
    A Pod Security Policy sets defaults for how workloads are allowed to run in your cluster.
    Consider defining a policy and enabling the Pod Security Policy admission controller — instructions vary depending on your cloud provider or deployment model.
    As a start, you could require that deployments drop the NET_RAW capability to defeat certain classes of network spoofing attacks.
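    A sketch of such a policy follows; note that PodSecurityPolicy was removed in Kubernetes 1.25, where Pod Security admission or an external policy engine fills the same role:

    ```yaml
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted             # illustrative name
    spec:
      privileged: false
      requiredDropCapabilities:
        - NET_RAW                  # defeats certain packet-spoofing attacks
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
        - configMap
        - secret
        - emptyDir
        - persistentVolumeClaim
    ```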
  8. Harden Node Security
    You can follow these three steps to improve the security posture on your nodes:

    • Ensure the host is secure and configured correctly. One way to do so is to check your configuration against CIS Benchmarks; many products feature an autochecker that will assess conformance with these standards automatically.
    • Control network access to sensitive ports. Make sure that your network blocks access to ports used by the kubelet, including 10250 and 10255, and consider limiting access to the Kubernetes API server except from trusted networks. Malicious users have abused access to these ports to run cryptocurrency miners in clusters that are not configured to require authentication and authorization on the kubelet API server.
    • Minimize administrative access to Kubernetes nodes. Access to the nodes in your cluster should generally be restricted — debugging and other tasks can usually be handled without direct access to the node.
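    On the kubelet side, a minimal sketch of a hardened KubeletConfiguration covering the points above (values shown are common hardening choices to adapt, not drop-in defaults):

    ```yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false     # reject unauthenticated requests on port 10250
      webhook:
        enabled: true      # authenticate tokens against the API server
    authorization:
      mode: Webhook        # authorize each request against the API server
    readOnlyPort: 0        # disable the unauthenticated read-only port (10255)
    ```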
  9. Turn on Audit Logging
    Make sure you have audit logs enabled and are monitoring them for anomalous or unwanted API calls, especially any authorization failures — these log entries will have a status message “Forbidden.”
    Authorization failures could mean that an attacker is trying to abuse stolen credentials.
    Managed Kubernetes providers, including GKE, provide access to this data in their cloud console and may allow you to set up alerts on authorization failures.
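    A minimal, illustrative audit policy that records request metadata for every call (enough to surface “Forbidden” responses) looks like this; it is passed to the kube-apiserver with the --audit-policy-file flag, alongside --audit-log-path for the log destination:

    ```yaml
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata    # log who did what, when, and the response status
    ```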

Looking Ahead

Remember, even after you follow these tips to configure your Kubernetes cluster securely, you will still need to build security into other aspects of your container configurations and their runtime operations.
As you improve the security of your tech stack, look for tools that provide a central point of governance for your container deployments and deliver continuous monitoring and protection for your containers and cloud-native applications.

Tags: Kubernetes, Information Security, Containers, Docker
