Kubernetes

Kubernetes (commonly abbreviated K8s) is an open-source container-orchestration system for automating application deployment, scaling, and management.

Originally developed by Google, it is now maintained by the Cloud Native Computing Foundation. Kubernetes aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service.


Kubernetes Objects

Kubernetes defines a set of building blocks ("primitives") that collectively provide mechanisms for deploying, maintaining, and scaling applications based on CPU, memory, or custom metrics. Kubernetes is loosely coupled and extensible, so it can accommodate different workloads. This extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as by extensions and containers that run on Kubernetes. The platform exerts its control over compute and storage resources by defining resources as Objects, which can then be managed as such. The main objects are described below.
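
Since every managed resource is exposed as an API object, the most direct way to see this model is to query the API. A minimal sketch using the official Kubernetes Python client (an assumption of this example), which requires the kubernetes package and a kubeconfig pointing at a reachable cluster:

 from kubernetes import client, config
 
 # Assumes ~/.kube/config points at a reachable cluster.
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 # Every managed resource is an API object; listing pods is the simplest read.
 for pod in v1.list_pod_for_all_namespaces(watch=False).items:
     print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)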

Pods

A pod is a higher-level abstraction that groups containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the same host machine and can share resources. The basic scheduling unit in Kubernetes is a pod.

Each pod in Kubernetes is assigned a unique Pod IP address within the cluster, which allows applications to use ports without the risk of conflict. Within a pod, all containers can reference each other on localhost, but a container in one pod has no way of directly addressing a container in another pod; for that, it has to use the Pod IP address. An application developer should never use the Pod IP address to reference or invoke a capability in another pod, because Pod IP addresses are ephemeral: the specific pod being referenced may be assigned a different Pod IP address when it restarts. Instead, a reference to a Service should be used, which holds a reference to the target pod at its current Pod IP address.

A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod. Pods can be managed manually through the Kubernetes API, or their management can be delegated to a controller. Such volumes are also the basis for the Kubernetes features of ConfigMaps (which provide access to configuration through the filesystem visible to the container) and Secrets (which provide access to credentials needed to reach remote resources securely, by placing those credentials on a filesystem visible only to authorized containers).
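
A minimal sketch of defining and creating a pod with the official Python client; the pod name, labels, and nginx image are illustrative, and a reachable cluster with a default namespace is assumed:

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 pod = client.V1Pod(
     metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
     spec=client.V1PodSpec(
         containers=[
             client.V1Container(
                 name="nginx",
                 image="nginx:1.25",  # illustrative image
                 ports=[client.V1ContainerPort(container_port=80)],
             )
         ]
     ),
 )
 created = v1.create_namespaced_pod(namespace="default", body=pod)
 print(created.metadata.name)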

ReplicaSets

The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

A ReplicaSet can also be described as a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod. The definition of a ReplicaSet uses a selector, whose evaluation identifies all pods that are associated with it.
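
As a rough illustration, a ReplicaSet that keeps three identical pods running, created with the Python client (the name, labels, and image are illustrative):

 from kubernetes import client, config
 
 config.load_kube_config()
 apps = client.AppsV1Api()
 
 rs = client.V1ReplicaSet(
     metadata=client.V1ObjectMeta(name="web-rs"),
     spec=client.V1ReplicaSetSpec(
         replicas=3,  # desired number of identical pods
         selector=client.V1LabelSelector(match_labels={"app": "web"}),
         template=client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(labels={"app": "web"}),
             spec=client.V1PodSpec(
                 containers=[client.V1Container(name="nginx", image="nginx:1.25")]
             ),
         ),
     ),
 )
 apps.create_namespaced_replica_set(namespace="default", body=rs)

In practice a Deployment (described later) is usually created instead, and it manages ReplicaSets on the user's behalf.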

Services

A Kubernetes Service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitutes a service is defined by a label selector. Kubernetes provides two modes of service discovery: using environment variables or using Kubernetes DNS. Service discovery assigns a stable IP address and DNS name to the service, and load-balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine). By default a service is exposed inside the cluster (e.g., back-end pods might be grouped into a service, with requests from front-end pods load-balanced among them), but a service can also be exposed outside the cluster (e.g., for clients to reach front-end pods).
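
A sketch of a ClusterIP Service that selects the pods labeled app=web from the earlier examples, again using the Python client under the same assumptions:

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 svc = client.V1Service(
     metadata=client.V1ObjectMeta(name="web"),
     spec=client.V1ServiceSpec(
         selector={"app": "web"},  # label selector picking the backing pods
         ports=[client.V1ServicePort(port=80, target_port=80)],
         type="ClusterIP",  # default: reachable only inside the cluster
     ),
 )
 v1.create_namespaced_service(namespace="default", body=svc)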

Volume

Filesystems in Kubernetes containers provide ephemeral storage by default. This means that a restart of the pod wipes out any data in such containers, so this form of storage is quite limiting for anything but trivial applications. A Kubernetes Volume provides storage that exists for the lifetime of the pod itself. This storage can also be used as shared disk space for containers within the pod. Volumes are mounted at specific mount points within a container, defined by the pod configuration, and cannot mount onto or link to other volumes. The same volume can be mounted at different points in the filesystem tree by different containers.
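
A sketch of a pod whose two containers share one emptyDir volume mounted at different paths (the names and busybox image are illustrative; the volume lives only as long as the pod):

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 shared = client.V1Volume(name="shared-data", empty_dir=client.V1EmptyDirVolumeSource())
 pod = client.V1Pod(
     metadata=client.V1ObjectMeta(name="shared-volume-demo"),
     spec=client.V1PodSpec(
         volumes=[shared],
         containers=[
             client.V1Container(
                 name="writer",
                 image="busybox:1.36",
                 command=["sh", "-c", "echo hello > /data/msg && sleep 3600"],
                 volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/data")],
             ),
             client.V1Container(
                 name="reader",
                 image="busybox:1.36",
                 command=["sh", "-c", "sleep 3600"],
                 # the same volume mounted at a different point in this container's filesystem
                 volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/mnt/shared")],
             ),
         ],
     ),
 )
 v1.create_namespaced_pod(namespace="default", body=pod)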

Namespaces

Kubernetes partitions the resources it manages into non-overlapping sets called namespaces. They are intended for use in environments with many users spread across multiple teams or projects, or even for separating environments such as development, testing, and production.
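
A minimal sketch of creating per-environment namespaces with the Python client (the environment names are illustrative):

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 # One namespace per environment keeps their resources from overlapping.
 for env in ("development", "test", "production"):
     v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=env)))
 
 print([ns.metadata.name for ns in v1.list_namespace().items])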

ConfigMaps and Secrets

A common application challenge is deciding where to store and manage configuration information, some of which may contain sensitive data. Configuration data can be anything from fine-grained individual properties to coarse-grained information such as entire configuration files or JSON/XML documents. Kubernetes provides two closely related mechanisms to deal with this need: "configmaps" and "secrets", both of which allow configuration changes to be made without requiring an application rebuild. The data from configmaps and secrets is made available to every instance of the application to which these objects have been bound via the deployment. A secret and/or a configmap is only sent to a node if a pod on that node requires it, and Kubernetes keeps it in memory on that node. Once the pod that depends on the secret or configmap is deleted, the in-memory copies of all bound secrets and configmaps are deleted as well. The data is accessible to the pod in one of two ways: a) as environment variables (created by Kubernetes when the pod is started) or b) on a container filesystem that is visible only from within the pod.

The data itself is stored on the master, which is a highly secured machine that nobody should have login access to. The biggest difference between a secret and a configmap is that the content of the data in a secret is base64 encoded. (In newer versions of Kubernetes, secrets are stored encrypted in etcd.)
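
A sketch of creating a configmap and a secret and exposing both to a container as environment variables, under the same assumptions (the names, values, and image are illustrative; note the base64 encoding of the secret value):

 import base64
 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 cm = client.V1ConfigMap(
     metadata=client.V1ObjectMeta(name="app-config"),
     data={"LOG_LEVEL": "info"},
 )
 secret = client.V1Secret(
     metadata=client.V1ObjectMeta(name="app-secret"),
     data={"DB_PASSWORD": base64.b64encode(b"s3cr3t").decode()},  # secret values are base64-encoded
 )
 v1.create_namespaced_config_map(namespace="default", body=cm)
 v1.create_namespaced_secret(namespace="default", body=secret)
 
 # Expose both to a container as environment variables.
 container = client.V1Container(
     name="app",
     image="busybox:1.36",
     command=["sh", "-c", "env && sleep 3600"],
     env_from=[
         client.V1EnvFromSource(config_map_ref=client.V1ConfigMapEnvSource(name="app-config")),
         client.V1EnvFromSource(secret_ref=client.V1SecretEnvSource(name="app-secret")),
     ],
 )
 pod = client.V1Pod(
     metadata=client.V1ObjectMeta(name="config-demo"),
     spec=client.V1PodSpec(containers=[container]),
 )
 v1.create_namespaced_pod(namespace="default", body=pod)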

StatefulSets

It is very easy to address the scaling of stateless applications: one simply adds more running pods—which is something that Kubernetes does very well. Stateful workloads are much harder, because the state needs to be preserved if a pod is restarted, and if the application is scaled up or down, then the state may need to be redistributed. Databases are an example of stateful workloads. When run in high-availability mode, many databases come with the notion of a primary instance and secondary instance(s). In this case, the notion of ordering of instances is important. Other applications like Kafka distribute the data amongst their brokers—so one broker is not the same as another. In this case, the notion of instance uniqueness is important. StatefulSets[33] are controllers (see Controller Manager, below) that are provided by Kubernetes that enforce the properties of uniqueness and ordering amongst instances of a pod and can be used to run stateful applications.
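
A sketch of a StatefulSet with three ordered replicas (db-0, db-1, db-2), assuming a matching headless Service named db already exists; the postgres image is illustrative:

 from kubernetes import client, config
 
 config.load_kube_config()
 apps = client.AppsV1Api()
 
 sts = client.V1StatefulSet(
     metadata=client.V1ObjectMeta(name="db"),
     spec=client.V1StatefulSetSpec(
         service_name="db",  # headless Service assumed to exist
         replicas=3,         # pods get stable, ordered names: db-0, db-1, db-2
         selector=client.V1LabelSelector(match_labels={"app": "db"}),
         template=client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(labels={"app": "db"}),
             spec=client.V1PodSpec(
                 containers=[client.V1Container(name="postgres", image="postgres:16")]
             ),
         ),
     ),
 )
 apps.create_namespaced_stateful_set(namespace="default", body=sts)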

DaemonSets

Normally, the locations where pods are run are determined by the algorithm implemented in the Kubernetes Scheduler. For some use cases, though, there could be a need to run a pod on every single node in the cluster. This is useful for use cases like log collection, ingress controllers, and storage services. The ability to do this kind of pod scheduling is implemented by the feature called DaemonSets.
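
A sketch of a DaemonSet that runs one log-collection pod on every node; the agent name and fluent-bit image are illustrative:

 from kubernetes import client, config
 
 config.load_kube_config()
 apps = client.AppsV1Api()
 
 ds = client.V1DaemonSet(
     metadata=client.V1ObjectMeta(name="log-agent"),
     spec=client.V1DaemonSetSpec(
         selector=client.V1LabelSelector(match_labels={"app": "log-agent"}),
         template=client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(labels={"app": "log-agent"}),
             spec=client.V1PodSpec(
                 containers=[client.V1Container(name="agent", image="fluent/fluent-bit:2.2")]
             ),
         ),
     ),
 )
 apps.create_namespaced_daemon_set(namespace="kube-system", body=ds)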

Secrets

Secrets contain the SSH keys, passwords, and OAuth tokens for the pod.


Managing Kubernetes objects

Kubernetes provides some mechanisms that allow one to manage, select, or manipulate its objects.

Labels and selectors

Kubernetes enables clients (users or internal components) to attach keys called "labels" to any API object in the system, such as pods and nodes. Correspondingly, "label selectors" are queries against labels that resolve to matching objects.[23] When a service is defined, one can define the label selectors that will be used by the service router / load balancer to select the pod instances that the traffic will be routed to. Thus, simply changing the labels of the pods or changing the label selectors on the service can be used to control which pods get traffic and which don't, which can be used to support various deployment patterns like blue-green deployments or A-B testing. This capability to dynamically control how services utilize implementing resources provides a loose coupling within the infrastructure.

For example, if an application's pods have labels for a system tier (with values such as front-end, back-end, for example) and a release_track (with values such as canary, production, for example), then an operation on all of back-end and canary nodes can use a label selector, such as:[36]

tier=back-end AND release_track=canary
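
With the Python client the same query is expressed as a comma-separated label selector, where the comma means AND (the namespace and labels are illustrative):

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 # Equivalent of the query above: all pods labeled tier=back-end AND release_track=canary.
 pods = v1.list_namespaced_pod(
     namespace="default",
     label_selector="tier=back-end,release_track=canary",
 )
 for p in pods.items:
     print(p.metadata.name, p.metadata.labels)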

Field selectors

Just like labels, field selectors also let one select Kubernetes resources. Unlike labels, the selection is based on the attribute values inherent to the resource being selected, rather than user-defined categorization. metadata.name and metadata.namespace are field selectors that will be present on all Kubernetes objects. Other selectors that can be used depend on the object/resource type.
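
A short sketch of the equivalent field-selector query (the pod name web is illustrative):

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 # Select by built-in fields rather than user-defined labels: running pods named "web".
 pods = v1.list_namespaced_pod(
     namespace="default",
     field_selector="status.phase=Running,metadata.name=web",
 )
 print([p.metadata.name for p in pods.items])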

Replication Controllers and Deployments

A ReplicaSet declares the number of instances of a pod that is needed, and a Replication Controller manages the system so that the number of healthy pods that are running matches the number of pods declared in the ReplicaSet (determined by evaluating its selector).

Deployments are a higher level management mechanism for ReplicaSets. While the Replication Controller manages the scale of the ReplicaSet, Deployments will manage what happens to the ReplicaSet - whether an update has to be rolled out, or rolled back, etc. When deployments are scaled up or down, this results in the declaration of the ReplicaSet changing - and this change in declared state is managed by the Replication Controller.
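
A sketch of creating a Deployment and then triggering a rolling update by patching its pod template; the names and image tags are illustrative:

 from kubernetes import client, config
 
 config.load_kube_config()
 apps = client.AppsV1Api()
 
 dep = client.V1Deployment(
     metadata=client.V1ObjectMeta(name="web"),
     spec=client.V1DeploymentSpec(
         replicas=3,
         selector=client.V1LabelSelector(match_labels={"app": "web"}),
         template=client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(labels={"app": "web"}),
             spec=client.V1PodSpec(
                 containers=[client.V1Container(name="nginx", image="nginx:1.25")]
             ),
         ),
     ),
 )
 apps.create_namespaced_deployment(namespace="default", body=dep)
 
 # Rolling update: patching the pod template makes the Deployment create a new
 # ReplicaSet and scale the old one down.
 apps.patch_namespaced_deployment(
     name="web",
     namespace="default",
     body={"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.26"}]}}}},
 )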

Cluster API

The design principles underlying Kubernetes allow one to programmatically create, configure, and manage Kubernetes clusters. This function is exposed via an API called the Cluster API. A key concept embodied in the API is the notion that the Kubernetes cluster is itself a resource / object that can be managed just like any other Kubernetes resource. Similarly, the machines that make up the cluster are also treated as Kubernetes resources. The API has two pieces: the core API, and a provider implementation. The provider implementation consists of cloud-provider specific functions that let Kubernetes provide the cluster API in a fashion that is well-integrated with the cloud provider's services and resources.

Architecture

Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.

Kubernetes control plane

The Kubernetes master is the main controlling unit of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each its own process, that can run both on a single master node or on multiple masters supporting high-availability clusters.[37] The various components of the Kubernetes control plane are as follows:


etcd: etcd[38] is a persistent, lightweight, distributed, key-value data store developed by CoreOS that reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. Just like Apache ZooKeeper, etcd is a system that favors consistency over availability in the event of a network partition (see CAP theorem). This consistency is crucial for correctly scheduling and operating services. The Kubernetes API Server uses etcd's watch API to monitor the cluster and roll out critical configuration changes or simply restore any divergences of the state of the cluster back to what was declared by the deployer. As an example, if the deployer specified that three instances of a particular pod need to be running, this fact is stored in etcd. If it is found that only two instances are running, this delta will be detected by comparison with etcd data, and Kubernetes will use this to schedule the creation of an additional instance of that pod.[37]

API server: The API server is a key component and serves the Kubernetes API using JSON over HTTP, which provides both the internal and external interface to Kubernetes.[23][39] The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes.[40]

Scheduler: The scheduler is the pluggable component that selects which node an unscheduled pod (the basic entity managed by the scheduler) runs on, based on resource availability. The scheduler tracks resource use on each node to ensure that workload is not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints and policy directives such as quality-of-service, affinity/anti-affinity requirements, data locality, and so on. In essence, the scheduler's role is to match resource "supply" to workload "demand".[41]

Controller manager: A controller is a reconciliation loop that drives actual cluster state toward the desired cluster state, communicating with the API server to create, update, and delete the resources it manages (pods, service endpoints, etc.).[42][39] The controller manager is a process that manages a set of core Kubernetes controllers. One kind of controller is a Replication Controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. It also handles creating replacement pods if the underlying node fails.[42] Other controllers that are part of the core Kubernetes system include a DaemonSet Controller for running exactly one pod on every machine (or some subset of machines), and a Job Controller for running pods that run to completion, e.g. as part of a batch job.[43] The set of pods that a controller manages is determined by label selectors that are part of the controller's definition.[36]

Kubernetes node

A Node, also known as a Worker or a Minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime such as Docker, as well as the below-mentioned components, for communication with the primary for network configuration of these containers.
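
Both the control-plane controllers and the per-node components observe cluster state through the API server's watch mechanism described above. A rough client-side sketch of consuming such a watch stream with the official Python client (the namespace and timeout are illustrative):

 from kubernetes import client, config, watch
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 # Stream pod events the way controllers do: the API server relays changes it observes in etcd.
 w = watch.Watch()
 for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
     pod = event["object"]
     print(event["type"], pod.metadata.name, pod.status.phase)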

Kubelet: Kubelet is responsible for the running state of each node, ensuring that all containers on the node are healthy. It takes care of starting, stopping, and maintaining application containers organized into pods as directed by the control plane.[23][44] Kubelet monitors the state of a pod, and if it is not in the desired state, the pod is re-deployed to the same node. Node status is relayed every few seconds via heartbeat messages to the primary. Once the primary detects a node failure, the Replication Controller observes this state change and launches pods on other healthy nodes.

Kube-proxy: The Kube-proxy is an implementation of a network proxy and a load balancer, and it supports the service abstraction along with other networking operations.[23] It is responsible for routing traffic to the appropriate container based on the IP and port number of the incoming request.

Container runtime: A container resides inside a pod. The container is the lowest level of a micro-service, which holds the running application, libraries, and their dependencies. Containers can be exposed to the world through an external IP address. Kubernetes has supported Docker containers since its first version, and in July 2016 the rkt container engine was added.[45]

Add-ons

Add-ons operate just like any other application running within the cluster: they are implemented via pods and services, and are only different in that they implement features of the Kubernetes cluster. The pods may be managed by Deployments, ReplicationControllers, and so on. There are many add-ons, and the list is growing. Some of the more important are:

DNS: All Kubernetes clusters should have cluster DNS; it is a mandatory feature. Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.

Web UI: This is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring: Providing a reliable application runtime, and being able to scale it up or down in response to workloads, means being able to continuously and effectively monitor workload performance. Container Resource Monitoring provides this capability by recording metrics about containers in a central database, and provides a UI for browsing that data. The cAdvisor is a component on a slave node that provides a limited metric monitoring capability. There are full metrics pipelines as well, such as Prometheus, which can meet most monitoring needs.

Cluster-level logging: Logs should have a separate storage and lifecycle independent of nodes, pods, or containers. Otherwise, node or pod failures can cause loss of event data. The ability to do this is called cluster-level logging, and such mechanisms are responsible for saving container logs to a central log store with a search/browsing interface. Kubernetes provides no native storage for log data, but one can integrate many existing logging solutions into the Kubernetes cluster.

Microservices

Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.

Kubernetes Persistent Storage

Containers emerged as a way to make software portable. The container contains all the packages needed to run a service. The provided filesystem makes containers extremely portable and easy to use in development. A container can be moved from development to test or production with no or relatively few configuration changes.

Historically Kubernetes was suitable only for stateless services. However, many applications have a database, which requires persistence, which leads to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges of Kubernetes administrators, DevOps and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so one needs to ensure the data's survival in case of container termination or hardware failure.

When deploying containers with Kubernetes or containerized applications, companies often realize that they need persistent storage. They need to provide fast and reliable storage for databases, root images and other data used by the containers.
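
A sketch of requesting persistent storage through a PersistentVolumeClaim and referencing it from a pod volume; the claim name, size, and StorageClass are illustrative and assume the cluster has a provisioner for that class:

 from kubernetes import client, config
 
 config.load_kube_config()
 v1 = client.CoreV1Api()
 
 pvc = client.V1PersistentVolumeClaim(
     metadata=client.V1ObjectMeta(name="db-data"),
     spec=client.V1PersistentVolumeClaimSpec(
         access_modes=["ReadWriteOnce"],
         resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
         storage_class_name="standard",  # assumes such a StorageClass exists
     ),
 )
 v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
 
 # A pod then mounts the claim so its data outlives container restarts.
 volume = client.V1Volume(
     name="data",
     persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="db-data"),
 )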

In addition to its landscape, the Cloud Native Computing Foundation (CNCF) has published other information about Kubernetes Persistent Storage, including a blog post helping to define the container attached storage pattern. This pattern can be thought of as one that uses Kubernetes itself as a component of the storage system or service.[46]

More information about the relative popularity of these and other approaches can be found on the CNCF's landscape survey as well, which showed that OpenEBS from MayaData and Rook - a storage orchestration project - were the two projects most likely to be in evaluation as of the Fall of 2019.[47]

References

Interesting Links

Minikube

helm


k3s

CaaS