{"id":4053,"date":"2025-12-23T22:36:10","date_gmt":"2025-12-23T22:36:10","guid":{"rendered":"https:\/\/wiki.thomasandsofia.com\/?p=4053"},"modified":"2026-01-05T20:55:09","modified_gmt":"2026-01-05T20:55:09","slug":"kubernetes-in-4-hours","status":"publish","type":"post","link":"https:\/\/wiki.thomasandsofia.com\/?p=4053","title":{"rendered":"Kubernetes in 4 hours"},"content":{"rendered":"<p><iframe loading=\"lazy\" title=\"Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]\" width=\"900\" height=\"506\" src=\"https:\/\/www.youtube.com\/embed\/X48VuDVv0do?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<h1>Definitions<\/h1>\n<h3>Cluster<\/h3>\n<p>A cluster is a group of <strong>Nodes<\/strong>\u00a0that will host one or more <strong>K8s<\/strong> <strong>Deployments<\/strong>.<\/p>\n<h3>ConfigMap<\/h3>\n<p>URL endpoints for services are defined here. This allows them to change without having to reconfigure applications that call them directly. <strong>Pods<\/strong> then call the ConfigMap to understand where to send the data without having to rebuild the pod. You connect it to the pods so they can access the information.<\/p>\n<p>These may also contain metadata such as database usernames and passwords (which may also change). Since passwords should not be stored in plain text, these should be stored as <strong>Secret<\/strong>s.<\/p>\n<p>The data in these files can then be accessed using Environmental Variables or as a properties file.<\/p>\n<h3>Container<\/h3>\n<h3>Deployment<\/h3>\n<p>In practice, you do not work specifically with Pods, but instead you work with deployments. Deployments are abstractions of Pods. Each deployment is a blueprint for each POD. 
These blueprints will contain information such as<\/p>\n<ul>\n<li>Number of replicas to maintain<\/li>\n<li>Scaling these numbers up or down<\/li>\n<\/ul>\n<p>Note: Databases cannot be defined as Deployments because they are stateful. Ref: <strong>StatefulSets<\/strong><\/p>\n<p><strong>Layers of Abstraction:<\/strong><\/p>\n<ul>\n<li>Deployments manage Replica sets<\/li>\n<li>Replica sets manage all replicas of Pods<\/li>\n<li>Pods are an abstraction of containers.<\/li>\n<\/ul>\n<p>A Deployment is as deep as a K8s admin needs to go!!<\/p>\n<h3>External Services<\/h3>\n<p>A <strong>service<\/strong> that is available to the public, such as a web server or API endpoint. These are, by default, accessible via the <strong>Node<\/strong>&#8216;s IP address, followed by a specific port. Ex: HTTP:\/\/124.87.101.2:8080. This is good for test purposes, but not practical for production.<\/p>\n<p>To use a standard domain name (Ex: HTTP:\/\/my-app.com), you will use <strong>Ingress<\/strong>.<\/p>\n<h3>Ingress<\/h3>\n<p>Ingress can be assigned a DNS resolvable domain. Ex: HTTP:\/\/my-app.com. These connections are then forwarded to <strong>External Services<\/strong>.<\/p>\n<h3>Internal Service<\/h3>\n<p>A <strong>service<\/strong> that is NOT available to the public (such as a direct connection to a database).<\/p>\n<h3>K8s<\/h3>\n<p><strong>K<\/strong> (First letter is Upper Case) &#8216;u b e r n e t e&#8217; (<strong>8<\/strong> characters between the 1st and last letters in the name) <strong>s<\/strong> (last letter is lower case)<\/p>\n<h3>Namespace<\/h3>\n<h3>Node<\/h3>\n<p>A node is a server (physical, virtual or some combination) that will host 1 or more <strong>Pods<\/strong>. Multiple nodes that support an application are grouped together in a <strong>CLUSTER<\/strong>.<\/p>\n<h3>Pod<\/h3>\n<p>Pods are the smallest unit in K8s. Pods are wrappers for <strong>Containers<\/strong>. 
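<\/p>\n<p>As an illustrative sketch (the name and image tag below are hypothetical placeholders, not from the course), a bare Pod manifest wrapping a single container looks like this:<\/p>

```yaml
# Minimal Pod manifest (sketch) -- a Pod wrapping exactly one container.
# 'my-pod' and the nginx image tag are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web          # the single wrapped container
    image: nginx:1.16
```

<p>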
Most pods only contain a single container; however, it is possible to have more than one container if these are closely linked (Container A always requires Container B when it is deployed.)<\/p>\n<p>Pods (not their containers) are assigned static IPs upon creation. If a pod crashes and a new one is created in its place, the new pod will be assigned a new IP, which could cause issues if another container is trying to access this one by the IP. For this reason, K8s uses <strong>Services.<\/strong><\/p>\n<h3>Replica sets<\/h3>\n<p>These are automatically controlled and they manage how many replicas need to be created based on the deployment config.<\/p>\n<h3>Secret<\/h3>\n<p>Very similar to <strong>ConfigMap<\/strong>s, but it is used to store credentials, such as passwords and certificates. Instead of storing these in plain text, they are Base-64 encoded. Like ConfigMaps, you attach them to the <strong>Pods<\/strong>.<\/p>\n<p>The data in these files can then be accessed using Environmental Variables or as a properties file.<\/p>\n<h3>Service<\/h3>\n<p>Services have 2 functions: Permanent IP address and Load Balancer<\/p>\n<p>IP Address<\/p>\n<p><strong>Pods<\/strong>\u00a0communicate with each other via services. A service is a static (permanent) IP that can be assigned to pods. Each service has its own unique IP (for example, a database service). Applications should use the service IP instead of the pod&#8217;s IP, and traffic will automatically get routed to the pod. Since the life cycles of the pod and the service are not connected, if the pod dies and a new one is created in its place, applications can still use the Service to talk to the pod.<\/p>\n<p>Load Balancer<\/p>\n<p>When a request is received, it will forward the data to whichever Pod is less busy. (Ref: <strong>Deployments<\/strong>)<\/p>\n<h3>StatefulSets<\/h3>\n<p>Used for managing Stateful applications, such as Databases. (MySQL, MongoDB, ElasticSearch, etc.) vs. 
<strong>Deployments<\/strong>, which are for Stateless applications.<\/p>\n<p>To prevent database inconsistencies, StatefulSets control which pods are reading or writing data at any time.<\/p>\n<p>Like Deployments, StatefulSets control the number of database replicas as well as scaling these numbers up or down.<\/p>\n<p>Note: Deploying StatefulSets can be tricky! As such, most databases are actually hosted OUTSIDE of the K8s cluster, with K8s reserved for applications that are stateless and can scale accordingly.<\/p>\n<h3>Volumes<\/h3>\n<p>Since <strong>Pods<\/strong> are ephemeral and can be killed and restarted on demand, any data that exists on them would be lost as soon as the pod dies. Since database, log, and other data must persist reliably, volumes are used. Volumes attach physical storage to your pods. The volume may reside on the local <strong>Node<\/strong> where the pod is running, or it may be remote, including somewhere else in the <strong>cluster<\/strong>, cloud storage, or other storage outside of the cluster, such as a SAN.<\/p>\n<p>Think of a Volume as an external hard drive plugged into the cluster.<\/p>\n<p>Note: K8s does not manage the data. The Administrator is responsible for all backups, data replication, etc.<\/p>\n<p>&nbsp;<\/p>\n<h1>Architecture<\/h1>\n<h3>Worker Nodes<\/h3>\n<ul>\n<li>Worker Nodes do the actual work. 
There may be hundreds of these!<\/li>\n<li>Each node can run multiple Pods<\/li>\n<li>3 processes must be installed on each worker node:\n<ul>\n<li>Container Runtime:\u00a0Docker usually, but can be something else.\n<ul>\n<li>This is external to K8s.<\/li>\n<\/ul>\n<\/li>\n<li>Kubelet: Runs the pods (and subsequently the containers in them)\n<ul>\n<li>Interacts with both the container runtime and the node.<\/li>\n<li>Starts the pods with the containers inside.<\/li>\n<li>Assigns resources from the node to the containers, such as CPU, RAM, Storage resources, etc.<\/li>\n<\/ul>\n<\/li>\n<li>Kube Proxy\n<ul>\n<li>Forwards communications between Pods.<\/li>\n<li>High Performance. Will try to keep communications within the same Node whenever possible.\n<ul>\n<li>This reduces network loads.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Master Nodes<\/h3>\n<ul>\n<li>Master Nodes are used to manage the cluster.<\/li>\n<li>Should have a minimum of 2 for High Availability.<\/li>\n<li>4 processes on every Master Node\n<ul>\n<li>API Server\n<ul>\n<li>Load balanced between multiple Master Nodes.<\/li>\n<li>This is the cluster gateway.\n<ul>\n<li>The client could be an external process, such as a UI, API client, Kubectl command line, etc.<\/li>\n<li>Configurations, such as updates, or queries about the cluster&#8217;s health, are handled via the API Server<\/li>\n<\/ul>\n<\/li>\n<li>Acts as a gatekeeper for authentication!\n<ul>\n<li>Only authorized requests get through to the cluster.<\/li>\n<li>All requests must go through the API server, which\n<ul>\n<li>Validates the request and forwards it on to the other processes.<\/li>\n<\/ul>\n<\/li>\n<li>Makes security somewhat easier because there is only 1 entry point.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>Scheduler\n<ul>\n<li>Used to decide where to create Pods.<\/li>\n<li>Has intelligence to determine which Worker Node new pods should be deployed on.\n<ul>\n<li>Based on available resources, 
etc.<\/li>\n<\/ul>\n<\/li>\n<li>Scheduler DOES NOT actually create the new Pods; that is handled by the Worker Node&#8217;s Kubelet process.<\/li>\n<\/ul>\n<\/li>\n<li>Controller Manager\n<ul>\n<li>Detects cluster state changes. Watches for Nodes and Pods that die and reschedules them as quickly as possible.<\/li>\n<li>Passes this data to the Scheduler to determine where to spin up the new Pods.<\/li>\n<\/ul>\n<\/li>\n<li>etcd\n<ul>\n<li>The cluster brain!\u00a0A key-&gt;value store.<\/li>\n<li>Shared storage when using multiple Master Nodes.<\/li>\n<li>All K8s Master Processes talk to etcd.\n<ul>\n<li>Any time a Node or Pod is added, the changes are stored here.<\/li>\n<li>Actual application data (such as app logs, database data, etc.) is NOT stored here!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Example Setup<\/h3>\n<ul>\n<li>2 Master Nodes\n<ul>\n<li>More important than Worker Nodes, but require fewer resources.<\/li>\n<\/ul>\n<\/li>\n<li>3 Worker Nodes<\/li>\n<\/ul>\n<h1>Minikube and Kubectl<\/h1>\n<h3>Minikube<\/h3>\n<ul>\n<li>A test environment on a local machine would be difficult to set up as defined in the Example Setup above. (2 MN, 3 WN)<\/li>\n<li>Minikube is an Open Source single-node cluster that runs in VirtualBox.\n<ul>\n<li>Master processes and Worker processes all run on the same host.<\/li>\n<li>Docker container runtime will be pre-installed.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Kubectl<\/h3>\n<ul>\n<li>Command line tool for K8s clusters\n<ul>\n<li>All configurations must go through API Server<\/li>\n<li>Kubectl is the most powerful of all API Server clients.\n<ul>\n<li>You can do anything you want with Kubectl!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>Kubectl talks to API Server which controls everything else. (Create services, add Nodes, create\/destroy pods, etc.)<\/li>\n<li>Not limited to Minikube. 
This is &#8216;the&#8217; tool to use for any type of cluster!<\/li>\n<\/ul>\n<h1>Installation (0:39:00)<\/h1>\n<h2>Install Minikube<\/h2>\n<ul>\n<li>Followed instructions from here: https:\/\/www.youtube.com\/watch?v=d-io3hKFdWs\n<ul>\n<li>Installs directly on a running Linux VM<\/li>\n<li>Requires:\n<ul>\n<li>Docker is pre-installed<\/li>\n<li>User is member of the docker group\n<ul>\n<li><code>sudo usermod -aG docker $USER<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<pre>sudo apt update\r\nsudo apt upgrade -y\r\ncurl -LO https:\/\/storage.googleapis.com\/minikube\/releases\/latest\/minikube-linux-amd64\r\nsudo install minikube-linux-amd64 \/usr\/local\/bin\/minikube\r\n\r\n# Verify it installed\r\nminikube version\r\n\r\n<\/pre>\n<h2>Install kubectl<\/h2>\n<pre>curl -LO https:\/\/storage.googleapis.com\/kubernetes-release\/release\/`curl -s https:\/\/storage.googleapis.com\/kubernetes-release\/release\/stable.txt`\/bin\/linux\/amd64\/kubectl\r\n# make executable and move\r\nchmod +x kubectl\r\nsudo mv kubectl \/usr\/local\/bin\r\n\r\n# Verify it installed\r\nkubectl version -o yaml<\/pre>\n<h2>Start the cluster<\/h2>\n<pre>minikube start --driver=docker\r\n\r\n# Verify the status\r\nminikube status<\/pre>\n<h1>Basic kubectl commands<\/h1>\n<h2>General<\/h2>\n<pre># Help\r\nkubectl -h\r\n\r\n# Create anything help\r\nkubectl create -h<\/pre>\n<p>&nbsp;<\/p>\n<h2>Cluster Commands<\/h2>\n<p><strong>Get Everything<\/strong><\/p>\n<pre>kubectl get all<\/pre>\n<p><strong>Get basic info<\/strong><\/p>\n<pre>kubectl cluster-info<\/pre>\n<p><strong>Check cluster nodes<\/strong><\/p>\n<pre>kubectl get nodes<\/pre>\n<p>Check services<\/p>\n<pre>kubectl get services<\/pre>\n<h2>Node Commands<\/h2>\n<p><strong>Check cluster nodes<\/strong><\/p>\n<pre>kubectl get nodes<\/pre>\n<p>&nbsp;<\/p>\n<h2>Deployment\/Pod Commands<\/h2>\n<p>Note: Pretty 
much all creation\/deletion\/editing is done at the Deployment level. The rest is really just looking at things.<\/p>\n<p><strong>Create a Deployment<\/strong><\/p>\n<p>Use this for creating Pods!<\/p>\n<pre>kubectl create deployment &lt;DEPLOYMENT_NAME&gt; --image=&lt;DOCKER_IMAGE_NAME&gt; [--dry-run=client] [OTHER_OPTIONS]\r\n\r\nExample:\r\nkubectl create deployment nginx-depl --image=nginx \r\n# Check the details \r\n# This will take about a minute to fully spin up \r\nkubectl get pod \r\nNAME                        READY STATUS  RESTARTS AGE \r\nnginx-depl-5fcbf6fffd-zxhkr 1\/1   Running 0        51s \r\n# deploy_name-replica_set_id-pod_id<\/pre>\n<p><strong>Delete a Deployment<\/strong><\/p>\n<pre>kubectl delete deployment &lt;DEPLOYMENT_NAME&gt;<\/pre>\n<p><strong>Edit a Deployment<\/strong><\/p>\n<pre>kubectl edit deployment &lt;NAME&gt;\r\n\r\n# Example\r\nkubectl edit deployment nginx-depl<\/pre>\n<p><strong>Interact with the pod (CLI Terminal)<\/strong><\/p>\n<ul>\n<li><code>-it<\/code> = Interactive Terminal<\/li>\n<\/ul>\n<pre>kubectl exec -it &lt;POD_NAME&gt; -- \/bin\/bash<\/pre>\n<p><strong>Logs<\/strong><\/p>\n<pre>kubectl logs &lt;POD_NAME&gt;\r\n\r\nExample:\r\nkubectl logs nginx-depl-5fcbf6fffd-zxhkr<\/pre>\n<p>&nbsp;<\/p>\n<p><strong>View Deployments<\/strong><\/p>\n<pre>kubectl get deployments<\/pre>\n<p><strong>View Pods<\/strong><\/p>\n<pre>kubectl get pod<\/pre>\n<p><strong>View Pod Status<\/strong><\/p>\n<pre>kubectl describe pod &lt;POD_NAME&gt;<\/pre>\n<p><strong>View Replica Sets<\/strong><\/p>\n<pre>kubectl get replicaset<\/pre>\n<h1>Using Configuration Files<\/h1>\n<p>0:56:28<\/p>\n<p>Exclusive use of the CLI is not scalable. In practice, you&#8217;ll use config files. 
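<\/p>\n<p>As a hedged sketch of the contrast (the filenames here are illustrative, and both commands assume a running cluster, so treat this as a pattern rather than something to paste blindly):<\/p>

```shell
# Imperative (CLI-only): quick for experiments, but leaves no record behind
kubectl create deployment nginx-depl --image=nginx

# Declarative: the manifest lives in version control and can be re-applied any time
kubectl apply -f nginx-deployment.yaml

# Validate a manifest client-side without changing the cluster
kubectl apply -f nginx-deployment.yaml --dry-run=client
```

<p>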
To do this, use the &#8216;apply&#8217; command.<\/p>\n<pre>kubectl apply -f &lt;FILENAME&gt;\r\n\r\nExample\r\nkubectl apply -f nginx-deployment.yaml<\/pre>\n<p><strong>You can also delete configurations<\/strong><\/p>\n<pre>kubectl delete -f &lt;FILENAME&gt;<\/pre>\n<h3>Simple deployment config example:<\/h3>\n<p>Deployment Example: nginx-depl.yaml<\/p>\n<pre>apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata: \r\n  name: nginx-depl\r\n  labels: # These are used by Services to identify which deployments belong to a service.\r\n    app: nginx\r\nspec: # Specification for the Deployment\r\n  replicas: 1\r\n  selector:\r\n    matchLabels: # This is how K8s identifies which Pods belong to this deployment\r\n      app: nginx\r\n  template:\r\n    metadata:\r\n      labels: # These will be 'matched' with matchLabels above.\r\n        app: nginx\r\n    spec: # Specification for the Pod (aka Pod Blueprint)\r\n      containers:\r\n      - name: nginx # Only 1 container in the Pod, but could have more!\r\n        image: nginx:1.16\r\n        ports:\r\n        - containerPort: 8080<\/pre>\n<p>Service Example: nginx-service.yaml<\/p>\n<pre>apiVersion: v1\r\nkind: Service \r\nmetadata:\r\n  name: nginx-service\r\nspec: # Specification for the Deployment\r\n  selector:\r\n    app: nginx\r\n  ports:\r\n    - protocol: TCP\r\n      port: 80\r\n      targetPort: 8080<\/pre>\n<p><strong>Apply the configuration<\/strong><\/p>\n<pre>kubectl apply -f nginx-depl.yaml<\/pre>\n<p><em>Things to consider:<\/em><\/p>\n<ol>\n<li>You cannot apply a configuration file to a deployment that was configured via <code>kubectl create deployment ...<\/code>. 
You will need to delete the deployment first.<\/li>\n<li>Once a deployment has been created using a config file, you may simply update the file, then rerun the apply command to perform updates.<\/li>\n<li>Configuration files should be stored with your code, or in their own git repository.<\/li>\n<\/ol>\n<h2>Understanding K8s YAML configuration files<\/h2>\n<p>1:02:00<\/p>\n<p>Deployment Example:<\/p>\n<pre>apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata: \r\n  name: nginx-depl\r\n  labels: ...\r\nspec:\r\n  replicas: 1\r\n  selector: ...\r\n  template: ...<\/pre>\n<p>Service Example:<\/p>\n<pre>apiVersion: v1\r\nkind: Service\r\nmetadata: \r\n  name: nginx-service\r\nspec:\r\n  selector: ...\r\n  ports: ...<\/pre>\n<p><strong>Three parts to a K8s config (excluding the header)<\/strong><\/p>\n<p>The header (not counted among the three parts) consists of:<\/p>\n<ul>\n<li>apiVersion:\n<ul>\n<li>Note: Each component may have a different apiVersion<\/li>\n<\/ul>\n<\/li>\n<li>kind: Defines what you are creating<\/li>\n<\/ul>\n<p>The three parts:<\/p>\n<ol>\n<li>metadata:\n<ul>\n<li>Will include the name<\/li>\n<\/ul>\n<\/li>\n<li>spec: # Specification\n<ul>\n<li>These attributes will be specific to the &#8216;kind&#8217; of component being created.<\/li>\n<\/ul>\n<\/li>\n<li>status:\n<ul>\n<li>This part is automatically created by K8s.\n<ul>\n<li>DO NOT include this in your config file!!<\/li>\n<\/ul>\n<\/li>\n<li>The status records the actual state. K8s compares the desired state (spec) with the actual state (status) to see if something needs to be fixed.\n<ul>\n<li>This data comes from etcd.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h3>Connecting Components (Labels, Selectors &amp; Ports)<\/h3>\n<p><strong>Labels<\/strong><\/p>\n<ul>\n<li>Always part of metadata<\/li>\n<li>Any key: value pair you can think of (Does not have to be &#8216;app&#8217; as shown in previous examples)<\/li>\n<li>Reference the Simple Config comments above for more info.<\/li>\n<\/ul>\n<p><a 
href=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4064\" src=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02.png\" alt=\"\" width=\"1465\" height=\"826\" srcset=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02.png 1465w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02-300x169.png 300w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02-1024x577.png 1024w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02-768x433.png 768w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-29-15-58-02-150x85.png 150w\" sizes=\"auto, (max-width: 1465px) 100vw, 1465px\" \/><\/a><\/p>\n<p><strong>Ports<\/strong><\/p>\n<ul>\n<li>A container&#8217;s <code>containerPort<\/code> must equal the Service&#8217;s <code>targetPort<\/code>.<\/li>\n<\/ul>\n<p>Service Example:<\/p>\n<pre>spec:\r\n  ports:\r\n    - protocol: TCP\r\n      port: 80 #Incoming port\r\n      targetPort: 8080 #Forward from the incoming port to the port on the container.<\/pre>\n<p>Deployment Example:<\/p>\n<pre>spec: # Deployment specs\r\n  template:\r\n    spec: # Pod specs\r\n      containers:\r\n      - name: \r\n        ...\r\n        ports:\r\n        - containerPort: 8080 # Worker Node's targetPort<\/pre>\n<p>Deploy them both<\/p>\n<pre>kubectl apply -f nginx-depl.yaml\r\nkubectl apply -f nginx-service.yaml<\/pre>\n<p>Verify the relationships<\/p>\n<pre>kubectl get service\r\nkubectl describe service nginx-service\r\n\r\n# look for this....\r\nEndpoints: 10.20.30.40:8080, 10.20.30.41:8080<\/pre>\n<p>Verify we have the correct Pod IPs<\/p>\n<ul>\n<li>-o = 
output<\/li>\n<\/ul>\n<pre>kubectl get pod -o wide\r\n\r\nExample output:\r\nNAME                        READY STATUS  RESTARTS AGE   IP          NODE ...\r\nnginx-depl-7f5cf9f489-62vtg 1\/1   Running 0        9m17s 10.20.30.40 minikube ...\r\nnginx-depl-7f5cf9f489-v9bxn 1\/1   Running 0        9m17s 10.20.30.41 minikube ...<\/pre>\n<p>View the status (automatically added by K8s!)<\/p>\n<pre>kubectl get deployment nginx-depl -o yaml\r\nkubectl get deployment nginx-depl -o yaml &gt; output-file.yaml<\/pre>\n<p>Note: You should probably not use this file as a deployment file. If you do, you will need to strip out a lot of metadata that was automatically added besides just the status data, such as creation times, Ids, etc.<\/p>\n<h1>Complete Application Setup with K8s Components<\/h1>\n<p>1:16:19<\/p>\n<p><a href=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4069\" src=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17.png\" alt=\"\" width=\"1057\" height=\"406\" srcset=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17.png 1057w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17-300x115.png 300w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17-1024x393.png 1024w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17-768x295.png 768w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-30-08-09-17-150x58.png 150w\" sizes=\"auto, (max-width: 1057px) 100vw, 1057px\" \/><\/a><\/p>\n<h2>Create the Deployments<\/h2>\n<p>Note: SECRET files must be created (or at least executed) BEFORE running any files that reference 
them.<\/p>\n<ul>\n<li>\u00a0To view the conf data for mongodb: <a href=\"https:\/\/hub.docker.com\" target=\"_blank\" rel=\"noopener\">https:\/\/hub.docker.com<\/a> Search &#8216;mongodb&#8217;\n<ul>\n<li>Port: 27017<\/li>\n<li>Environmental Variables:\n<ul>\n<li>MONGO_INITDB_ROOT_USERNAME<\/li>\n<li>MONGO_INITDB_ROOT_PASSWORD<\/li>\n<li>We&#8217;ll pull these values from a <strong>SECRET<\/strong>!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>mongo-deployment.yaml<\/strong><\/p>\n<pre>apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  name: mongodb-depl\r\n  labels:\r\n    app: mongodb\r\nspec: # Specification for the Deployment\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: mongodb\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: mongodb\r\n    spec: # Specification for the Pod\r\n      containers:\r\n      - name: mongodb # Only 1 container in the Pod, but could have more!\r\n        image: mongo\r\n        ports:\r\n        - containerPort: 27017\r\n        env:\r\n        - name: MONGO_INITDB_ROOT_USERNAME\r\n          valueFrom:\r\n            secretKeyRef:\r\n              name: mongodb-secret\r\n              key: mongo-root-username\r\n        - name: MONGO_INITDB_ROOT_PASSWORD\r\n          valueFrom:\r\n            secretKeyRef:\r\n              name: mongodb-secret\r\n              key: mongo-root-password \r\n<\/pre>\n<p><strong>Create the SECRETS file<\/strong><\/p>\n<p>Note: To base64 encode the username and password, use <code>echo -n '&lt;MY_SECRET&gt;' | base64<\/code><\/p>\n<ul>\n<li>This example uses `username` and `password` respectively.<\/li>\n<\/ul>\n<p><strong>mongo-secret.yaml<\/strong><\/p>\n<pre>apiVersion: v1\r\nkind: Secret\r\nmetadata:\r\n  name: mongodb-secret\r\ntype: Opaque\r\ndata:\r\n  mongo-root-username: dXNlcm5hbWU=\r\n  mongo-root-password: cGFzc3dvcmQ=<\/pre>\n<p>Apply the secret<\/p>\n<pre>kubectl apply -f mongo-secret.yaml<\/pre>\n<p>Verify the secret has been 
created<\/p>\n<pre>kubectl get secret<\/pre>\n<p><strong>Create the deployment<\/strong><\/p>\n<pre>kubectl apply -f mongo-deployment.yaml<\/pre>\n<h3>Create the service<\/h3>\n<p>Services and Deployments are usually bound together, so they can be combined in the same file. To do this, just add a line consisting of 3 consecutive hyphens, which informs the YAML processor that a new YAML document starts here.<\/p>\n<pre>---\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  name: mongodb-service\r\nspec:\r\n  selector:\r\n    app: mongodb\r\n  ports:\r\n  - protocol: TCP\r\n    port: 27017\r\n    targetPort: 27017<\/pre>\n<p><strong>Verify the service is running<\/strong><\/p>\n<p>Note: This will also show the IP(s) of any pods attached to it.<\/p>\n<pre>kubectl describe service mongodb-service<\/pre>\n<p>You can then verify this IP matches the IP of your Pod.<\/p>\n<pre>kubectl get pod -o wide<\/pre>\n<h2>Create the Mongo Express Deployment and Service<\/h2>\n<p>1:33:24<\/p>\n<p>Create the ConfigMap for the DB so the application can locate it. This time, we&#8217;re creating it first. \ud83d\ude42<\/p>\n<p><strong>mongo-configmap.yaml<\/strong><\/p>\n<pre>apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: mongodb-configmap\r\ndata:\r\n  database_url: mongodb-service # This is the service name<\/pre>\n<p><strong>mongo-express.yaml<\/strong><\/p>\n<p>Note: Mongo Express has been updated since the video. 
As such, you need to specify image <code>mongo-express:0.54.0<\/code> instead of the most recent version.<\/p>\n<pre>apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  name: mongo-express\r\n  labels:\r\n    app: mongo-express\r\nspec: # Specification for the Deployment\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: mongo-express\r\n  template: # Blueprint for Pods\r\n    metadata:\r\n      labels:\r\n        app: mongo-express\r\n    spec: # Specification for the Pod\r\n      containers:\r\n      - name: mongo-express # Only 1 container in the Pod, but could have more!\r\n        image: mongo-express:0.54.0\r\n        ports:\r\n        - containerPort: 8081\r\n        env:\r\n        - name: ME_CONFIG_MONGODB_ADMINUSERNAME\r\n          valueFrom:\r\n            secretKeyRef:\r\n              name: mongodb-secret\r\n              key: mongo-root-username\r\n        - name: ME_CONFIG_MONGODB_ADMINPASSWORD\r\n          valueFrom:\r\n            secretKeyRef:\r\n              name: mongodb-secret\r\n              key: mongo-root-password\r\n        - name: ME_CONFIG_MONGODB_SERVER\r\n          valueFrom:\r\n            configMapKeyRef:\r\n              name: mongodb-configmap\r\n              key: database_url<\/pre>\n<h3>Apply the ConfigMap and the Deployment<\/h3>\n<pre>kubectl apply -f mongo-configmap.yaml\r\nkubectl apply -f mongo-express.yaml<\/pre>\n<p>Verify all Pods are running and get the ID of the Express Pod<\/p>\n<pre>kubectl get pods<\/pre>\n<p>Verify the database actually connected<\/p>\n<pre>kubectl logs &lt;mongo-express pod&gt;<\/pre>\n<h3>Create the External Service to connect to Mongo Express from a browser<\/h3>\n<p>We&#8217;ll add this to the end of the mongo-express.yaml file.<\/p>\n<p>Notes:<\/p>\n<ul>\n<li>Added <code>type: LoadBalancer<\/code> under the spec. Poor wording since all Services are load balancers (including internal).<\/li>\n<li>Added <code>nodePort: 30000<\/code> to the Port definition. 
This value must be between 30000 and 32767.<\/li>\n<\/ul>\n<pre>apiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  name: mongo-express-service\r\nspec:\r\n  selector:\r\n    app: mongo-express\r\n  type: LoadBalancer\r\n  ports:\r\n  - protocol: TCP\r\n    port: 8081\r\n    targetPort: 8081\r\n    nodePort: 30000<\/pre>\n<p>Apply the update<\/p>\n<pre>kubectl apply -f mongo-express.yaml<\/pre>\n<p>You can see the service is external by viewing the new service&#8217;s TYPE = LoadBalancer<\/p>\n<ul>\n<li>Type ClusterIP (aka Internal Service) is the default, so this does not need to be defined.<\/li>\n<\/ul>\n<pre>kubectl get service\r\n---\r\nNAME                  TYPE         CLUSTER-IP     EXTERNAL-IP PORT(S)        AGE\r\nkubernetes            ClusterIP    10.96.0.1      &lt;none&gt;      443\/TCP        22h\r\nmongo-express-service LoadBalancer 10.96.8.15     &lt;pending&gt;   8081:30000\/TCP 50s\r\nmongodb-service       ClusterIP    10.108.118.248 &lt;none&gt;      27017\/TCP      20h<\/pre>\n<p>Because this is Minikube, the external IP works a bit differently. 
If this was a standard K8s deployment, the IP address would be assigned.<\/p>\n<pre>minikube service mongo-express-service\r\n---\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 NAMESPACE \u2502 NAME                  \u2502 TARGET PORT \u2502 URL                       \u2502\r\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\r\n\u2502 default   \u2502 mongo-express-service \u2502 8081        \u2502 http:\/\/192.168.49.2:30000 \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518<\/pre>\n<p><a href=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4078\" 
src=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28.png\" alt=\"\" width=\"1124\" height=\"450\" srcset=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28.png 1124w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28-300x120.png 300w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28-1024x410.png 1024w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28-768x307.png 768w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-13-06-28-150x60.png 150w\" sizes=\"auto, (max-width: 1124px) 100vw, 1124px\" \/><\/a><\/p>\n<h1>K8s Namespaces<\/h1>\n<p>1:46:16<\/p>\n<ul>\n<li>Resources are organized into Namespaces.<\/li>\n<li>Think of Namespaces as virtual clusters inside a cluster.<\/li>\n<li>4 Namespaces created by default: `kubectl get namespace`<\/li>\n<\/ul>\n<pre>kubectl get namespace\r\n---\r\nNAME            STATUS AGE\r\ndefault         Active 23h\r\nkube-node-lease Active 23h\r\nkube-public     Active 23h\r\nkube-system     Active 23h<\/pre>\n<p>default:<\/p>\n<ul>\n<li>Where all resources go by default if you have not created a custom namespace.<\/li>\n<\/ul>\n<p>kube-node-lease:<\/p>\n<ul>\n<li>Contains information re: heartbeats of nodes.<\/li>\n<li>Each node has an associated lease object in the namespace.<\/li>\n<li>This is used to determine the availability of the node.<\/li>\n<\/ul>\n<p>kube-public:<\/p>\n<ul>\n<li>Publicly Accessible data.<\/li>\n<li>A configmap containing cluster info. <code>kubectl cluster-info<\/code><\/li>\n<\/ul>\n<p>kube-system:<\/p>\n<ul>\n<li>Internal Only! 
Do not use or modify anything in this namespace!<\/li>\n<li>These are system processes and the processes behind kubectl, etc.<\/li>\n<\/ul>\n<p><strong>Creating a namespace<\/strong><\/p>\n<pre>kubectl create namespace &lt;NAMESPACE_NAME&gt;\r\nkubectl get namespace<\/pre>\n<p><strong>Creating with a Configuration file<\/strong><\/p>\n<p>This is a better way: your config-file repository then maintains a history of the resources created in the cluster. (Note that the example below declares a component inside a namespace; the namespace itself can be declared with <code>kind: Namespace<\/code>.)<\/p>\n<pre>apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: my-configmap\r\n  namespace: my-namespace\r\ndata:\r\n  my_data: blahblahblah<\/pre>\n<h3>Why use Namespaces?<\/h3>\n<p>Without namespaces, everything gets clumped into the default namespace. This makes it hard, if not impossible, to sort resources.<\/p>\n<ul>\n<li>Better organization.\u00a0Group resources into their own namespaces, such as:\n<ul>\n<li>Database resources<\/li>\n<li>Elastic Search resources<\/li>\n<li>NginX &#8230;<\/li>\n<\/ul>\n<\/li>\n<li>Multiple Teams using the same cluster\n<ul>\n<li>Allows multiple teams to use the same deployment name with different resources, configurations, etc.\n<ul>\n<li>Prevents teams from overriding each other<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>Hosting Staging and Development in the same cluster\n<ul>\n<li>Both systems can share pre-defined resources, such as a database.<\/li>\n<li><a href=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4080\" src=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05.png\" alt=\"\" width=\"1434\" height=\"742\" srcset=\"https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05.png 1434w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05-300x155.png 300w, 
https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05-1024x530.png 1024w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05-768x397.png 768w, https:\/\/wiki.thomasandsofia.com\/wp-content\/uploads\/2025\/12\/Screenshot-from-2025-12-31-14-00-05-150x78.png 150w\" sizes=\"auto, (max-width: 1434px) 100vw, 1434px\" \/><\/a><\/li>\n<\/ul>\n<\/li>\n<li>Blue \/ Green Deployments\n<ul>\n<li>Two versions of Production &#8211; the Active (current) version and the Future (upgraded) version.<\/li>\n<li>Like above, they can share common resources.<\/li>\n<\/ul>\n<\/li>\n<li>Limit resources\n<ul>\n<li>Assign team access to ONLY their namespace! This prevents accidental overrides.<\/li>\n<li>Each team gets its own isolated environment.<\/li>\n<li>Resource Quotas allow you to limit the resources (CPU\/RAM\/Storage) per Namespace.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Characteristics of a Namespace<\/h3>\n<p>1:55:07<\/p>\n<p>Shared and Unshared Resources<\/p>\n<pre>kubectl api-resources --namespaced=true  # bound to a namespace\r\nkubectl api-resources --namespaced=false # available cluster wide<\/pre>\n<ul>\n<li>Most resources cannot be shared across namespaces.\n<ul>\n<li>ConfigMaps<\/li>\n<li>Secrets<\/li>\n<\/ul>\n<\/li>\n<li>Services CAN be shared!\n<ul>\n<li>When referencing a service in another namespace, append the namespace name to the end of the service name.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<pre>apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: my-local-configmap\r\ndata:\r\n  db_url: db-service.my-namespace # &lt;service_name&gt;.&lt;namespace&gt;<\/pre>\n<p>Components that do not live in Namespaces (these live globally in the cluster):<\/p>\n<ul>\n<li>Volumes<\/li>\n<li>Nodes<\/li>\n<\/ul>\n<h3>Creating components in a Namespace<\/h3>\n<p>By default, all components created will be in the `default` namespace. You must identify the namespace to add the components to.<\/p>\n<p>1. 
Identify the desired namespace in the create command<\/p>\n<pre>kubectl apply -f filename.yaml --namespace &lt;NAMESPACE&gt;<\/pre>\n<p>2. (Better) Identify the desired namespace in the configuration file.<\/p>\n<pre>apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: my-local-configmap\r\n  namespace: my-namespace\r\ndata:<\/pre>\n<p>Important! When locating a component in a non-default namespace, you MUST identify the desired namespace in the command. Otherwise, kubectl will only show results from the default namespace!<\/p>\n<pre>kubectl get configmap -n my-namespace<\/pre>\n<h3>Setting a non-default namespace<\/h3>\n<p>This can be done with a 3rd-party tool such as kubectx \/ kubens.<\/p>\n<p>kubectl can also do this directly:<\/p>\n<pre>kubectl config set-context --current --namespace=NAMESPACE<\/pre>\n<h1>K8s Ingress Explained<\/h1>\n<p>2:01:52<\/p>\n<p>Allows users to access the application without using an external service that requires the IP of the Node and the application port.<\/p>\n<ul>\n<li>Ingress forwards to the Internal Service, which then forwards to the Pod.<\/li>\n<\/ul>\n<p><strong>Internal Service definition<\/strong><\/p>\n<pre>apiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  name: myapp-internal-service\r\nspec:\r\n  selector:\r\n    app: mongo-express\r\n  ports:\r\n  - protocol: TCP\r\n    port: 8080\r\n    targetPort: 8080<\/pre>\n<p>Note: no <code>nodePort<\/code> or <code>type<\/code> definition here (type defaults to ClusterIP).<\/p>\n<p><strong>Ingress Definition<\/strong><\/p>\n<pre>apiVersion: networking.k8s.io\/v1beta1\r\nkind: Ingress\r\nmetadata: \r\n  name: my-app-ingress \r\nspec: \r\n  rules:\r\n  - host: my-app.com # Must be a valid domain name. 
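# Note: networking.k8s.io\/v1beta1 (used here, as in the video) was removed
# in K8s 1.22. With apiVersion networking.k8s.io\/v1, each path needs a
# pathType, and the backend nests under "service", roughly:
#   paths:
#   - path: \/
#     pathType: Prefix
#     backend:
#       service:
#         name: myapp-internal-service
#         port:
#           number: 8080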
# Map to the IP of the Ingress Node\r\n    http:\r\n      paths: # everything after the domain\r\n      - backend:\r\n          serviceName: myapp-internal-service # Internal Service name\r\n          servicePort: 8080 # Internal Service port<\/pre>\n<h3>Ingress Controller<\/h3>\n<p>2:07:40<\/p>\n<p>Limited notes going forward&#8230;<\/p>\n<ul>\n<li>Using Proxy servers<\/li>\n<li>Subdomains vs. Paths<\/li>\n<li>TLS Certificates<\/li>\n<\/ul>\n<h1>Helm &#8211; Package manager of K8s<\/h1>\n<p>2:24:17<\/p>\n<h3>Finding packages<\/h3>\n<pre>helm search repo &lt;KEYWORD&gt;  # Helm 3: search repos added locally\r\nhelm search hub &lt;KEYWORD&gt;   # Helm 3: search Artifact Hub<\/pre>\n<ul>\n<li>Do we have one for Virtana CO?\n<ul>\n<li>Might be a private registry<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Template Engine<\/h3>\n<p>Many deployments share similar configs. Define one template and use {{ }} placeholders (e.g. {{ .Values.name }}), which are filled in from values.yaml.<\/p>\n<p>2:30:55<\/p>\n<h3>Release Management<\/h3>\n<p>2:26:00<\/p>\n<h1>Volumes<\/h1>\n<p>2:38:08<\/p>\n<p>Rewatch this, although I don&#8217;t think this included much meat.<\/p>\n<h1>StatefulSets<\/h1>\n<p>2:58:38<\/p>\n<p>StatefulSets are a K8s component used specifically for stateful applications.<\/p>\n<ul>\n<li>Deployed using StatefulSet<\/li>\n<li>Depend on the most up-to-date information.<\/li>\n<li>Track their data using some persistent storage.<\/li>\n<li>Databases\n<ul>\n<li>MySQL<\/li>\n<li>MongoDB<\/li>\n<li>ElasticSearch<\/li>\n<\/ul>\n<\/li>\n<li>Any Application that stores data to keep track of its state.<\/li>\n<\/ul>\n<p>Stateless Applications<\/p>\n<ul>\n<li>Deployed using Deployments\n<ul>\n<li>Easy replication within the cluster<\/li>\n<\/ul>\n<\/li>\n<li>Do not keep records of previous interactions.<\/li>\n<li>They simply process code and are just pass-throughs for data updates.<\/li>\n<li>Each request is completely independent of any other.\n<ul>\n<li>All data required to process the request must be included with the request!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Differences between StatefulSets and 
Deployments<\/h3>\n<p>Similarities<\/p>\n<ul>\n<li>Identical Pod replications<\/li>\n<li>Configuration of storage<\/li>\n<\/ul>\n<p>Differences<\/p>\n<ul>\n<li>Deployments\n<ul>\n<li>Completely interchangeable. One dies, any other can just take over.<\/li>\n<li>Created in random orders with random hashes (IDs)<\/li>\n<li>One service that load balances to any of them.<\/li>\n<li>Deleted or scaled down in random orders<\/li>\n<\/ul>\n<\/li>\n<li>StatefulSets\n<ul>\n<li>Replica Pods are NOT identical!\n<ul>\n<li>They get their own identity on top of the Pod blueprint<\/li>\n<li>This is the difference between Deployments and StatefulSets<\/li>\n<\/ul>\n<\/li>\n<li>Cannot be created\/deleted at the same time<\/li>\n<li>Cannot be randomly addressed<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Pod Identities<\/p>\n<ul>\n<li>Sticky identity for each Pod.\n<ul>\n<li>mysql-0, mysql-1, mysql-2, etc.<\/li>\n<li>&lt;StatefulSet_Name&gt;-&lt;Ordinal&gt;\n<ul>\n<li>Ordinal starts from 0 and increments by 1 for each replica.<\/li>\n<\/ul>\n<\/li>\n<li>Each Pod gets its own DNS name (&lt;Pod_Name&gt;.&lt;Service_Name&gt;)\n<ul>\n<li>mysql-0.svc2, mysql-1.svc2, etc.<\/li>\n<\/ul>\n<\/li>\n<li>When a Pod is restarted:\n<ul>\n<li>It will get a new IP (Same as any other Pod)<\/li>\n<li>Pod ID will remain the same<\/li>\n<li>DNS Name will remain the same (but will point to the new IP)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>Created from same spec, but NOT interchangeable\n<ul>\n<li>Persistent ID remains across rescheduling.<\/li>\n<li>When a Pod dies and is replaced, it keeps its ID.<\/li>\n<\/ul>\n<\/li>\n<li>Pods can only replicate if the previous Pod is up and running!\n<ul>\n<li>Example: If on initial deployment, the Master fails to deploy, no replicas will be created!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Scaling Databases &#8211; Overview<\/p>\n<ul>\n<li>With a single instance, it handles both reads and writes.<\/li>\n<li>With multiple instances, only 1 (Master) can read and write. 
Others (Slaves\/Workers) can only read.<\/li>\n<li>Each Pod has access to different physical storage.\n<ul>\n<li>Each gets its own replica of the storage\n<ul>\n<li>ID-0: \/data\/vol\/pv-0 (Master)<\/li>\n<li>ID-1: \/data\/vol\/pv-1 (Worker)<\/li>\n<li>ID-2: \/data\/vol\/pv-2 (Worker)<\/li>\n<\/ul>\n<\/li>\n<li>For this to work, data must be continuously synchronized.\n<ul>\n<li>Workers must know about each change so they can remain up to date.<\/li>\n<li>Master updates data. Workers update their own data.<\/li>\n<\/ul>\n<\/li>\n<li>When adding a new replica (Pod) [ID-3]:\n<ul>\n<li>Pod must create its own storage<\/li>\n<li>Clone all existing data.\n<ul>\n<li>This happens from the previous Pod (ID-2)<\/li>\n<\/ul>\n<\/li>\n<li>Continues syncing from Master<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>Scaling Down\n<ul>\n<li>Always scale in reverse order.\n<ul>\n<li>The last Pod created (Highest Ordinal) is deleted first,<\/li>\n<li>Then the next oldest&#8230;<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>More about Storage<\/p>\n<ul>\n<li>Interesting Note: Temporary Data\n<ul>\n<li>It IS possible to do this without persistent storage and only using the storage available to the Pods.<\/li>\n<li>Replication and Synchronization will still work.<\/li>\n<li>Caveat: All data will be lost when all Pods die, or Cluster crashes, etc.!!\n<ul>\n<li>All Pods die at the same time&#8230;<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>With Persistent storage, all data will survive, even if all Pods die.\n<ul>\n<li>Storage lifecycle not controlled by Pod state.<\/li>\n<li>Each storage contains data about:\n<ul>\n<li>The Pod&#8217;s &#8216;State&#8217;<\/li>\n<li>Whether it is a Master or Slave<\/li>\n<\/ul>\n<\/li>\n<li>When a Pod is rebuilt, it gets re-attached to its original storage.\n<ul>\n<li>For example, the Master remains the Master.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>This is why StatefulSets should ONLY use Remote Storage\n<ul>\n<li>If a Node dies, so does the local 
storage!<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Replicating Stateful Apps<\/h3>\n<p>It&#8217;s Complicated!<\/p>\n<ul>\n<li>Kubernetes helps you, but there is A LOT of manual work required.\n<ul>\n<li>Configure cloning and data synchronization.<\/li>\n<li>Make Remote storage available.<\/li>\n<li>Manage storage and backups.<\/li>\n<\/ul>\n<\/li>\n<li>As such, K8s (or any containerized environment) is not well suited for Stateful Apps.<\/li>\n<li>K8s is amazing for Stateless apps.<\/li>\n<\/ul>\n<h1>K8s Services<\/h1>\n<p>3:13:42<\/p>\n<p>What is a service and why do we need it?<\/p>\n<ul>\n<li>Pods are ephemeral &#8211; destroyed frequently!\n<ul>\n<li>Restarts get new IP addresses.<\/li>\n<li>Addressing via hard-coded IP addresses would require constant software updates!<\/li>\n<\/ul>\n<\/li>\n<li>Services get a stable, static IP.\n<ul>\n<li>This keeps addressing stable, both internally and externally.<\/li>\n<\/ul>\n<\/li>\n<li>Services offer LoadBalancing automatically.<\/li>\n<\/ul>\n<p>4 types of Services<\/p>\n<ul>\n<li>ClusterIP\n<ul>\n<li>Default. No &#8216;type:&#8217; required<\/li>\n<li>Think of a Service as a Load Balancer, with a static IP and Port.<\/li>\n<li>Pod IP comes from the range available on the Node.\n<ul>\n<li>Node0: 10.2.0.x<\/li>\n<li>Node1: 10.2.1.x<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>NodePort\n<ul>\n<li>type: NodePort\n<ul>\n<li>Is an extension of ClusterIP type.<\/li>\n<\/ul>\n<\/li>\n<li>Not recommended. 
Insecure.\n<ul>\n<li>Better to use LoadBalancer<\/li>\n<\/ul>\n<\/li>\n<li>Creates a service that allows External traffic on a static Port to each worker node.\n<ul>\n<li>Instead of Ingress, the browser can access the service on a node via the Port defined.<\/li>\n<\/ul>\n<\/li>\n<li>NodePort Range: 30000 &#8211; 32767<\/li>\n<li>Must also define the `nodePort` value in the spec: ports: list.<\/li>\n<\/ul>\n<\/li>\n<li>Headless\n<ul>\n<li>Also uses the default type of ClusterIP<\/li>\n<li>How to talk to a specific Pod without going through the Service?\n<ul>\n<li>Stateful apps, for example: you may need to talk directly to the DB Master to make a change.<\/li>\n<li>`spec: {clusterIP: None}` will return the Pod IPs when making a DNS lookup<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li>LoadBalancer\n<ul>\n<li>type: LoadBalancer<\/li>\n<li>Is an extension of NodePort type<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Definitions Cluster A cluster is a group of Nodes\u00a0that will host one or more K8s Deployments. ConfigMap URL endpoints for services are defined here. This allows them to change without having to reconfigure applications that call them directly. 
Pods then call the ConfigMap to understand where to send the data without having to rebuild the ..<\/p>\n<div class=\"clear-fix\"><\/div>\n<p><a href=\"https:\/\/wiki.thomasandsofia.com\/?p=4053\" title=\"read more...\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[86],"tags":[],"class_list":["post-4053","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/posts\/4053","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4053"}],"version-history":[{"count":29,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/posts\/4053\/revisions"}],"predecessor-version":[{"id":4086,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=\/wp\/v2\/posts\/4053\/revisions\/4086"}],"wp:attachment":[{"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4053"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4053"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wiki.thomasandsofia.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4053"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}