<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[AshiqPradeep.com]]></title><description><![CDATA[AshiqPradeep Solutions]]></description><link>https://learn.ashiqpradeep.com/</link><image><url>https://learn.ashiqpradeep.com/favicon.png</url><title>AshiqPradeep.com</title><link>https://learn.ashiqpradeep.com/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Wed, 06 May 2026 11:37:18 GMT</lastBuildDate><atom:link href="https://learn.ashiqpradeep.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Terraformer Magic: Convert Your AWS Setup into Terraform Code in Minutes]]></title><description><![CDATA[<p>Manually created infrastructure? No problem. With <strong>Terraformer</strong>, you can generate Terraform code for existing AWS resources in just a few commands. Let&#x2019;s see how to set it up on an Amazon Linux EC2 instance.</p><p>Install Terraform</p><pre><code>sudo yum install unzip -y
wget https://releases.hashicorp.com/terraform/1.</code></pre>]]></description><link>https://learn.ashiqpradeep.com/terraformer-magic-convert-your-aws-setup-into-terraform-code-in-minutes/</link><guid isPermaLink="false">6842ebd0c4958c58bb8ab515</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Fri, 06 Jun 2025 13:36:57 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/06/Terraformer-Magic-Convert-Your-AWS-Setup-into-Terraform-Code-in-Minutes.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/06/Terraformer-Magic-Convert-Your-AWS-Setup-into-Terraform-Code-in-Minutes.png" alt="Terraformer Magic: Convert Your AWS Setup into Terraform Code in Minutes"><p>Manually created infrastructure? No problem. With <strong>Terraformer</strong>, you can generate Terraform code for existing AWS resources in just a few commands. Let&#x2019;s see how to set it up on an Amazon Linux EC2 instance.</p><p>Install Terraform</p><pre><code>sudo yum install unzip -y
wget https://releases.hashicorp.com/terraform/1.8.4/terraform_1.8.4_linux_amd64.zip
unzip terraform_1.8.4_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version
</code></pre><p>Install Terraformer</p><pre><code>wget https://github.com/GoogleCloudPlatform/terraformer/releases/download/0.8.24/terraformer-all-linux-amd64 -O terraformer
chmod +x terraformer
sudo mv terraformer /usr/local/bin/
</code></pre><p>Import EC2 Instances with One Command</p><pre><code>terraformer import aws --resources=ec2_instance --regions=us-east-1</code></pre><p>This pulls all EC2 instances from us-east-1 and generates Terraform files for you.</p><p>Output Structure</p><pre><code>generated/
&#x2514;&#x2500;&#x2500; aws/
    &#x2514;&#x2500;&#x2500; ec2_instance/
        &#x251C;&#x2500;&#x2500; main.tf
        &#x251C;&#x2500;&#x2500; outputs.tf
        &#x2514;&#x2500;&#x2500; terraform.tfstate</code></pre><p><strong>Consolidate All Resources into a Single Terraform Project</strong></p><p>By default, <strong>Terraformer</strong> creates separate folders and state files per resource type. To manage everything cleanly under one project, follow these steps:</p><p>Make a new folder for your Terraform project and move all the generated Terraform files into it:</p><pre><code>mkdir terraform
cd terraform
cp ../generated/aws/*/*.tf .
terraform init</code></pre><p>Re-import each resource into the new project, repeating the command with the correct type, name, and identifier for each one:</p><pre><code>terraform import &lt;resource_type&gt;.&lt;resource_name&gt; &lt;resource_identifier&gt;

</code></pre><p>Final Result</p><p>All resources are now in a single <code>terraform</code> folder with one <code>terraform.tfstate</code> file &#x2014; making it easier to manage, version, and collaborate with your team.</p>]]></content:encoded></item><item><title><![CDATA[Modern CI with Podman: Build & Push Images to Docker Hub via GitHub Actions]]></title><description><![CDATA[<p>Use <strong>Podman</strong>, a <strong>lightweight and secure Docker alternative</strong>, to build and push images in GitHub Actions directly to <strong>Docker Hub</strong>.</p><p><strong>GitHub Actions Workflow</strong></p>
<p>Create <code>.github/workflows/podman-ci.yml</code>:</p><pre><code>name: Build &amp; Push Podman Image

on:
  push:
    branches: [ main ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name:</code></pre>]]></description><link>https://learn.ashiqpradeep.com/modern-ci-with-podman-build-push-images-to-docker-hub-via-github-actions/</link><guid isPermaLink="false">67ef601ec4958c58bb8ab4f6</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Fri, 04 Apr 2025 04:34:24 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/04/Modern-CI-with-Podman.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/04/Modern-CI-with-Podman.png" alt="Modern CI with Podman: Build &amp; Push Images to Docker Hub via GitHub Actions"><p>Use <strong>Podman</strong>, a <strong>lightweight and secure Docker alternative</strong>, to build and push images in GitHub Actions directly to <strong>Docker Hub</strong>.</p><p><strong>GitHub Actions Workflow</strong></p>
<p>Create <code>.github/workflows/podman-ci.yml</code>:</p><pre><code>name: Build &amp; Push Podman Image

on:
  push:
    branches: [ main ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install Podman
        run: sudo apt update &amp;&amp; sudo apt install -y podman

      - name: Login to Docker Hub
        run: |
          echo &quot;${{ secrets.DOCKER_PASSWORD }}&quot; | podman login -u &quot;${{ secrets.DOCKER_USERNAME }}&quot; --password-stdin docker.io

      - name: Build &amp; Push Image
        run: |
          podman build -t docker.io/yourdockerhubusername/podman-app:latest .
          podman push docker.io/yourdockerhubusername/podman-app:latest</code></pre><p><strong>GitHub Secrets Required</strong></p>
<p>Set these in your repo:</p><ul><li><code>DOCKER_USERNAME</code></li><li><code>DOCKER_PASSWORD</code></li></ul><p><strong>Dockerfile</strong></p>
<pre><code>FROM nginx:alpine
COPY webcontent/ /usr/share/nginx/html
EXPOSE 80
CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]</code></pre><p><strong>Sample index.html</strong></p>
<p>Place this in <code>webcontent/index.html</code></p><pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;&lt;title&gt;Podman App&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Hello from Podman! &#x1F680;&lt;/h1&gt;
&lt;/body&gt;
&lt;/html&gt;</code></pre><p><strong>Why Podman?</strong></p>
<ul>
<li>No Daemon Needed</li>
<li>Secure &amp; Rootless</li>
<li>Docker-Compatible CLI</li>
<li>Lightweight &amp; Fast</li>
</ul>
<p>Once pushed, your GitHub Action will build the image and push it to Docker Hub. </p>]]></content:encoded></item><item><title><![CDATA[How to Use Docker Hub with Podman – A Docker Alternative]]></title><description><![CDATA[<p>If you&apos;re exploring a <strong>Docker alternative, Podman</strong> is a powerful and secure choice. It&#x2019;s a <strong>daemonless container engine</strong> that fully supports Docker image formats and registries like Docker Hub.</p>
<p>This guide shows how to build an image with Podman, tag it properly, and push/pull it</p>]]></description><link>https://learn.ashiqpradeep.com/how-to-use-docker-hub-with-podman-a-docker-alternative/</link><guid isPermaLink="false">67ef521ac4958c58bb8ab4d0</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Fri, 04 Apr 2025 03:39:14 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/04/How-to-Use-Docker-Hub-with-Podman.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/04/How-to-Use-Docker-Hub-with-Podman.png" alt="How to Use Docker Hub with Podman &#x2013; A Docker Alternative"><p>If you&apos;re exploring a <strong>Docker alternative, Podman</strong> is a powerful and secure choice. It&#x2019;s a <strong>daemonless container engine</strong> that fully supports Docker image formats and registries like Docker Hub.</p>
<p>This guide shows how to build an image with Podman, tag it properly, and push/pull it from Docker Hub.</p>
<p>Install Podman on Ubuntu</p>
<pre><code>Run the following commands:

sudo apt update
sudo apt -y install podman</code></pre><pre><code>Check the version to confirm installation:

podman --version</code></pre><pre><code>Sample Dockerfile

FROM alpine:latest

# Install bash (optional but useful)
RUN apk add --no-cache bash

# Keep the container running with an infinite loop
CMD [&quot;bash&quot;, &quot;-c&quot;, &quot;while true; do echo &apos;Podman is running...&apos;; sleep 5; done&quot;]</code></pre><p>Step 1: Login to Docker Hub</p>
<pre><code>podman login docker.io

Enter your Docker Hub username and password/token when prompted.</code></pre><pre><code>Step 2: Build the Image Using Podman

podman build -t myimage:latest .</code></pre><pre><code>Step 3: Tag the Image for Docker Hub

podman tag myimage docker.io/yourdockerhubusername/myimage:latest</code></pre><pre><code>Step 4: Push the Image to Docker Hub

podman push docker.io/yourdockerhubusername/myimage:latest</code></pre><pre><code>Step 5: Pull the Image from Docker Hub

podman pull docker.io/yourdockerhubusername/myimage:latest</code></pre><pre><code>Run it:

podman run docker.io/yourdockerhubusername/myimage:latest
</code></pre>]]></content:encoded></item><item><title><![CDATA[Create and Host Your Own Helm Charts with GitHub Pages!]]></title><description><![CDATA[<p>Helm is a powerful package manager for Kubernetes, making it easy to deploy applications. In this guide, we&apos;ll show you how to create a custom Helm repository, host it on GitHub Pages, and make it publicly accessible.</p>
<p><strong>What You Need</strong></p>
<p>Helm<br>
Git<br>
A GitHub account<br>
A Kubernetes cluster</p>]]></description><link>https://learn.ashiqpradeep.com/create-and-host-your-own-helm-charts-with-github-pages/</link><guid isPermaLink="false">67eccf81c4958c58bb8ab48e</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Wed, 02 Apr 2025 06:00:41 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/04/Create-and-Host-Your-Own-Helm-Charts-with-GitHub-Pages-.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/04/Create-and-Host-Your-Own-Helm-Charts-with-GitHub-Pages-.png" alt="Create and Host Your Own Helm Charts with GitHub Pages!"><p>Helm is a powerful package manager for Kubernetes, making it easy to deploy applications. In this guide, we&apos;ll show you how to create a custom Helm repository, host it on GitHub Pages, and make it publicly accessible.</p>
<p><strong>What You Need</strong></p>
<p>Helm<br>
Git<br>
A GitHub account<br>
A Kubernetes cluster (MicroK8s, Minikube, or cloud-based)</p>
<p><strong>Step 1: Create a Helm Chart</strong></p>
<pre><code>Run the following command to create a Helm chart:

helm create my-first-project</code></pre><p>This creates a <strong>my-first-project</strong> directory with a sample Helm chart. You can modify the default configurations, add new templates, or update the values.yaml file to fit your application&apos;s needs.</p>
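<p>For reference, the generated chart follows Helm's standard layout (abridged; a freshly created chart contains a few more template files):</p>

```
my-first-project/
├── Chart.yaml          # chart metadata: name, version, description
├── values.yaml         # default configuration values
├── charts/             # chart dependencies (empty by default)
└── templates/          # Kubernetes manifest templates
    ├── deployment.yaml
    ├── service.yaml
    └── _helpers.tpl
```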
<p><strong>Step 2: Package the Helm Chart</strong></p>
<pre><code>To prepare your chart for publishing, package it:

helm package my-first-project</code></pre><p>This creates a .tgz file, e.g., my-first-project-0.1.0.tgz.</p>
<p><strong>Step 3: Create a Helm Repository Index</strong></p>
<pre><code>Generate an index file for your repository:

helm repo index .</code></pre><p>This creates index.yaml, which Helm uses to find charts.</p>
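<p>For reference, the generated index.yaml looks roughly like this (fields abbreviated; the digest and timestamps here are illustrative):</p>

```yaml
apiVersion: v1
entries:
  my-first-project:
    - apiVersion: v2
      appVersion: "1.16.0"
      created: "2025-04-02T06:00:00Z"   # illustrative timestamp
      digest: 3f1c0a...                 # sha256 of the .tgz (truncated here)
      name: my-first-project
      urls:
        - my-first-project-0.1.0.tgz    # resolved relative to the repo URL
      version: 0.1.0
generated: "2025-04-02T06:00:00Z"
```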
<p><strong>Step 4: Set Up a GitHub Repository</strong></p>
<pre><code>Go to GitHub and create a public repository (e.g., helm).

Clone it to your local machine:
git clone https://github.com/YOUR_GITHUB_USERNAME/helm.git
cd helm

Move your chart package and index file into the repository:
cp ../my-first-project-0.1.0.tgz .
cp ../index.yaml .

Commit and push the files:
git add .
git commit -m &quot;Initial commit&quot;
git push origin main</code></pre><p><strong>Step 5: Enable GitHub Pages</strong></p>
<p>Open your GitHub repository.<br>
Click Settings &gt; Pages.<br>
Under Source, select Deploy from a branch.<br>
Under Branch, select the branch where your files are stored (main or gh-pages) and the folder (/ (root) or /docs).<br>
Save the settings.<br>
Copy your repository URL (e.g., https://YOUR_GITHUB_USERNAME.github.io/helm/).</p>
<p><strong>Step 6: Add the Helm Repository</strong></p>
<pre><code>Tell Helm to use your new repository:
helm repo add my-repo https://YOUR_GITHUB_USERNAME.github.io/helm/

Check if it&apos;s added:
helm repo list</code></pre><p><strong>Step 7: Install the Helm Chart</strong></p>
<pre><code>Deploy your chart from your custom repository:
helm install myrelease my-repo/my-first-project</code></pre><p><strong>Conclusion</strong></p>
<p>That&apos;s it! You&apos;ve successfully set up a public Helm repository using GitHub Pages. Now, anyone can use your charts with just a URL. Happy Helm charting!</p>
]]></content:encoded></item><item><title><![CDATA[Horizontal Pod Autoscaler (HPA): Scaling Workloads in Kubernetes]]></title><description><![CDATA[<p>Kubernetes provides a built-in <strong>Horizontal Pod Autoscaler (HPA)</strong> to manage workload scaling dynamically. HPA adjusts the number of pods based on CPU, memory, or custom metrics, optimizing performance and cost.</p><p><strong>How HPA Works</strong></p>
<p>Monitors resource utilization via the Kubernetes Metrics Server.<br>
Adjusts pod counts based on predefined thresholds.<br>
Continuously evaluates</p>]]></description><link>https://learn.ashiqpradeep.com/horizontal-pod-autoscaler-hpa-scaling-workloads-in-kubernetes/</link><guid isPermaLink="false">67e97224c4958c58bb8ab46d</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sun, 30 Mar 2025 16:44:23 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Horizontal-Pod-Autoscaler--HPA--Scaling-Workloads-in-Kubernetes--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Horizontal-Pod-Autoscaler--HPA--Scaling-Workloads-in-Kubernetes--1-.png" alt="Horizontal Pod Autoscaler (HPA): Scaling Workloads in Kubernetes"><p>Kubernetes provides a built-in <strong>Horizontal Pod Autoscaler (HPA)</strong> to manage workload scaling dynamically. HPA adjusts the number of pods based on CPU, memory, or custom metrics, optimizing performance and cost.</p><p><strong>How HPA Works</strong></p>
<p>Monitors resource utilization via the Kubernetes Metrics Server.<br>
Adjusts pod counts based on predefined thresholds.<br>
Continuously evaluates demand and scales accordingly.</p>
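<p>The scaling decision follows the formula from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas &#xD7; currentMetricValue / desiredMetricValue). A quick sketch of the arithmetic (values are illustrative):</p>

```shell
# 2 replicas averaging 100% of requested CPU, with a 50% target:
current_replicas=2
current_cpu=100   # observed average utilization (% of requests)
target_cpu=50     # averageUtilization from the HPA spec
# integer ceiling of current_replicas * current_cpu / target_cpu
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"   # prints 4
```

<p>So two pods running at 100% of their CPU request against a 50% target scale out to four pods, and scaling continues to track the target as utilization changes.</p>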
<p><strong>Setting Up HPA</strong></p>
<p><strong>Install Metrics Server</strong></p>
<pre><code>HPA requires the Metrics Server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</code></pre><p><strong>Deploy a Sample Application</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: alpine
        command: [&quot;/bin/sh&quot;]
        args: [&quot;-c&quot;, &quot;apk add --no-cache stress-ng &amp;&amp; stress-ng --cpu 1 --timeout 600s&quot;]
        resources:
          requests:
            cpu: &quot;200m&quot;
            memory: &quot;256Mi&quot;
          limits:
            cpu: &quot;500m&quot;
            memory: &quot;512Mi&quot;</code></pre><p><strong>Configure HPA</strong></p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50</code></pre><p>Apply the configuration:</p><pre><code>kubectl apply -f my-app-hpa.yaml</code></pre><p><strong>Monitoring HPA</strong></p><p>Watch HPA activity:</p><pre><code>kubectl get hpa --watch</code></pre><p>Monitor pod scaling:</p><pre><code>
kubectl get pods -w</code></pre>]]></content:encoded></item><item><title><![CDATA[Deploy Applications with ArgoCD on a Kubernetes Cluster | GitOps in Kubernetes]]></title><description><![CDATA[<p>Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. In this guide, we will install Argo CD on a Kubernetes cluster, configure it to sync with a Git repository, and deploy applications automatically.</p><p><strong>Prerequisites</strong></p>
<p>Ensure you have the following installed on your system:<br>
Kubernetes cluster<br>
kubectl command-line tool<br></p>]]></description><link>https://learn.ashiqpradeep.com/deploy-applications-with-argocd-on-a-kubernetes-cluster-gitops-in-kubernetes/</link><guid isPermaLink="false">67e8ebdbc4958c58bb8ab41a</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sun, 30 Mar 2025 07:11:40 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Deploy-Applications-with-ArgoCD-on-a-Kubernetes-Cluster--GitOps-in-Kubernetes.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Deploy-Applications-with-ArgoCD-on-a-Kubernetes-Cluster--GitOps-in-Kubernetes.png" alt="Deploy Applications with ArgoCD on a Kubernetes Cluster | GitOps in Kubernetes"><p>Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. In this guide, we will install Argo CD on a Kubernetes cluster, configure it to sync with a Git repository, and deploy applications automatically.</p><p><strong>Prerequisites</strong></p>
<p>Ensure you have the following installed on your system:<br>
Kubernetes cluster<br>
kubectl command-line tool<br>
Internet access to download Argo CD</p>
<p><strong>Step 1: Install Argo CD</strong></p>
<pre><code>First, create a namespace for Argo CD:

kubectl create namespace argocd</code></pre><pre><code>Then, apply the Argo CD installation manifest:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></pre><p><strong>Step 2: Install Argo CD CLI</strong></p>
<pre><code>Download and install the Argo CD CLI:

curl -sLO https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd-linux-amd64
sudo mv argocd-linux-amd64 /usr/local/bin/argocd</code></pre><p><strong>Step 3: Verify Argo CD Installation</strong></p>
<pre><code>Check if the Argo CD pods are running:

kubectl get pods -n argocd

List the services:

kubectl get svc -n argocd</code></pre><p><strong>Step 4: Expose Argo CD Server</strong></p>
<pre><code>Forward the Argo CD server port to access it from your local machine:

kubectl port-forward svc/argocd-server -n argocd 8080:443 --address 0.0.0.0 &amp;</code></pre><p><strong>Step 5: Retrieve the Initial Admin Password</strong></p>
<pre><code>Get the initial admin password and decode it:

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo</code></pre><p><strong>Step 6: Login to Argo CD</strong></p>
<pre><code>Authenticate with Argo CD using the admin credentials:

argocd login localhost:8080 --username admin --password &lt;INITIAL_PASSWORD&gt; --insecure</code></pre><p><strong>Step 7: Connect Argo CD to Your Git Repository</strong></p>
<pre><code>Add your Git repository to Argo CD:

argocd repo add https://github.com/pradeep-kudukkil/kubernetes.git \
  --username your-user-name \
  --password &lt;YOUR_GITHUB_ACCESS_TOKEN&gt;</code></pre><pre><code>Verify the repository connection:

argocd repo list</code></pre><p><strong>Step 8: Create and Deploy an Application</strong></p>
<pre><code>Create an application in Argo CD:

argocd app create my-app \
  --repo https://github.com/pradeep-kudukkil/kubernetes.git \
  --path workload \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default</code></pre><pre><code>Enable automatic synchronization:

argocd app set my-app --sync-policy automated --auto-prune --self-heal</code></pre><pre><code>Verify deployment:

kubectl get pods
argocd app get my-app</code></pre><p><strong>Step 9: Delete an Application (If Needed)</strong></p>
<pre><code>To remove an application, run:

argocd app delete my-app</code></pre>]]></content:encoded></item><item><title><![CDATA[Track & Audit AWS CLI Commands with Azure DevOps]]></title><description><![CDATA[<p><strong>Who ran it?</strong> Captured automatically in the logs.<br>
<strong>Command Status?</strong> Success / Failure &#x2013; Recorded instantly.</p>
<p>This Azure DevOps pipeline dynamically executes AWS CLI commands and logs who triggered the command and its status in an S3-hosted CSV file. With automated logging, you gain full visibility, transparency, and auditability in your</p>]]></description><link>https://learn.ashiqpradeep.com/track-audit-aws-cli-commands-with-azure-devops/</link><guid isPermaLink="false">67d129e6c4958c58bb8ab406</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Wed, 12 Mar 2025 06:31:59 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/track---Audit-AWS-CLI-Commands-with-Azure-DevOps.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/track---Audit-AWS-CLI-Commands-with-Azure-DevOps.png" alt="Track &amp; Audit AWS CLI Commands with Azure DevOps"><p><strong>Who ran it?</strong> Captured automatically in the logs.<br>
<strong>Command Status?</strong> Success / Failure &#x2013; Recorded instantly.</p>
<p>This Azure DevOps pipeline dynamically executes AWS CLI commands and logs who triggered the command and its status in an S3-hosted CSV file. With automated logging, you gain full visibility, transparency, and auditability in your cloud operations.</p>
<p>Every run is tracked&#x2014;no more guessing.</p>
<p>YouTube link: <a href="https://youtu.be/mLIYsHW3qNE">https://youtu.be/mLIYsHW3qNE</a></p>
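<p>Each run appends one row to the CSV log. A standalone sketch of how the pipeline script below assembles that row (values are illustrative):</p>

```shell
# Same quoting scheme as the pipeline: executor and command are wrapped in
# double quotes so embedded spaces or commas stay within one CSV field.
EXECUTOR='"Jane Doe"'                       # from $(Build.QueuedBy) in the pipeline
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")  # UTC, ISO-8601
CLI_COMMAND="aws s3 ls"                     # from the awsCliCommand parameter
RESULT="success"                            # or "fail", based on the exit code
LOG_ENTRY="$EXECUTOR,$TIMESTAMP,\"$CLI_COMMAND\",$RESULT"
echo "$LOG_ENTRY"   # e.g. "Jane Doe",2025-03-12T06:31:59Z,"aws s3 ls",success
```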
<pre><code>trigger: none

pool:
  vmImage: &apos;ubuntu-latest&apos;

variables:
  AWS_REGION: &apos;ap-south-1&apos;
  S3_BUCKET: &apos;aws-cli-tracker&apos;
  LOG_FILE: &apos;cli-execution.csv&apos;

parameters:
  - name: awsCliCommand
    displayName: &quot;Enter AWS CLI Command&quot;
    type: string
    default: &quot;aws s3 ls&quot;

steps:
  - task: AWSShellScript@1
    inputs:
      awsCredentials: &apos;Pradeep-AWS&apos;
      regionName: $(AWS_REGION)
      scriptType: &apos;inline&apos;
      inlineScript: |
        # Capture execution details
        if [ -n &quot;$(Build.QueuedBy)&quot; ]; then
          EXECUTOR=&quot;\&quot;$(Build.QueuedBy)\&quot;&quot;  # Using QueuedBy and quoting to handle spaces
        else
          EXECUTOR=&quot;\&quot;Unknown\&quot;&quot;  # Fallback if no value
        fi

        echo &quot;QueuedBy: $(Build.QueuedBy)&quot;
        
        TIMESTAMP=$(date -u +&quot;%Y-%m-%dT%H:%M:%SZ&quot;)
        CLI_COMMAND=&quot;${{ parameters.awsCliCommand }}&quot;
        
        # Run command and capture result
        OUTPUT_FILE=&quot;aws_output.txt&quot;
        ERROR_FILE=&quot;aws_error.txt&quot;
        $CLI_COMMAND &gt; $OUTPUT_FILE 2&gt; $ERROR_FILE
        if [ $? -eq 0 ]; then
          RESULT=&quot;success&quot;
          cat $OUTPUT_FILE
        else
          RESULT=&quot;fail&quot;
          cat $ERROR_FILE
        fi

        # Prepare log entry in CSV format
        LOG_ENTRY=&quot;$EXECUTOR,$TIMESTAMP,\&quot;$CLI_COMMAND\&quot;,$RESULT&quot;

        # Download existing CSV log file from S3 (if exists)
        aws s3 cp s3://$S3_BUCKET/$LOG_FILE $LOG_FILE || touch $LOG_FILE

        # Append new row to the CSV log
        echo &quot;$LOG_ENTRY&quot; &gt;&gt; $LOG_FILE

        # Upload updated CSV back to S3
        aws s3 cp $LOG_FILE s3://$S3_BUCKET/$LOG_FILE
    displayName: &apos;Run AWS CLI and Upload Logs to S3&apos;</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes Network Policy: Allow DNS Resolution for Both Internal and External Services, Block All Other Traffic]]></title><description><![CDATA[<p>Allowing DNS resolution for both internal and external services while blocking all other traffic helps secure your Kubernetes environment. This setup ensures that pods can resolve domain names like google.com for external services and kubernetes.default.svc.cluster.local for internal services, but no HTTP, HTTPS, or other non-DNS</p>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-network-policy-allow-dns-resolution-for-both-internal-and-external-services-block-all-other-traffic/</link><guid isPermaLink="false">67cbeeb4c4958c58bb8ab3e1</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 08 Mar 2025 07:36:32 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Allow-DNS-Resolution-for-Both-Internal-and-External-Services--Block-All-Other-Traffic.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Allow-DNS-Resolution-for-Both-Internal-and-External-Services--Block-All-Other-Traffic.png" alt="Kubernetes Network Policy: Allow DNS Resolution for Both Internal and External Services, Block All Other Traffic"><p>Allowing DNS resolution for both internal and external services while blocking all other traffic helps secure your Kubernetes environment. This setup ensures that pods can resolve domain names like google.com for external services and kubernetes.default.svc.cluster.local for internal services, but no HTTP, HTTPS, or other non-DNS traffic is allowed.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
    - ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53</code></pre><p><strong>Explanation:</strong><br>
podSelector: Selects all pods in the default namespace.<br>
policyTypes: Specifies that the policy applies only to egress traffic.<br>
egress: Allows traffic to port 53 (DNS) for both UDP and TCP protocols. This ensures DNS resolution for both internal and external domains.</p>
<p>Testing the Policy:</p><pre><code>Create a test pod:
kubectl run dns-tester --image=busybox --restart=Never -- sleep 3600

Access the dns-tester pod and test external DNS resolution (e.g., google.com):
kubectl exec -it dns-tester -- sh
nslookup google.com

DNS resolution for external domains (e.g., google.com) should succeed 

Test HTTP/HTTPS (Should Fail):
wget http://google.com

Accessing websites over HTTP or HTTPS should fail, as only DNS resolution is allowed, and all other traffic is blocked.</code></pre><p>Deleting Pod and Network Policy:</p><pre><code>kubectl delete pod dns-tester
kubectl delete networkpolicy allow-dns-only -n default
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes Network Policy: Control Outbound Traffic to Specific APIs]]></title><description><![CDATA[<p>Real-World Use Case: Controlling outbound traffic from your client pods by limiting their connections to specific APIs ensures that your pods are not sending data to unauthorized or malicious destinations.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: client
  policyTypes:
  - Egress</code></pre>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-network-policy-control-outbound-traffic-to-specific-apis/</link><guid isPermaLink="false">67cbebb8c4958c58bb8ab3be</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 08 Mar 2025 07:13:16 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Control-Outbound-Traffic-to-Specific-APIs.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Control-Outbound-Traffic-to-Specific-APIs.png" alt="Kubernetes Network Policy: Control Outbound Traffic to Specific APIs"><p>Real-World Use Case: Controlling outbound traffic from your client pods by limiting their connections to specific APIs ensures that your pods are not sending data to unauthorized or malicious destinations.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: client
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: &lt;nginx-pod-ip&gt;/32  # IP of the nginx pod - create the pods below first, then fill this in
    ports:
    - protocol: TCP
      port: 80
</code></pre><p>Testing the Policy:</p><pre><code>Deploy a client pod:
kubectl run client --image=busybox --restart=Never --labels=app=client -- sleep 3600

Deploy an nginx pod (Kubernetes pod names must be lowercase):
kubectl run nginx-pod --image=nginx

Note its IP, then access it from the client pod:
kubectl get pod nginx-pod -o wide
kubectl exec -it client -- sh
wget -qO- http://&lt;nginx-pod-ip&gt;

Expected Result: The request should succeed, and any other outbound connections should fail.</code></pre><p>Deleting Pods and Network Policy:</p><pre><code>kubectl delete pod client
kubectl delete pod nginx-pod
kubectl delete networkpolicy allow-external-api -n default</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes Network Policy Allow Only HTTP Traffic to Your Web Application]]></title><description><![CDATA[<p><strong>Real-World Use Case:</strong> <strong>Restricting inbound traffic</strong> to your web application by allowing only HTTP (port 80) ensures that no other ports are exposed, reducing the attack surface for potential malicious activities.</p><pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  -</code></pre>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-network-policy-allow-only-http-traffic-to-your-web-application/</link><guid isPermaLink="false">67cbea6fc4958c58bb8ab3a6</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 08 Mar 2025 07:02:58 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Untitled-design--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Untitled-design--1-.png" alt="Kubernetes Network Policy Allow Only HTTP Traffic to Your Web Application"><p><strong>Real-World Use Case:</strong> <strong>Restricting inbound traffic</strong> to your web application by allowing only HTTP (port 80) ensures that no other ports are exposed, reducing the attack surface for potential malicious activities.</p><pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
</code></pre><p>Testing the Policy:</p><pre><code>Deploy a web pod:
kubectl run web --image=nginx --labels=app=web --port=80 --restart=Never

Deploy a test client pod:
kubectl run test-client --image=busybox --restart=Never -- sleep 3600

Get the web pod&apos;s IP, then access it from the test-client pod:
kubectl get pod web -o wide
kubectl exec -it test-client -- sh
wget -qO- http://&lt;web-pod-ip&gt;

Expected Result: The request should succeed because port 80 is open.
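
Optional extra check (hypothetical names; assumes your CNI enforces NetworkPolicy): a pod in a different namespace should be blocked, because the empty podSelector in the ingress rule only matches pods in the default namespace.
kubectl create ns outside
kubectl run outside-client -n outside --image=busybox --restart=Never -- sleep 3600
kubectl exec -n outside -it outside-client -- wget -qO- -T 3 http://web-ip

Expected Result: The request should time out, since traffic from other namespaces is not allowed. Clean up with kubectl delete ns outside when finished.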
</code></pre><p>Deleting Pods and Network Policy:</p><pre><code>kubectl delete pod web
kubectl delete pod test-client
kubectl delete networkpolicy allow-http -n default</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes Network Policy: Limit Traffic Between Frontend and Backend Pods]]></title><description><![CDATA[<p>Real-World Use Case: Restricting traffic to backend services by only allowing communication from the frontend namespace helps ensure a secure separation of duties, where only trusted services can interact with sensitive backend applications.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend</code></pre>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-network-policy-limit-traffic-between-frontend-and-backend-pods/</link><guid isPermaLink="false">67cbe8ccc4958c58bb8ab38d</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 08 Mar 2025 06:54:23 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Limit-Traffic-Between-Frontend-and-Backend-Pods.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Limit-Traffic-Between-Frontend-and-Backend-Pods.png" alt="Kubernetes Network Policy: Limit Traffic Between Frontend and Backend Pods"><p>Real-World Use Case: Restricting traffic to backend services by only allowing communication from the frontend namespace helps ensure a secure separation of duties, where only trusted services can interact with sensitive backend applications.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
</code></pre><p>Testing the Policy:</p><pre><code>Create pods in both namespaces:

kubectl create ns frontend
kubectl create ns backend
kubectl label namespaces frontend name=frontend
kubectl run backend-pod --image=busybox --restart=Never --labels=app=backend -n backend -- sleep 3600
kubectl run frontend-pod --image=busybox --restart=Never -n frontend -- sleep 3600
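
Find the IP of the backend pod (used as backend-ip below):
kubectl get pod backend-pod -n backend -o wide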

Open a shell in the frontend pod and try to reach the backend pod:

kubectl exec -n frontend -it frontend-pod -- sh
ping backend-ip

Expected Result: The ping should succeed since the frontend pod is allowed to connect to the backend pod.</code></pre><p>Deleting Pods, Namespaces, and Network Policy:</p><pre><code>kubectl delete pod frontend-pod -n frontend
kubectl delete pod backend-pod -n backend
kubectl delete ns frontend
kubectl delete ns backend
kubectl delete networkpolicy allow-frontend -n backend</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes Network Policy: Deny All Traffic for Better Cluster Isolation]]></title><description><![CDATA[<p>Real-World Use Case: Implementing a deny-all traffic policy ensures that no pod can communicate with others unless explicitly allowed, offering maximum isolation and enhanced security in your Kubernetes cluster.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
</code></pre><p><strong>Testing the</strong></p>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-network-policy-deny-all-traffic-for-better-cluster-isolation/</link><guid isPermaLink="false">67cbe572c4958c58bb8ab372</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 08 Mar 2025 06:46:03 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Deny-All-Traffic-for-Better-Cluster-Isolation.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Kubernetes-Network-Policy-Deny-All-Traffic-for-Better-Cluster-Isolation.png" alt="Kubernetes Network Policy: Deny All Traffic for Better Cluster Isolation"><p>Real-World Use Case: Implementing a deny-all traffic policy ensures that no pod can communicate with others unless explicitly allowed, offering maximum isolation and enhanced security in your Kubernetes cluster.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
</code></pre><p><strong>Testing the Policy:</strong></p><pre><code>Create two pods:

kubectl run pod-a --image=busybox --restart=Never -- sleep 3600
kubectl run pod-b --image=busybox --restart=Never -- sleep 3600
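
Note the IP of pod-b (used as pod-b-ip below):
kubectl get pod pod-b -o wide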
</code></pre><pre><code>Access pod-a and try to communicate with pod-b:

kubectl exec -it pod-a -- sh
ping pod-b-ip

Expected Result: The ping should fail due to the deny-all policy blocking all traffic.</code></pre><p>Deleting Pods and Network Policy:</p><pre><code>kubectl delete pod pod-a
kubectl delete pod pod-b
kubectl delete networkpolicy deny-all -n default
</code></pre>]]></content:encoded></item><item><title><![CDATA[How to Install MicroK8s on Ubuntu]]></title><description><![CDATA[<p>MicroK8s is a lightweight Kubernetes distribution designed for developers and small-scale deployments. This guide will walk you through the steps to install MicroK8s on Ubuntu.</p>
<p><strong>Step 1: Update System Packages</strong></p>
<p>Before installing MicroK8s, update your system to ensure all packages are up to date:</p>
<pre><code>sudo apt update &amp;&amp; sudo</code></pre>]]></description><link>https://learn.ashiqpradeep.com/how-to-install-microk8s-on-ubuntu/</link><guid isPermaLink="false">67c97c31c4958c58bb8ab343</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Thu, 06 Mar 2025 10:50:35 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/How-to-Install-MicroK8s-on-Ubuntu.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/How-to-Install-MicroK8s-on-Ubuntu.png" alt="How to Install MicroK8s on Ubuntu"><p>MicroK8s is a lightweight Kubernetes distribution designed for developers and small-scale deployments. This guide will walk you through the steps to install MicroK8s on Ubuntu.</p>
<p><strong>Step 1: Update System Packages</strong></p>
<p>Before installing MicroK8s, update your system to ensure all packages are up to date:</p>
<pre><code>sudo apt update &amp;&amp; sudo apt upgrade -y</code></pre><p><strong>Step 2: Install MicroK8s</strong></p>
<p>Install MicroK8s using Snap:</p>
<pre><code>sudo snap install microk8s --classic</code></pre><p><strong>Step 3: Add User to MicroK8s Group</strong></p>
<p>To avoid using sudo for MicroK8s commands, add your user to the microk8s group:</p>
<pre><code>sudo usermod -aG microk8s $USER
newgrp microk8s</code></pre><p><strong>Step 4: Verify Installation</strong></p>
<p>Check the status of MicroK8s:</p>
<pre><code>microk8s status --wait-ready</code></pre><p><strong>Step 5: Enable Essential Add-ons</strong></p>
<p>MicroK8s provides various add-ons that enhance its functionality. Enable some essential ones:</p>
<pre><code>microk8s enable dns storage dashboard</code></pre><p><strong>Step 6: Access Kubernetes with MicroK8s</strong></p>
<p>By default, MicroK8s uses its own version of kubectl. You can run:</p>
<pre><code>microk8s kubectl get nodes</code></pre><p>If you prefer using kubectl directly, create an alias:</p>
<pre><code>alias kubectl=&apos;microk8s kubectl&apos;</code></pre><p>To make this alias persistent, add it to your shell configuration file:</p>
<pre><code>echo &quot;alias kubectl=&apos;microk8s kubectl&apos;&quot; &gt;&gt; ~/.bashrc
source ~/.bashrc</code></pre><p><strong>Step 7: Check Cluster Information</strong></p>
<p>To verify cluster details, run:</p>
<pre><code>microk8s kubectl cluster-info</code></pre><p><strong>Optional: Enable MicroK8s on System Startup</strong></p>
<p>If you want MicroK8s to start automatically after a reboot, enable it:</p>
<pre><code>sudo snap enable microk8s</code></pre><p><strong>Uninstalling MicroK8s</strong></p>
<p>If you need to remove MicroK8s from your system, use:</p>
<pre><code>sudo snap remove microk8s</code></pre>]]></content:encoded></item><item><title><![CDATA[Enhance Your Kubernetes Security with AppArmor: Deny Write Access to Sensitive Files]]></title><description><![CDATA[<p><strong>What is AppArmor?</strong><br>
AppArmor is a security module that restricts what applications can do on a system by enforcing security policies. In Kubernetes, it helps control container access to specific system files and directories, adding an extra layer of security.</p>
<p><strong>Why Use AppArmor?</strong><br>
Even though containers are isolated, they might</p>]]></description><link>https://learn.ashiqpradeep.com/enhance-your-kubernetes-security-with-apparmor-deny-write-access-to-sensitive-files/</link><guid isPermaLink="false">67c9325fc4958c58bb8ab326</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Thu, 06 Mar 2025 05:36:20 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2025/03/Enhance-Your-Kubernetes-Security-with-AppArmor-Deny-Write-Access-to-Sensitive-Files.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2025/03/Enhance-Your-Kubernetes-Security-with-AppArmor-Deny-Write-Access-to-Sensitive-Files.png" alt="Enhance Your Kubernetes Security with AppArmor: Deny Write Access to Sensitive Files"><p><strong>What is AppArmor?</strong><br>
AppArmor is a security module that restricts what applications can do on a system by enforcing security policies. In Kubernetes, it helps control container access to specific system files and directories, adding an extra layer of security.</p>
<p><strong>Why Use AppArmor?</strong><br>
Even though containers are isolated, they might still access sensitive system files or directories. AppArmor lets you restrict access to prevent potential security risks by enforcing stricter boundaries for your containers.</p>
<p><strong>Prevent Write Access to Sensitive Areas</strong><br>
To prevent containers from modifying sensitive files, such as those in the /etc/ directory, you can define an AppArmor profile that blocks write access.</p>
<ol>
<li>Create the AppArmor Profile<br>
Create a profile that denies write access to /etc/**:</li>
</ol>
<pre><code>apparmor_parser -q &lt;&lt;EOF
#include &lt;tunables/global&gt;

profile block-etc-write flags=(attach_disconnected) { 
  #include &lt;abstractions/base&gt;

  file,

  # Deny writes to anything under /etc/.
  deny /etc/** w, 
}
EOF
</code></pre><p>In this profile:<br>
deny /etc/** w: This rule ensures that no files within the /etc/ directory can be written to by the container.</p>
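<p>Keep in mind that the profile lives on the node, not in the cluster, so it must be loaded on every node where the pod could be scheduled. To confirm it loaded (assuming the AppArmor user-space tools are available), you can run:</p>
<pre><code>sudo aa-status | grep block-etc-write</code></pre>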
<ol start="2">
<li>Apply the Profile in Kubernetes<br>
Once the profile is created, apply it to your Kubernetes deployment using annotations. Here&apos;s the configuration:</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/block-etc-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ &quot;sh&quot;, &quot;-c&quot;, &quot;echo &apos;Hello AppArmor!&apos; &amp;&amp; sleep 1h&quot; ]
</code></pre><p>The annotation container.apparmor.security.beta.kubernetes.io/hello: localhost/block-etc-write applies the block-etc-write AppArmor profile to the container.</p>
<p>The pod runs a simple busybox container with a shell command that prints &quot;Hello AppArmor!&quot; and then sleeps for an hour.</p>
<p>Apply the Pod Configuration<br>
Deploy the updated configuration:</p>
<pre><code>kubectl apply -f hello-armor.yaml</code></pre><p><strong>Why It Matters</strong></p>
<p>Blocking write access to critical directories, like /etc/, ensures:</p>
<ul>
<li>Protection from accidental or malicious changes to system files.</li>
<li>Increased security by reducing the risk of unauthorized modifications.</li>
</ul>
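<p>A quick way to see the profile working: once the pod is running, attempt a write under /etc/ from inside the container. The command should fail with a permission error, while writes to non-restricted paths such as /tmp still succeed (since the profile otherwise allows file access):</p>
<pre><code>kubectl exec hello-apparmor -- sh -c &quot;echo test &gt; /etc/test.txt&quot;</code></pre>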
<p><strong>Conclusion</strong><br>
With AppArmor, you can easily restrict container access to sensitive areas, like /etc/, adding another layer of protection to your Kubernetes environment.</p>
]]></content:encoded></item><item><title><![CDATA[Simple Kubernetes CronJob for System Monitoring]]></title><description><![CDATA[<p>This example demonstrates a straightforward Kubernetes CronJob that periodically runs <code>top</code> to display basic system information.</p><pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: basic-system-monitor
spec:
  schedule: &quot;*/10 * * * *&quot;  # Runs every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: system-monitor
              image: busybox  # Lightweight image with basic utilities
              args:
                -</code></pre>]]></description><link>https://learn.ashiqpradeep.com/kubernetes-cronjob-for-periodic-system-health-checks/</link><guid isPermaLink="false">672f77bb9cd81503b0e4db2f</guid><dc:creator><![CDATA[Pradeep Kudukkil]]></dc:creator><pubDate>Sat, 09 Nov 2024 15:00:24 GMT</pubDate><media:content url="https://learn.ashiqpradeep.com/content/images/2024/11/Simple-Kubernetes-CronJob-for-System-Monitoring.png" medium="image"/><content:encoded><![CDATA[<img src="https://learn.ashiqpradeep.com/content/images/2024/11/Simple-Kubernetes-CronJob-for-System-Monitoring.png" alt="Simple Kubernetes CronJob for System Monitoring"><p>This example demonstrates a straightforward Kubernetes CronJob that periodically runs <code>top</code> to display basic system information.</p><pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: basic-system-monitor
spec:
  schedule: &quot;*/10 * * * *&quot;  # Runs every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: system-monitor
              image: busybox  # Lightweight image with basic utilities
              args:
                - /bin/sh
                - -c
                - &quot;top -bn1&quot;
          restartPolicy: OnFailure
</code></pre><p><strong>schedule:</strong> Runs every 10 minutes (*/10 * * * *).<br>
<strong>image:</strong> Uses busybox, which includes essential utilities like top.<br>
<strong>args:</strong> Runs the top -bn1 command to output basic system information once.</p>
<h3 id="steps-to-deploy-and-verify">Steps to Deploy and Verify</h3><p>Deploy the CronJob by applying the YAML configuration:</p>
<pre><code>kubectl apply -f system-health-check.yaml</code></pre><p>Verify execution:</p>
<pre><code>kubectl get cronjob
kubectl get jobs --watch</code></pre><p>Check the logs of a completed job pod:</p>
<pre><code>kubectl logs pod-name</code></pre>
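<p>If you do not want to wait for the next scheduled run, you can trigger one immediately by creating a Job from the CronJob (manual-run here is an arbitrary job name):</p>
<pre><code>kubectl create job --from=cronjob/basic-system-monitor manual-run</code></pre>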
<p>Remove the CronJob after testing (using the name defined in the manifest):</p>
<pre><code>kubectl delete cronjob basic-system-monitor</code></pre>
]]></content:encoded></item></channel></rss>