CPU & Memory Monitoring 101: How to Check, Analyze, and Optimize System Performance on Linux, Windows, and macOS

System performance matters—whether you're running a heavy-duty backend server on Linux, multitasking on Windows, or pushing Xcode to its limits on macOS. You don’t want your laptop sounding like a jet engine or your EC2 instance crashing from an out-of-memory error.

This guide walks you through how to check and analyze CPU and memory usage, interpret the data, and take practical actions across Linux, Windows, and macOS. Let’s dive in.


Linux: Your Terminal Is Your Best Friend#


Check CPU and Memory Usage#

Linux gives you surgical control via CLI tools. Start with:

  • top or htop: Real-time usage metrics

    top
    sudo apt install htop
    htop
  • ps aux --sort=-%mem: Sorts by memory usage

    ps aux --sort=-%mem | head -n 10
  • free -h: View memory in a human-readable format

    free -h
  • vmstat: Shows memory, swap, and CPU context switching

    vmstat 1 5

Learn more: Linux Memory Explained

Optimization Tips#

  • Enable swap (if disabled) – Many VMs (like EC2) don’t enable swap by default:

    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
  • Tune Java apps (JVM-based) — Limit memory usage:

    -Xmx512M -Xms512M
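
    For example, a JVM service can be launched with a fixed heap so it can't balloon past its budget (the jar name here is illustrative):

    java -Xms512M -Xmx512M -jar myapp.jar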

Windows: Task Manager and Beyond#


Check Resource Usage#

  • Task Manager (Ctrl + Shift + Esc):

    • View CPU usage per core
    • Check memory consumption
    • Review app/resource breakdowns
  • Resource Monitor:

    • From Task Manager > Performance > Open Resource Monitor
    • Monitor by process, network, disk, and more
  • PowerShell:

    Get-Process | Sort-Object CPU -Descending | Select-Object -First 10
    Get-Process | Sort-Object WS -Descending | Select-Object -First 10

Learn more: Windows Performance Tuning

Optimization Tips#

  • Disable startup apps — Uncheck unnecessary ones in the Startup tab
  • Enable paging file (virtual memory)
  • Remove bloatware — Pre-installed apps often hog memory

macOS: Spotlight with Muscle#


Check Resource Usage#

  • Activity Monitor:

    • Open via Spotlight (Cmd + Space > “Activity Monitor”)
    • Tabs: CPU, Memory, Energy, Disk, Network
  • Terminal Tools:

    top
    vm_stat
    • Get free memory in MB:
      pagesize=$(pagesize)
      vm_stat | awk -v page_size=$pagesize '/Pages free/ {print $3 * page_size / 1024 / 1024 " MB"}'
  • ps + sort:

    ps aux | sort -nrk 3 | head -n 10 # Top CPU
    ps aux | sort -nrk 4 | head -n 10 # Top Memory

Learn more: Apple Developer Performance Tips

Optimization Tips#

  • Close idle Chrome tabs — Each one is a separate process
  • Purge caches (dev use only):
    sudo purge
  • Reindex Spotlight (if mds is hogging CPU):
    sudo mdutil -E /

Key Metrics to Watch (Across OSes)#

| Metric | What It Tells You |
| --- | --- |
| %CPU | Processor usage per task/core |
| RSS (Memory) | Actual RAM used by a process |
| Swap Used | Memory overflow – indicates stress |
| Load Average | Average system load (Linux) |
| Memory Pressure | RAM strain (macOS) |

TL;DR - Fixes in a Flash#

| Symptom | Quick Fix |
| --- | --- |
| High memory, no swap | Enable swap (Linux) / Check paging (Win) |
| JVM app using too much RAM | Limit heap: -Xmx512M |
| Chrome eating RAM | Close tabs, use Safari (macOS) |
| Random CPU spikes (Mac) | Reindex Spotlight |
| Background process bloat | Use ps, top, or Task Manager |

Final Thoughts#

System performance isn’t just about uptime — it’s about user experience, developer productivity, and infrastructure cost. The key is to observe patterns, know what “normal” looks like, and take action before things go south.

Whether you're debugging a dev laptop or running a multi-node Kubernetes cluster, these tools and tips will help you stay fast and lean.

Nife.io makes multi-cloud infrastructure and application orchestration simple. It provides enterprises with a unified platform to automate, scale, and manage workloads effortlessly.

Discover how Nife streamlines Application Lifecycle Management.

Cloud Cost Optimization: Strategies for Sustainable Cloud Spending

Cloud computing has revolutionized the way we build and scale applications. But with great flexibility comes the challenge of cost control. Without governance, costs can spiral due to idle resources, over-provisioned instances, unnecessary data transfers, or underutilized services.

This guide outlines key principles, actionable steps, and proven strategies for optimizing cloud costs—whether you're on AWS, Azure, or GCP.


Why Cloud Cost Optimization Matters#

  • Avoid unexpected bills — Many teams only detect cost spikes after billing alarms go off.
  • Improve ROI — Optimize usage to get more value from your investment.
  • Enable FinOps — Align finance, engineering, and ops through shared accountability.
  • Sustainable operations — Efficiency often translates to lower energy usage and better sustainability.

Learn more from FinOps Foundation


Step-by-Step Approach to Cloud Cost Optimization#


1. Gain Visibility Into Your Spending#

Before you optimize, measure and monitor:

  • AWS: Cost Explorer, Budgets, and Cost & Usage Reports
  • Azure: Cost Management + Billing
  • GCP: Billing Reports and Cost Tables

Pro Tip: Set alerts with CloudWatch, Azure Monitor, or GCP Monitoring for anomaly detection.

Start with AWS Cost Explorer to visualize your cloud usage trends.
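
Prefer the terminal? Here's a minimal AWS CLI sketch (assumes Cost Explorer is enabled and the CLI is configured) that breaks down month-to-date spend by service:

aws ce get-cost-and-usage \
  --time-period Start=$(date +%Y-%m-01),End=$(date +%F) \
  --granularity MONTHLY \
  --metrics "UnblendedCost" \
  --group-by '[{"Type":"DIMENSION","Key":"SERVICE"}]'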


2. Right-Size Your Resources#

Over-provisioning is expensive:

  • Use Auto Scaling for EC2/VMs
  • Monitor CPU, memory, disk usage
  • Use recommendations:
    • aws compute-optimizer
    • Azure Advisor
    • GCP Recommender

Automation Tip: Enforce policies with Terraform or remediation scripts.

Get insights from AWS Compute Optimizer to reduce over-provisioned instances.
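
As a quick sketch, you can also pull Compute Optimizer findings straight from the CLI (assumes your account is opted in to the service):

aws compute-optimizer get-ec2-instance-recommendations \
  --query 'instanceRecommendations[].[instanceArn,finding]' \
  --output table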


3. Use Reserved Instances & Savings Plans#

Instead of on-demand:

  • AWS: Savings Plans, Reserved Instances
  • Azure: Reserved VM Instances
  • GCP: Committed Use Discounts

Save 30–72% by committing for 1–3 years.


4. Eliminate Idle and Zombie Resources#

Common culprits:

  • Unattached EBS volumes (AWS)
  • Idle IPs (AWS, GCP)
  • Stopped VMs with persistent disks (Azure, GCP)
  • Forgotten load balancers
  • Old snapshots/backups

Tools: aws-nuke, gcloud CLI cleanup scripts, Azure CLI scripts
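
Two read-only AWS CLI sketches to surface the usual suspects (deletion is left to you):

# EBS volumes not attached to any instance (status "available")
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].[VolumeId,Size,CreateTime]' --output table

# Elastic IPs that are allocated but not associated (idle but still billed)
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' --output table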


5. Optimize Storage and Data Transfer#

Storage and egress can sneak up on you:

  • Use CDNs: CloudFront, Azure CDN, GCP CDN
  • Tiered storage: S3 Glacier, Azure Archive, Nearline Storage
  • Set lifecycle policies for auto-delete/archive

For step-by-step examples, check AWS's official S3 Lifecycle Configuration docs.
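
As a sketch, a single lifecycle rule can transition objects to Glacier after 90 days and expire them after a year (the bucket name and day counts are illustrative):

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }]
  }'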


6. Consolidate & Modernize Architectures#

  • Use serverless: Lambda, Azure Functions, Cloud Functions
  • Containerize: ECS, EKS, AKS, GKE
  • Migrate to managed DBs: RDS, CosmosDB, Cloud SQL

Bonus Tools:

  • KubeCost (Kubernetes costs)
  • Infracost (Terraform cost insights)

Use KubeCost for Kubernetes cost monitoring and allocate expenses by workload.


7. Implement Cost Governance Policies#

  • Enforce tags by team, env, project
  • Set team-level budgets
  • Use chargeback/showback models
  • Auto-schedule non-prod environments:
    • AWS Instance Scheduler
    • Azure Logic Apps
    • GCP Cloud Scheduler

Deep Dive: CloudWatch Cost Breakdown#

aws ce get-cost-and-usage \
  --time-period Start=2025-04-01,End=$(date +%F) \
  --granularity MONTHLY \
  --metrics "UnblendedCost" \
  --filter '{
    "Dimensions": {
      "Key": "SERVICE",
      "Values": ["AmazonCloudWatch"]
    }
  }' \
  --group-by '[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}]' \
  --region ap-south-1

🔧 Optimization Tips:

  • Delete unused dashboards
  • Reduce custom metrics
  • Use embedded metrics format
  • Aggregate metrics (1-min or 5-min intervals)

Conclusion#

Cloud cost optimization is a continuous process. With visibility, automation, and governance, you can:

  • Reduce cloud spend
  • Boost operational efficiency
  • Build a cost-conscious engineering culture

Start small, iterate fast, and let your infrastructure pay off—without paying more.

Enterprises needing advanced automation can rely on Nife.io’s PlatUS platform to simplify multi-cloud storage orchestration and seamlessly integrate with AWS-native tools.

Nife.io delivers advanced orchestration capabilities for enterprises managing multi-cloud environments, enhancing and extending the power of AWS-native tools.

Nife Labs Recognized Among STL Partners’ Top 50 Edge Computing Companies to Watch in 2025


Nife Labs is excited to be announced as one of STL Partners' Top 50 Edge Companies to Watch, highlighting companies that are making waves in edge computing and have exciting developments coming in 2025.

Take a look at what we achieved last year and learn a bit more about what’s next for us:

https://stlpartners.com/articles/edge-computing/50-edge-computing-companies-2025/#NifeLabs

Driving Innovation in Edge Computing#

At Nife Labs, we simplify the complexities of multi-cloud and edge computing environments, enabling enterprises to deploy, manage, and secure their applications effortlessly. Our platform offers:

  • Seamless orchestration across hybrid environments
  • Intelligent cost optimization strategies
  • Automated scaling capabilities

By streamlining these critical operations, we help businesses focus on innovation while ensuring high performance and cost efficiency.

Key Achievements in 2024#

2024 was a year of significant milestones for Nife Labs. We launched three flagship products tailored to address critical challenges in edge and multi-cloud ecosystems:

SyncDrive#

Secure, high-speed file synchronization between local systems and private clouds, giving enterprises full control over their data.

Platus#

A comprehensive cost visibility and optimization platform for cloud infrastructure, helping businesses manage deployment budgets efficiently.

Zeke#

A standalone orchestration solution that connects and optimizes multi-cloud environments for enhanced scalability and performance.

Additionally, we expanded our market presence into the United States and Middle East, supporting large-scale customers in retail, blockchain, e-commerce, and public sectors.

What’s Next: Our 2025 Roadmap#

Building on our momentum, Nife Labs is focusing on integrating cutting-edge AI technologies to further elevate our solutions in 2025. Key initiatives include:

  • AI-led Incident Response: Automating detection and resolution of incidents in cloud and edge environments.
  • Predictive Scaling: Anticipating resource needs with AI to optimize performance and costs.
  • Intelligent Edge Orchestration: Dynamically managing workloads across distributed edge locations for maximum efficiency.
  • AI-enhanced DevOps, Security & Cost Control: Streamlining operations and providing intelligent recommendations for secure, cost-effective deployments.

Leading the Future of Edge Computing#

Being recognized by STL Partners as a top edge computing company underscores our commitment to innovation and excellence. As enterprises continue adopting distributed computing models, Nife Labs remains dedicated to simplifying complexity and enabling seamless operations in hybrid and multi-cloud environments.

Learn more about Nife Labs at nife.io

CloudWatch Bills Out of Control? A Friendly Guide to Taming Your Cloud Costs

Cloud bills can feel like magic tricks—one minute, you're paying peanuts, and the next, poof!—your CloudWatch bill hits $258 for what seems like just logs and a few metrics. If this sounds familiar, don’t worry—you're not alone.

Let’s break down why this happens and walk through some practical, no-BS steps to optimize costs—whether you're on AWS, Azure, or GCP.


Why Is CloudWatch So Expensive?#


CloudWatch is incredibly useful for monitoring, but costs can spiral if you’re not careful. In one real-world case:

  • $258 in just three weeks
  • $46+ from just API requests (those sneaky APN*-CW:Requests charges)

And that’s before accounting for logs, custom metrics, and dashboards! If you're unsure how AWS calculates these costs, check the AWS CloudWatch Pricing page for a detailed breakdown.


Why You Should Care About Cloud Cost Optimization#

The cloud is flexible, but that flexibility can lead to:

  • Overprovisioned resources (paying for stuff you don’t need)
  • Ghost resources (old logs, unused dashboards, forgotten alarms)
  • Silent budget killers (high-frequency metrics, unnecessary storage)

The good news? You can fix this.


Step-by-Step: How to Audit & Slash Your Cloud Costs#


Step 1: Get Visibility (Where’s the Money Going?)#

First, figure out what’s costing you.

For AWS Users:#

  • Cost Explorer (GUI-friendly)
  • AWS CLI (for the terminal lovers):
    aws ce get-cost-and-usage \
    --time-period Start=2025-04-01,End=$(date +%F) \
    --granularity MONTHLY \
    --metrics "UnblendedCost" \
    --filter '{"Dimensions":{"Key":"SERVICE","Values":["AmazonCloudWatch"]}}' \
    --group-by '[{"Type":"DIMENSION","Key":"USAGE_TYPE"}]'
    This breaks down CloudWatch costs by usage type. For more CLI tricks, refer to the AWS Cost Explorer Docs.

For Azure/GCP:#

  • Azure Cost Analysis or Google Cloud Cost Insights
  • Check for unused resources, high storage costs, and unnecessary logging.

Step 2: Find the Biggest Cost Culprits#

In CloudWatch, the usual suspects are:
✅ Log ingestion & storage (keeping logs too long?)
✅ Custom metrics ($0.30 per metric/month adds up!)
✅ Dashboards (each widget costs money)
✅ High-frequency metrics (do you really need data every second?)
✅ API requests (those APN*-CW:Requests charges)
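
To see which log groups are actually holding the bytes, here's a read-only sketch:

aws logs describe-log-groups \
  --query 'sort_by(logGroups, &storedBytes)[-10:].[logGroupName, storedBytes]' \
  --output table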


Step 3: Cut the Waste#

Now, start trimming the fat.

1. Delete Old Logs & Reduce Retention#

aws logs put-retention-policy \
--log-group-name "/ecs/app-prod" \
--retention-in-days 7 # Keep logs for just a week if possible

For a deeper dive into log management best practices, check out our guide on Optimizing AWS Log Storage.

2. Kill Unused Alarms & Dashboards#

  • Unused alarms? Delete them.
  • Dashboards no one checks? Gone.
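
A hedged sketch for finding candidates: alarms stuck in INSUFFICIENT_DATA are often watching resources that no longer exist (the alarm name below is illustrative):

aws cloudwatch describe-alarms --state-value INSUFFICIENT_DATA \
  --query 'MetricAlarms[].AlarmName' --output text
aws cloudwatch delete-alarms --alarm-names "my-stale-alarm"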

3. Optimize Metrics#

  • Aggregate metrics instead of sending every tiny data point.
  • Avoid 1-second granularity unless absolutely necessary.
  • Use Metric Streams to send data to cheaper storage (S3, Prometheus).

For a more advanced approach to log management, AWS offers a great solution for Cost-Optimized Log Aggregation and Archival in Amazon S3 using S3TAR.

Step 4: Set Budgets & Alerts (So You Don’t Get Surprised Again)#

Use AWS Budgets to:

  • Set monthly spending limits
  • Get alerts when CloudWatch (or any service) goes over budget
aws budgets create-budget --account-id 123456789012 \
--budget file://budget-config.json

Step 5: Automate Cleanup (Because Manual Work Sucks)#

Tools like Cloud Custodian can:

  • Delete old logs automatically
  • Notify you about high-cost resources
  • Schedule resources to shut down after hours
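
A minimal Cloud Custodian policy sketch (assumes c7n is installed; the 60-day threshold is an assumption, tune it to your retention needs):

policies:
  - name: expire-stale-log-groups
    resource: aws.log-group
    filters:
      - type: last-write
        days: 60
    actions:
      - delete

Run it with custodian run -s out policy.yml.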

Bonus: Cost-Saving Tips for Any Cloud#

AWS#

🔹 Use Savings Plans for EC2 (up to 72% off)
🔹 Enable S3 Intelligent-Tiering (auto-moves cold data to cheaper storage)
🔹 Check Trusted Advisor for free cost-saving tips

Azure#

🔹 Use Azure Advisor for personalized recommendations
🔹 Reserved Instances & Spot VMs = big savings
🔹 Cost Analysis in Azure Portal = easy tracking

Google Cloud#

🔹 Committed Use Discounts = long-term savings
🔹 Object Lifecycle Management in Cloud Storage = auto-delete old files
🔹 Recommender API = AI-powered cost tips


Final Thoughts: Spend Smart, Not More#


Cloud cost optimization isn't about cutting corners—it's about working smarter. By regularly auditing your CloudWatch usage, setting retention policies, and eliminating waste, you can maintain robust monitoring while keeping costs predictable. Remember: small changes like adjusting log retention from 30 days to 7 days or consolidating metrics can lead to significant savings over time—without sacrificing visibility.

For cluster management solutions that simplify this process, explore Nife's Managed Clusters platform - your all-in-one solution for optimized cloud operations.

Looking for enterprise-grade cloud management solutions? Explore how Nife simplifies cloud operations with its cutting-edge platform.

Stay smart, stay optimized, and keep those cloud bills in check! 🚀

Enhancing LLMs with Retrieval-Augmented Generation (RAG): A Technical Deep Dive

Large Language Models (LLMs) have transformed natural language processing, enabling impressive feats like summarization, translation, and conversational agents. However, they’re not without limitations. One major drawback is their static nature—LLMs can't access knowledge beyond their training data, which makes handling niche or rapidly evolving topics a challenge.

This is where Retrieval-Augmented Generation (RAG) comes in. RAG is a powerful architecture that enhances LLMs by retrieving relevant, real-time information and combining it with generative capabilities. In this guide, we’ll explore how RAG works, walk through implementation steps, and share code snippets to help you build a RAG-enabled system.


What is RAG?#


RAG integrates two main components:

  1. Retriever: Fetches relevant context from a knowledge base based on the user's query.
  2. Generator (LLM): Uses the retrieved context along with the query to generate accurate, grounded responses.

Instead of relying solely on what the model "knows," RAG allows it to augment answers with external knowledge.

Learn more from the original RAG paper by Facebook AI.


Why Use RAG?#

Here are some compelling reasons to adopt RAG:

  • Real-time Knowledge: Update the knowledge base anytime without retraining the model.
  • Improved Accuracy: Reduces hallucinations by anchoring responses in factual data.
  • Cost Efficiency: Avoids the need for expensive fine-tuning on domain-specific data.

Core Components of a RAG System#


1. Retriever#

The retriever uses text embeddings to match user queries with relevant documents.

Example with LlamaIndex:#

# Assumes an index was previously built and persisted to ./vector_index
from llama_index.core import StorageContext, load_index_from_storage

index = load_index_from_storage(StorageContext.from_defaults(persist_dir="./vector_index"))
retriever = index.as_retriever(similarity_top_k=3)
query = "What is RAG in AI?"
retrieved_docs = retriever.retrieve(query)

2. Knowledge Base#

Your retriever needs a knowledge base with embedded documents.

Key Steps:#

  • Document Loading: Ingest your data.
  • Chunking: Break text into meaningful chunks.
  • Embedding: Generate vector representations.
  • Indexing: Store them in a vector database like FAISS or Pinecone.

Example with OpenAI Embeddings:#

import numpy as np
import faiss
from openai.embeddings_utils import get_embedding

documents = ["Doc 1 text", "Doc 2 text"]
embeddings = [get_embedding(doc) for doc in documents]

# FAISS expects a float32 NumPy matrix, not a plain Python list
embedding_matrix = np.array(embeddings, dtype="float32")
index = faiss.IndexFlatL2(embedding_matrix.shape[1])
index.add(embedding_matrix)

3. LLM Integration#

After retrieval, the documents are passed to the LLM along with the query.

Example:#

from transformers import pipeline

# gpt-3.5-turbo is an OpenAI API model, not a Hugging Face checkpoint;
# use an open model such as gpt2 for a local pipeline
generator = pipeline("text-generation", model="gpt2")
context = "\n".join([doc.text for doc in retrieved_docs])
augmented_query = f"{context}\nQuery: {query}"
response = generator(augmented_query, max_length=200)
print(response[0]["generated_text"])

You can experiment with Hugging Face’s Transformers library for more customization.


Best Practices & Considerations#

  • Chunk Size: Balance between too granular (noisy) and too broad (irrelevant).

  • Retrieval Enhancements:

    • Combine embeddings with keyword search.
    • Add metadata filters (e.g., date, topic).
    • Use rerankers (e.g., Cohere Rerank or OpenAI function calling) to boost relevance.

RAG vs. Fine-Tuning#

| Feature | RAG | Fine-Tuning |
| --- | --- | --- |
| Flexibility | ✅ High | ❌ Low |
| Real-Time Updates | ✅ Yes | ❌ No |
| Cost | ✅ Lower | ❌ Higher |
| Task Adaptation | ✅ Dynamic | ✅ Specific |

RAG is ideal when you need accurate, timely responses without the burden of retraining.

Final Thoughts#

RAG brings the best of both worlds: LLM fluency and factual accuracy from external data. Whether you're building a smart chatbot, document assistant, or search engine, RAG provides the scaffolding for powerful, informed AI systems.

Start experimenting with RAG and give your LLMs a real-world upgrade!

Discover Seamless Deployment with Oikos on Nife.io

Looking for a streamlined, hassle-free deployment solution? Check out Oikos on Nife.io to explore how it simplifies application deployment with high efficiency and scalability. Whether you're managing microservices, APIs, or full-stack applications, Oikos provides a robust platform to deploy with ease.

Understanding Windows IIS (Internet Information Services)

Windows Internet Information Services (IIS) is Microsoft’s robust, enterprise-grade web server designed to host web applications and services. It’s tightly integrated with the Windows Server platform and widely used for everything from static sites to dynamic web apps built with ASP.NET, PHP, or Python.

In this guide, we’ll walk through what IIS is, its key components, common use cases, how to configure it, and ways to troubleshoot typical issues.


What is IIS?#


IIS is a feature-rich web server that supports multiple protocols including HTTP, HTTPS, FTP, FTPS, SMTP, and WebSocket. It’s often chosen in Windows-centric environments for its performance, flexibility, and ease of use. For an official overview, check out Microsoft’s IIS documentation.

It can host:

  • Static websites
  • Dynamic applications using ASP.NET, PHP, or Python
  • Web services and APIs

IIS provides powerful security controls, application isolation via application pools, and extensive monitoring features.


Core Components of IIS#

IIS Manager#

The graphical user interface for managing IIS settings, websites, and application pools.

Web Server#

Handles incoming HTTP(S) traffic and serves static or dynamic content.

Application Pools#

Isolate applications to improve stability and security. Each pool runs in its own worker process.

FastCGI#

Used to run non-native apps like PHP or Python. For Python apps, IIS commonly uses wfastcgi to bridge communication. Learn more about hosting Python apps on IIS.

SSL/TLS Support#

IIS makes it easy to configure HTTPS, manage SSL certificates, and enforce secure connections.


Key Features#


Security & Authentication#

Supports multiple authentication schemes like Basic, Integrated Windows Auth, and custom modules. Can be tied into Active Directory.

Logging & Diagnostics#

Robust logging and diagnostics tools to help troubleshoot performance and runtime issues. For troubleshooting guides, visit Microsoft’s IIS troubleshooting resources.

Performance & Scalability#

Features like output caching, dynamic compression, and bandwidth throttling help scale under load.


How to Configure IIS on Windows Server#

Install IIS#

  1. Open Server Manager → Add Roles and Features
  2. Choose Web Server (IIS) and complete the wizard
  3. Launch IIS Manager using inetmgr in the Run dialog
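
If you prefer scripting the install, the same role can be added from an elevated PowerShell prompt on Windows Server:

Install-WindowsFeature -Name Web-Server -IncludeManagementTools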

Add a Website#

  1. In IIS Manager, right-click Sites → Add Website
  2. Set Site name, physical path, and port
  3. Optionally bind a domain or IP

Configure Application Pool#

Each new website creates a pool, but you can customize it:

  • Set .NET version
  • Change identity settings
  • Enable recycling

Enable HTTPS#

  1. Right-click site → Edit Bindings
  2. Add HTTPS binding with an SSL certificate

Set File Permissions#

Ensure that IIS has read (and optionally write) permissions on your site directory.


Troubleshooting Common Issues#


Website Not Starting#

  • Check event logs for errors
  • Ensure app pool is running
  • Confirm no port conflicts

Permission Denied Errors#

  • Confirm folder/file permissions for IIS user

Python FastCGI Issues#

  • Validate wfastcgi.py installation
  • Confirm FastCGI settings in IIS Manager

Slow Performance#

  • Enable caching and compression
  • Use performance monitor tools

For more community-driven insights, explore the Microsoft IIS Tech Community.

Conclusion#

IIS remains a top-tier web server solution for Windows environments. Whether you're running enterprise ASP.NET applications or lightweight Python services, IIS delivers in performance, security, and manageability.

With the right setup and understanding of its components, you can confidently deploy and manage scalable, secure web infrastructure on Windows Server.

Looking to streamline your cloud infrastructure, application delivery, or DevOps workflows? Visit nife.io/solutions to discover powerful tools and services tailored for modern application lifecycles, including specialized support for Unreal Engine app deployment in cloud environments.

Setting Up NGINX Ingress Controller in EKS with HTTP and TCP Routing

In AWS EKS, exposing each application with its own LoadBalancer is costly and inefficient. A smarter approach is using an NGINX Ingress Controller, which allows routing multiple applications through a single LoadBalancer — using host-based HTTP routing and TCP port-based routing.

This guide explains how to:

  • Deploy NGINX Ingress Controller via Helm
  • Set up host-based routing for HTTP apps
  • Configure TCP routing for non-HTTP services
  • Map domains via Cloudflare
  • Reference official docs

Why Use Ingress in EKS?#

Benefits:

  • One LoadBalancer for many services
  • Lower costs
  • Host & path-based routing
  • Supports TCP & HTTP apps
  • Works with Cloudflare
  • Centralized config

Prerequisites#

  • EKS Cluster
  • Helm, kubectl, eksctl
  • Cloudflare account
  • Domain for your app
  • Applications/services already deployed in Kubernetes

Step 1: Install NGINX Ingress Controller via Helm#

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace ingress-nginx \
--set controller.service.type=LoadBalancer

For advanced configurations, refer to the official NGINX Ingress Helm chart documentation.

This exposes the controller via a single ELB.


Step 2: Get ELB DNS for Ingress Controller#

kubectl get svc -n ingress-nginx

Note the external ELB DNS, e.g.:

a1b2c3d4e5f6g7.elb.amazonaws.com

Step 3: Set Up DNS in Cloudflare#

  1. Go to DNS settings
  2. Add a record:
    • Type: CNAME or A
    • Name: your-subdomain.yourdomain.com
    • Target: ELB DNS
    • Proxy status: DNS Only or Proxied

For Cloudflare-specific optimizations, check Cloudflare’s Kubernetes integration guide.


Step 4: HTTP Host-Based Ingress Example#

Here's a sample Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pubggpiro9ypjn-ing
  namespace: pubggpiro9ypjn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: metube-app-622604.clb2.nifetency.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-f35714cd-4cb5-4f7e-b9db-4daa699640b3
                port:
                  number: 8081

Apply the file:

kubectl apply -f ingress.yaml

Learn more about Ingress annotations in the Kubernetes Ingress documentation.


Step 5: Add TCP Support (Optional)#

Step 5.1: Create TCP ConfigMap#

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "my-namespace/postgres-service:5432"

Step 5.2: Upgrade Controller to Load TCP ConfigMap#

helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.extraArgs.tcp-services-configmap=ingress-nginx/tcp-services

Or during first install:

helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.extraArgs.tcp-services-configmap=ingress-nginx/tcp-services \
--set controller.service.type=LoadBalancer

For troubleshooting TCP routing, see NGINX’s TCP/UDP passthrough guide.


Step 6: Expose TCP Port in LoadBalancer#

Edit the controller service:

kubectl edit svc ingress-nginx-controller -n ingress-nginx

Add:

ports:
  - name: postgres
    port: 5432
    targetPort: 5432
    protocol: TCP

Routing Overview#

| Type | Uses | Ingress Object? | ConfigMap? | DNS |
| --- | --- | --- | --- | --- |
| HTTP | Web apps, APIs | Yes | No | Subdomain |
| TCP | PostgreSQL, Redis | No | Yes | Same ELB + Port |

Tips#

  • Use ingressClassName: nginx to prevent conflicts.
  • Use cert-manager for HTTPS/TLS termination (install sketch below).
  • Isolate apps using namespaces.
  • Annotate Ingress for rewrites, caching, rate-limiting, etc.
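
For the cert-manager tip above, a typical Helm install looks like this (verify the chart version against your cluster before relying on it):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true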

Conclusion#

With this setup, you can serve both HTTP and TCP apps in your EKS cluster using a single LoadBalancer, simplifying your architecture and saving costs. HTTP traffic is managed using Ingress resources with host rules, while TCP apps like databases are handled using a custom ConfigMap.

This architecture is production-ready when combined with Cloudflare for DNS, TLS, and protection.

For cluster management solutions, explore Nife's Managed Clusters platform.

Discover solutions for Managing Multiple Organizations across your infrastructure

Host Multiple Services in EKS with One LoadBalancer Using NGINX Ingress and Cloudflare

When managing Kubernetes workloads on AWS EKS, using a LoadBalancer for each service can quickly become expensive and inefficient. A cleaner, scalable, and more cost-effective solution is to use an Ingress Controller like NGINX to expose multiple services via a single LoadBalancer. This blog will walk you through how I set up Ingress in my EKS cluster using Helm, configured host-based routing, and mapped domains through Cloudflare.


Prerequisites#

  • AWS EKS Cluster set up
  • kubectl, helm, and aws-cli configured
  • Services already running in EKS
  • Cloudflare account to manage DNS

Get started with EKS in the AWS EKS User Guide.


Step 1: Deploy NGINX Ingress Controller using Helm#

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.service.type=LoadBalancer

This will install the NGINX Ingress Controller and expose it through a LoadBalancer service. You can get the external ELB DNS using:

kubectl get svc -n ingress-nginx

Note the EXTERNAL-IP of the nginx-ingress-controller—this is your public ELB DNS.

Learn more about NGINX Ingress at the official Kubernetes documentation.


Step 2: Create Your Ingress YAML for Host-Based Routing#


Below is an example Ingress manifest to expose a service using a custom domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pubggpiro9ypjn-ing
  namespace: pubggpiro9ypjn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: metube-app-622604.clb2.nifetency.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-f35714cd-4cb5-4f7e-b9db-4daa699640b3
                port:
                  number: 8081

Apply the file using:

kubectl apply -f your-ingress.yaml

Step 3: Configure Domain in Cloudflare#


Go to your Cloudflare dashboard and create a CNAME record:

  • Name: metube-app-622604 (or any subdomain you want)
  • Target: your NGINX LoadBalancer DNS (e.g., a1b2c3d4e5f6g7.elb.amazonaws.com)
  • Proxy status: Proxied ✅

Wait for DNS propagation (~1–5 minutes), and then your service will be available via the custom domain you configured.

Understand DNS management in Cloudflare with the Cloudflare DNS docs.


Verify the Setup#

Try accessing the domain in your browser:

http://metube-app-622604.clb2.nifetency.com

You should see the application running from port 8081 of the backend service.
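
From a terminal, a quick header check confirms the routing end to end:

curl -I http://metube-app-622604.clb2.nifetency.com

A 200 (or whatever status your app normally returns) here means Cloudflare, the ELB, and the Ingress rule are all wired up correctly.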


Reference Document#

For more detailed steps and examples, check out this shared doc:
🔗 Ingress and DNS Setup Guide


Benefits of This Setup#

  • Cost-effective: One LoadBalancer for all services.
  • Scalable: Add new routes/domains by just updating the Ingress.
  • Secure: Easily integrate SSL with Cert-Manager or Cloudflare.
  • Customizable: Full control over routing, headers, and rewrites.

Conclusion#

Exposing multiple services in EKS using a single LoadBalancer with NGINX Ingress can streamline your infrastructure and reduce costs. Just remember:

  • Use Helm to install and manage the NGINX Ingress Controller
  • Configure host-based routing to serve multiple domains through one point
  • Use Cloudflare DNS to map custom domains to your LoadBalancer
  • Regularly test and validate access for each new service

With just a few commands and configurations, you can build a scalable and efficient ingress setup—ready for production.

Learn how to add and manage EKS clusters with Nife’s AWS EKS integration guide.

Learn how to add standalone Kubernetes clusters with Nife’s standalone cluster setup guide.


How to Make Your S3 Bucket Public: A Simple Guide for Beginners

Amazon S3 (Simple Storage Service) is one of the most popular cloud storage solutions. Whether you're hosting static websites, sharing media files, or distributing software packages, there are times when making your S3 bucket public is necessary. But how do you do it without compromising security? Let’s walk through it step-by-step.


What is S3 and Why Make It Public?#


S3 allows you to store and retrieve any amount of data, from anywhere, at any time. Public access is useful when you want your files to be openly downloadable—no credentials needed. Use cases include:

  • Hosting a static website
  • Sharing public documentation
  • Providing downloadable files like media, zip archives, or datasets

Important: Be cautious—public access means anyone on the internet can view or download those files.


How to Make Your S3 Bucket Public#

There are two primary ways to make files in your S3 bucket publicly accessible:

1. Bucket Policy (Full Bucket Access)#


This method grants public access to all objects within a bucket.

Example Policy:#

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
  • What it does: Allows anyone to perform s3:GetObject (i.e., download files).
  • How to apply it:
aws s3api put-bucket-policy --bucket mybucket --policy file://public-read-policy.json
  • When to use: Great for hosting full public websites or making all files downloadable.

    For a deeper dive into IAM policies, visit AWS IAM Policies.
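
Heads-up: on buckets created with default settings, S3 Block Public Access rejects public bucket policies until you explicitly relax it. A sketch (be deliberate, this is what allows the policy above to take effect):

aws s3api put-public-access-block --bucket mybucket \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false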


2. Object-Level ACL (Individual File Access)#


You can make just one file public without exposing the whole bucket.

Example:#

aws s3api put-object-acl --bucket mybucket --key myfile.zip --acl public-read
  • What it does: Grants public read access to just myfile.zip.

  • When to use: When you only want to share select files and keep others private.

    For more details on managing ACLs, see AWS ACL Documentation.


Why Public Access Might Be Helpful#

Making files public isn’t just convenient—it can power your apps and workflows:

  • Static Websites: Serve HTML/CSS/JS directly from S3.

  • Public Downloads: Let users grab resources without signing in.

  • Media Hosting: Share images, videos, or documents in a lightweight, scalable way.

    Looking for an easy way to manage your static websites? Check out Amazon S3 Static Website Hosting.


Best Practices and Considerations#

Before making your S3 bucket public, keep these tips in mind:

  • Security: Double-check that no sensitive data is exposed.

  • Use the right method: Policies for full-bucket access, ACLs for individual files.

  • Monitor usage: Enable access logs and CloudTrail to audit activity.

    Learn more about monitoring with AWS CloudTrail Logs.


Conclusion#

Making your S3 bucket (or objects) public can unlock powerful use cases—from hosting content to sharing files freely. Just remember:

  • Use bucket policies for broad access
  • Use ACLs for targeted, file-specific access
  • Monitor and audit access to stay secure

With just a few AWS CLI commands, your content can go live in minutes—safely and intentionally.

Looking to scale your infrastructure seamlessly? Supercharge your containerized workloads by adding AWS EKS clusters with Nife.io!

Tired of complex, time-consuming deployments? Nife.io makes it effortless with One-Click Deployment—so you can launch applications instantly, without the hassle.


How to Delete Specific Lines from a File Using Line Numbers

When you're working with text files—be it config files, logs, or source code—you may need to delete specific lines based on their line numbers. This might sound intimidating, but it’s actually quite easy once you know which tool to use.

In this post, we’ll walk through several methods to remove lines using line numbers, using command-line tools like sed, awk, and even Python. Whether you're a beginner or a seasoned developer, there’s a solution here for you.


The Basic Idea#


To delete a specific range of lines from a file:

  1. Identify the start line and end line.
  2. Use a tool or script to remove the lines between those numbers.
  3. Save the changes back to the original file.

Let’s break this down by method.


1. Using sed (Stream Editor)#


sed is a command-line utility that’s perfect for modifying files line-by-line.

Basic Syntax#

sed 'START_LINE,END_LINEd' filename > temp_file && mv temp_file filename
  • Replace START_LINE and END_LINE with actual numbers.
  • d tells sed to delete those lines.

Example#

To delete lines 10 through 20:

sed '10,20d' myfile.txt > temp_file && mv temp_file myfile.txt

With Variables#

START_LINE=10
END_LINE=20
sed "${START_LINE},${END_LINE}d" myfile.txt > temp_file && mv temp_file myfile.txt

📚 More on sed line deletion


2. Using awk#

awk is a pattern scanning tool. It’s ideal for skipping specific lines.

Syntax#

awk 'NR < START_LINE || NR > END_LINE' filename > temp_file && mv temp_file filename

Example#

awk 'NR < 10 || NR > 20' myfile.txt > temp_file && mv temp_file myfile.txt

This prints all lines except lines 10 through 20.

📚 Learn more about awk


3. Using head and tail#

Perfect when you only need to chop lines off the start or end.

Example#

Delete lines 10 to 20:

head -n 9 myfile.txt > temp_file
tail -n +21 myfile.txt >> temp_file
mv temp_file myfile.txt
  • head -n 9 gets lines before line 10.
  • tail -n +21 grabs everything from line 21 onward.

📚 tail command explained


4. Using perl#

perl is great for more advanced file manipulation.

Syntax#

perl -ne 'print unless $. >= START_LINE && $. <= END_LINE' filename > temp_file && mv temp_file filename

Example#

perl -ne 'print unless $. >= 10 && $. <= 20' myfile.txt > temp_file && mv temp_file myfile.txt
  • $. is the line number variable in perl.

📚 Perl I/O Line Numbering


5. Using Python#

For full control or if you’re already using Python in your workflow:

Example#

start_line = 10
end_line = 20

with open("myfile.txt", "r") as file:
    lines = file.readlines()

with open("myfile.txt", "w") as file:
    for i, line in enumerate(lines):
        if i < start_line - 1 or i > end_line - 1:
            file.write(line)

Python is especially useful if you need to add logic or conditions around what gets deleted.

📚 Working with files in Python


Conclusion#


There are plenty of ways to delete lines from a file based on line numbers:

  • Use sed for simple, fast command-line editing.
  • Choose awk for conditional line selection.
  • Go with head/tail for edge-case trimming.
  • Try perl if you’re comfortable with regex and quick one-liners.
  • Opt for Python when you need logic-heavy, readable scripts.

Explore Nife.io for modern cloud infrastructure solutions, or check out OIKOS to see how edge orchestration is done right.