
Monitoring Logs and Analytics

View real-time logs from your pods and use AI-powered analysis to understand issues.


Pod Logs Overview

Pod logs show output from your running applications in the cluster.

What are Pod Logs?

Pod logs are text output generated by applications:

  • Application startup messages
  • Errors and warnings
  • Debug information
  • Info messages
  • Business events

Why View Logs?

  • Troubleshooting: Find out what went wrong
  • Debugging: Understand application behavior
  • Monitoring: Watch real-time activity
  • Auditing: Record what happened
  • Analysis: Find patterns and issues

Accessing Pod Logs

Option 1: Via Dashboard

  1. Go to Clusters page
  2. Select cluster with agent
  3. Click Pod Logs tab
  4. Select application from dropdown
  5. Click Fetch Logs or Stream Logs

Option 2: Via Command Line

# View recent logs
kubectl logs <pod-name> -n <namespace>

# Stream live logs
kubectl logs -f <pod-name> -n <namespace>

# View last 100 lines
kubectl logs <pod-name> -n <namespace> --tail=100

# View logs from the last hour
kubectl logs <pod-name> -n <namespace> --since=1h

# View logs from a specific container in a multi-container pod
kubectl logs <pod-name> -c <container-name> -n <namespace>

# View logs from the previous (crashed) container instance
kubectl logs <pod-name> -n <namespace> --previous

Fetching Logs

One-Time Fetch

Get a snapshot of recent logs:

  1. Select Application

    • Choose app from dropdown
    • Select how many lines to fetch (how much time each covers depends on your log volume):
      • 50 lines: Quick spot check
      • 100 lines: Recent activity
      • 500 lines: Broader context
      • 1000 lines: Maximum available history
  2. Click Fetch Logs

    • Wait for logs to load
    • Results appear in viewer
  3. View Results

    • See log entries with timestamps
    • Each line shows log level and message

Log Format

[Timestamp] [Pod Name] [Level] Message
[2024-01-15 14:32:45] [api-server-xyz] [INFO] Request processed
[2024-01-15 14:32:46] [api-server-xyz] [ERROR] Database connection failed
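Because every field in this format is bracketed, exported logs are easy to split mechanically. A minimal sketch (the sample lines below are illustrative, not real output) that tallies entries per level with awk:

```shell
# Write a few sample lines in the bracketed format shown above.
cat > sample.log <<'EOF'
[2024-01-15 14:32:45] [api-server-xyz] [INFO] Request processed
[2024-01-15 14:32:46] [api-server-xyz] [ERROR] Database connection failed
[2024-01-15 14:32:47] [api-server-xyz] [INFO] Retry scheduled
EOF

# Split on '[' and ']' so the sixth field is the log level, then tally.
counts=$(awk -F'[][]' '{print $6}' sample.log | sort | uniq -c | sort -rn)
echo "$counts"
```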

Streaming Logs

Watch logs in real-time as they're generated:

Start Streaming

  1. Select Application

    • Choose the app to monitor
    • Only one stream at a time
  2. Click Stream Logs

    • Live logs start appearing
    • New entries appear at bottom
    • Stream indicator shows status

Streaming Features

Auto-Scroll:

  • Toggle "Auto-scroll" on/off
  • When on: View jumps to the newest entry
  • When off: View stays at the current position

Search:

  • Type in search box
  • Filters logs in real-time
  • Shows matching entries
  • Case-insensitive

Filter by Level:

  • All Levels: Show everything
  • Error: Only error messages
  • Warn: Warnings and errors
  • Info: Info and above
  • Debug: All messages
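The same cumulative behavior (each level also shows everything more severe) can be approximated offline on an exported file with grep; the sample lines below are illustrative:

```shell
# Sample exported log lines (illustrative).
cat > app.log <<'EOF'
[ERROR] Database connection failed
[WARN] Connection pool at 90% capacity
[INFO] Request processed
[DEBUG] Cache lookup took 2ms
EOF

# "Warn" filter: warnings plus everything more severe.
warn_and_up=$(grep -E '\[(WARN|ERROR)\]' app.log)
echo "$warn_and_up"
```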

Stop Streaming

  1. Click Stop Stream button
  2. Live updates stop
  3. View your captured logs
  4. Can export or analyze

Filtering and Searching

Search Logs

Find specific messages:

  1. Enter search term

    • Type what you're looking for
    • Search is real-time
  2. Results update

    • Only matching logs shown
    • Count shows matches found
  3. Clear search

    • Delete search text
    • All logs appear again
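The viewer's case-insensitive search and match count can be reproduced on an exported file with grep's -i and -c flags (the sample data below is illustrative):

```shell
# Sample exported log lines (illustrative).
cat > app.log <<'EOF'
[ERROR] Database connection failed
[INFO] Database migration complete
[INFO] Request processed
EOF

# -i: case-insensitive match; -c: count matching lines.
matches=$(grep -i 'database' app.log)
count=$(grep -ic 'database' app.log)
echo "$count"
```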

Filter by Level

Show only certain severity:

ERROR: Application errors
└─ Indicates something failed

WARN: Warnings
└─ Indicates potential issue

INFO: Information messages
└─ General status updates

DEBUG: Debug information
└─ Detailed technical info

Example Searches

Find database errors:

Search: "database"
Shows: All lines mentioning database

Find timeout errors:

Search: "timeout"
Shows: All timeout-related errors

Find a user's activity:

Search: "[email protected]"
Shows: Everything that user did

Exporting Logs

Save logs for analysis or archival:

Export Options

Export as Text (.txt)

  • Plain text format
  • Easy to read
  • Good for sharing
  • Use for: Documentation, emails

Export as JSON (.json)

  • Structured format
  • Includes metadata
  • Machine-readable
  • Use for: Analysis tools, automation
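As a sketch, a JSON export might hold one object per entry; the field names below are assumptions, so check an actual export for the real schema. Even without a JSON tool, ERROR entries can be pulled out with grep:

```shell
# Hypothetical JSON Lines export; field names are illustrative.
cat > logs.json <<'EOF'
{"timestamp":"2024-01-15T14:32:45Z","pod":"api-server-xyz","level":"INFO","message":"Request processed"}
{"timestamp":"2024-01-15T14:32:46Z","pod":"api-server-xyz","level":"ERROR","message":"Database connection failed"}
EOF

# Grab only the ERROR entries for a quick offline triage.
errors=$(grep '"level":"ERROR"' logs.json)
echo "$errors"
```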

How to Export

  1. Load or filter logs

    • Fetch or stream logs first
    • Filter to what you want
  2. Click Export

    • Choose format (TXT or JSON)
    • File downloads automatically
  3. Use exported logs

    • Analyze offline
    • Share with team
    • Import to analysis tools
    • Archive for compliance

AI-Powered Log Analysis

Use AI to automatically analyze logs and find issues.

What AI Analysis Does

The AI analyzes your logs to:

  • Detect Issues: Find errors and problems
  • Identify Patterns: Spot recurring issues
  • Provide Recommendations: Suggest fixes
  • Explain Problems: Describe what went wrong

Using AI Analysis

Step 1: Prepare Logs

Option A: Use Current Pod Logs

  1. Toggle "Use current pod logs" ON
  2. Make sure logs are loaded
  3. Shows how many entries will be analyzed

Option B: Paste Logs Manually

  1. Toggle "Use current pod logs" OFF
  2. Paste your logs in the text area
  3. Any log format is fine

Step 2: Run Analysis

  1. Click Analyze with AI
  2. Wait for analysis (usually 10-30 seconds)
  3. Hourglass icon shows progress
  4. Results appear when ready

Step 3: Review Results

AI provides:

  • Summary: What happened overall
  • Issues Found: Specific problems detected
  • Severity: How serious each issue is
  • Recommendations: How to fix

Understanding AI Analysis Results

Analysis Summary

Brief overview of what AI found:

"Application experienced 5 errors in the last hour, 
mostly related to database timeouts. Performance
degraded after 14:30 UTC."

Issues Detected

Specific problems found:

Issue                       | Severity | Description
Database Connection Timeout | High     | Could not connect to database
Memory Leak                 | Medium   | Memory usage growing over time
Slow Query                  | High     | Query taking 5+ seconds

Recommendations

How to fix each issue:

Database Connection Timeout
Recommendations:
1. Increase database connection pool size
2. Check database server health
3. Verify network connectivity
4. Review query timeouts

Patterns Found

Recurring issues and trends:

Error Pattern 1: Database timeouts spike at 14:00-15:00 UTC
→ Coincides with backup jobs
→ Recommendation: Schedule backups at off-peak hours

Error Pattern 2: Memory usage grows 100MB per hour
→ Suggests memory leak in application
→ Recommendation: Profile application memory usage

Interpreting AI Insights

Issue Severity Levels

Critical: 🔴

  • Application is down or failing
  • Immediate action required
  • Fix immediately

High: 🟠

  • Performance degraded
  • Users affected
  • Fix very soon

Medium: 🟡

  • Minor issues
  • Should be addressed
  • Fix when convenient

Low: 🔵

  • Informational
  • Good to know
  • Can defer

Common Troubleshooting Scenarios

Scenario 1: Application Keeps Crashing

Logs show:

Starting application...
OutOfMemoryError: Java heap space
Application terminated

AI Analysis suggests:

  • Insufficient memory allocated
  • Possible memory leak
  • Large data processing causing spike

Solutions:

  1. Increase pod memory limit
  2. Check for memory leaks in code
  3. Process data in smaller chunks
  4. Enable memory profiling
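Raising the pod memory limit (step 1) is typically done in the container spec of your Deployment; the values below are illustrative, not recommendations:

```yaml
# Container resources in a Deployment spec (illustrative values).
resources:
  requests:
    memory: "512Mi"   # amount the scheduler reserves for the pod
  limits:
    memory: "1Gi"     # container is OOM-killed if it exceeds this
```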

Scenario 2: Database Errors

Logs show:

ERROR: Cannot connect to database
Connection timeout after 30s
Failed to execute query

AI Analysis suggests:

  • Database server unreachable
  • Network connectivity issues
  • Connection pool exhausted

Solutions:

  1. Verify database is running
  2. Check firewall rules
  3. Increase connection pool
  4. Review network configuration

Scenario 3: High Latency

Logs show:

Request received
Processing...
Completed in 5000ms (expected: 100ms)

AI Analysis suggests:

  • Slow queries
  • External API delays
  • Resource contention

Solutions:

  1. Optimize database queries
  2. Add caching
  3. Use CDN for external assets
  4. Scale cluster resources

Best Practices

1. Regular Monitoring

  • Check logs daily
  • Monitor trends
  • Act on warnings
  • Review errors

2. Use Streaming for Live Issues

  • Stream when troubleshooting
  • Watch real-time behavior
  • Easier than fetching later
  • See issue as it happens

3. Use AI Analysis Regularly

  • Run weekly analysis
  • Track recurring issues
  • Monitor for patterns
  • Proactive problem finding

4. Export and Archive

  • Export important logs
  • Keep for compliance
  • Analyze historical patterns
  • Document issues

5. Set Up Alerts

  • Alert on error rates
  • Alert on specific errors
  • Alert on performance degradation
  • Set up escalation

Log Retention

How Long Are Logs Kept?

  • Real-time logs: 7 days
  • Archived logs: 30 days
  • Compliance logs: 1 year (if enabled)

Exporting Before Expiration

If you need logs longer:

  1. Export before expiration date
  2. Store in your own system
  3. Archive as needed
  4. Use for analysis later

Limitations

Maximum Log Entries

  • Fetch: Up to 1000 lines
  • Stream: Keeps last 500 lines
  • Export: Based on what's loaded

Network Requirements

  • Stable internet connection
  • 1+ Mbps bandwidth for streaming
  • A current version of a major browser (streaming performance varies by browser)

App Requirements

  • Application must have agent deployed
  • Agent must have Logging capability
  • Application must output logs to stdout/stderr

Next Steps

  1. View Security Findings - Check cluster security
  2. Manage Resources - Monitor cluster health
  3. Deploy Applications - Use your cluster

Support

Questions about logs?

  • Check the scenarios above
  • Review AI analysis suggestions
  • Contact support: [email protected]

Logs not appearing?

  • Verify agent is deployed
  • Check agent status
  • Ensure Logging capability enabled
  • Verify application outputs logs