K8s Logs API Reference - Complete Endpoint Documentation

This guide provides comprehensive documentation for the Kubernetes Logs management API endpoints, including request/response formats and integration examples.

Base URL#

https://api.nife.io/v1/k8s/logs

All endpoints are prefixed with this base URL.

Authentication#

All requests require authentication via Bearer token:

Authorization: Bearer YOUR_ACCESS_TOKEN

Obtain access tokens from the Access Tokens page in the UI.

API Endpoints Overview#

Collection Configuration (7 endpoints)#

Manage which logs to collect from Kubernetes clusters.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/config` | GET | List collection configurations |
| `/config` | POST | Create collection configuration |
| `/config` | PUT | Update configuration |
| `/config` | DELETE | Delete configuration |
| `/config/bulk-update` | POST | Bulk update multiple configs |
| `/config/disable` | POST | Disable collection for namespace |
| `/config/disable/cluster` | POST | Disable all collection for cluster |

Log Retrieval (6 endpoints)#

Retrieve and search logs from your clusters.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/` | GET | Get logs with filters |
| `/search` | POST | Advanced log search |
| `/search/logql` | POST | Search using LogQL |
| `/namespaces` | GET | List namespaces |
| `/pods` | GET | List pods in namespace |
| `/containers` | GET | List containers in pod |

Log Collection (4 endpoints)#

Manage on-demand and continuous log collection jobs.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/collect` | POST | Start one-time collection |
| `/collect/continuous` | POST | Start continuous collection |
| `/collect/bulk` | POST | Collect from all pods |
| `/collect/jobs` | GET | List collection jobs |

Archive Management (6 endpoints)#

Create and manage log archival policies.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/archive/policies` | GET | List archive policies |
| `/archive/policies` | POST | Create archive policy |
| `/archive/policies` | PUT | Update policy |
| `/archive/policies` | DELETE | Delete policy |
| `/archive/start` | POST | Start archive job |
| `/archive/jobs` | GET | List archive jobs |

S3 Storage (4 endpoints)#

Access logs stored in S3.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/s3/metadata` | GET | Get S3 log metadata |
| `/s3/content` | GET | Get log content from S3 |
| `/s3/stats` | GET | Get S3 storage statistics |
| `/storage/info` | GET | Get storage configuration |

Metrics & Monitoring (4 endpoints)#

Monitor log collection and system health.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/metrics` | GET | Get general log metrics |
| `/clusters/{clusterId}/metrics` | GET | Get cluster-specific metrics |
| `/stats` | GET | Get log statistics |
| `/health` | GET | Get system health |

Cluster Management (6 endpoints)#

Manage cluster connections and configuration.

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/clusters/status` | GET | Get cluster status |
| `/clusters/configs` | GET | Get cluster configurations |
| `/clusters/configs` | POST | Create cluster config |
| `/clusters/configs` | PUT | Update cluster config |
| `/clusters/configs` | DELETE | Delete cluster config |
| `/clusters/refresh` | POST | Refresh cluster connections |

Detailed Endpoint Reference#

Create Collection Configuration#

POST /config

Create a new log collection configuration for a Kubernetes cluster.

Request Body:

```json
{
  "cluster_id": "cluster-123",
  "namespace": "production",
  "app_filter": "api-server",
  "log_levels": ["info", "warn", "error"],
  "collection_interval": 300,
  "exclude_namespaces": ["kube-system", "kube-public"],
  "max_logs_per_collection": 1000,
  "is_enabled": true
}
```

Response (201):

```json
{
  "success": true,
  "data": {
    "id": "config-456",
    "user_id": "user-789",
    "org_id": "org-012",
    "cluster_id": "cluster-123",
    "namespace": "production",
    "app_filter": "api-server",
    "log_levels": ["info", "warn", "error"],
    "collection_interval": 300,
    "exclude_namespaces": ["kube-system", "kube-public"],
    "max_logs_per_collection": 1000,
    "is_enabled": true,
    "created_at": "2024-01-06T10:30:00Z",
    "updated_at": "2024-01-06T10:30:00Z"
  }
}
```

List Collection Configurations#

GET /config

List all collection configurations, optionally filtered by cluster or namespace.

Query Parameters:

?cluster_id=cluster-123&namespace=production

Response (200):

```json
{
  "success": true,
  "data": {
    "configs": [
      {
        "id": "config-456",
        "cluster_id": "cluster-123",
        "namespace": "production",
        "app_filter": "api-server",
        "is_enabled": true,
        "created_at": "2024-01-06T10:30:00Z"
      }
    ]
  }
}
```

Update Collection Configuration#

PUT /config

Update an existing collection configuration.

Query Parameters:

?cluster_id=cluster-123

Request Body:

```json
{
  "id": "config-456",
  "cluster_id": "cluster-123",
  "namespace": "production",
  "app_filter": "api-gateway",
  "log_levels": ["warn", "error"],
  "collection_interval": 600,
  "is_enabled": true
}
```

Response (200):

```json
{
  "success": true,
  "data": {
    "id": "config-456",
    "cluster_id": "cluster-123",
    "namespace": "production",
    "is_enabled": true,
    "updated_at": "2024-01-06T11:00:00Z"
  }
}
```

Bulk Update Configurations#

POST /config/bulk-update

Enable or disable multiple configurations in a single request.

Request Body:

```json
{
  "configs": [
    {
      "cluster_id": "cluster-123",
      "namespace": "production",
      "is_enabled": true
    },
    {
      "cluster_id": "cluster-123",
      "namespace": "staging",
      "is_enabled": false
    }
  ]
}
```

Response (200):

```json
{
  "success": true,
  "data": {
    "updated": 2,
    "message": "2 configurations updated successfully"
  }
}
```

Search Logs#

POST /search

Advanced log search with flexible criteria.

Request Body:

```json
{
  "cluster_id": "cluster-123",
  "namespace": "production",
  "pod": "api-server-1",
  "container": "api",
  "level": "error",
  "start_time": "2024-01-06T00:00:00Z",
  "end_time": "2024-01-06T23:59:59Z",
  "limit": 100,
  "query": "database connection"
}
```

Response (200):

```json
{
  "success": true,
  "data": {
    "logs": [
      {
        "id": "log-001",
        "timestamp": "2024-01-06T10:35:22Z",
        "level": "error",
        "namespace": "production",
        "pod": "api-server-1",
        "container": "api",
        "message": "Connection timeout to database"
      }
    ],
    "total_count": 45,
    "has_more": true
  }
}
```

Search with LogQL#

POST /search/logql

Execute LogQL queries for advanced log analysis.

Request Body:

```json
{
  "cluster_id": "cluster-123",
  "query": "{namespace=\"production\"} |= \"error\" | json | status >= 500",
  "start_time": "2024-01-06T00:00:00Z",
  "end_time": "2024-01-06T23:59:59Z",
  "limit": 100
}
```

Response (200):

```json
{
  "success": true,
  "data": {
    "logs": [],
    "total_count": 23
  }
}
```

List Namespaces#

GET /namespaces

Get all available namespaces in a cluster.

Query Parameters:

?cluster_id=cluster-123

Response (200):

```json
{
  "success": true,
  "data": {
    "namespaces": [
      "production",
      "staging",
      "development",
      "default",
      "kube-system"
    ],
    "count": 5
  }
}
```
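The three listing endpoints (`/namespaces`, `/pods`, `/containers`) compose naturally into a full cluster walk. A minimal Python sketch, assuming an authenticated `get(path, params)` helper that returns the response's `data` object; the `pods` and `containers` response keys and the `pod` query parameter are assumptions modeled on the `namespaces` response above, so verify them against your deployment:

```python
def walk_cluster(get, cluster_id):
    """Yield (namespace, pod, container) triples for a cluster.

    `get(path, params)` is a stand-in for an authenticated GET request
    that returns the "data" object of the response. The "pods" and
    "containers" response keys are assumptions, not documented fields.
    """
    for ns in get("/namespaces", {"cluster_id": cluster_id})["namespaces"]:
        pods = get("/pods", {"cluster_id": cluster_id, "namespace": ns})
        for pod in pods.get("pods", []):
            containers = get(
                "/containers",
                {"cluster_id": cluster_id, "namespace": ns, "pod": pod},
            )
            for container in containers.get("containers", []):
                yield ns, pod, container
```

This is handy for building scoped `/search` requests without hard-coding pod names.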

Start Log Collection#

POST /collect

Start an immediate log collection from specific pods.

Request Body:

```json
{
  "app_id": "app-123",
  "namespace": "production",
  "pods": ["api-server-1", "api-server-2"],
  "containers": ["api"],
  "since": "1h",
  "tail": 1000
}
```

Response (201):

```json
{
  "success": true,
  "data": {
    "job_id": "job-789",
    "message": "Collection job started successfully"
  }
}
```
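Because `/collect` returns a `job_id` rather than the logs themselves, callers typically poll `/collect/jobs` until the job settles. A hedged sketch in Python: the `job_id` query parameter and the terminal `status` values (`completed`, `failed`) are assumptions, since the endpoint table only documents the routes.

```python
import time

def collect_and_wait(post, get, request, poll_interval=5, sleep=time.sleep):
    """Start a one-time collection, then poll until the job settles.

    `post(path, body)` and `get(path, params)` stand in for authenticated
    requests returning the response's "data" object. The terminal status
    values checked here are assumptions, not documented behavior.
    """
    job_id = post("/collect", request)["job_id"]
    while True:
        jobs = get("/collect/jobs", {"job_id": job_id}).get("jobs", [])
        job = next((j for j in jobs if j.get("id") == job_id), None)
        if job and job.get("status") in ("completed", "failed"):
            return job
        sleep(poll_interval)
```

Injecting `sleep` keeps the helper testable and lets callers plug in async-friendly waiting.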

Create Archive Policy#

POST /archive/policies

Create an automatic log archival policy.

Request Body:

```json
{
  "name": "Production 90-Day Archive",
  "retention_days": 90,
  "compression_type": "gzip",
  "storage_location": "s3://my-bucket/logs",
  "is_active": true,
  "app_id": "app-123",
  "namespace": "production"
}
```

Response (201):

```json
{
  "success": true,
  "data": {
    "id": "policy-111",
    "name": "Production 90-Day Archive",
    "retention_days": 90,
    "compression_type": "gzip",
    "is_active": true,
    "created_at": "2024-01-06T10:30:00Z"
  }
}
```

Get Archive Policies#

GET /archive/policies

List all archive policies.

Response (200):

```json
{
  "success": true,
  "data": {
    "policies": [
      {
        "id": "policy-111",
        "name": "Production 90-Day Archive",
        "retention_days": 90,
        "is_active": true
      }
    ]
  }
}
```

Start Archive Job#

POST /archive/start

Manually trigger an archive job for a policy.

Request Body:

```json
{
  "policy_id": "policy-111"
}
```

Response (201):

```json
{
  "success": true,
  "data": {
    "job_id": "archive-job-222"
  }
}
```

Get Archive Jobs#

GET /archive/jobs

List archive jobs.

Query Parameters:

?policy_id=policy-111&status=completed

Response (200):

```json
{
  "success": true,
  "data": {
    "jobs": [
      {
        "id": "archive-job-222",
        "policy_id": "policy-111",
        "status": "completed",
        "start_time": "2024-01-06T10:00:00Z",
        "end_time": "2024-01-06T10:45:00Z",
        "logs_archived": 150000,
        "size_bytes": 456789000,
        "error": null
      }
    ]
  }
}
```

Get S3 Log Metadata#

GET /s3/metadata

List log files stored in S3.

Query Parameters:

?cluster_id=cluster-123&namespace=production&start_time=2024-01-01T00:00:00Z

Response (200):

```json
{
  "success": true,
  "data": {
    "metadata": [
      {
        "id": "s3-meta-001",
        "cluster_id": "cluster-123",
        "namespace": "production",
        "pod_name": "api-server-1",
        "container_name": "api",
        "s3_key": "logs/production/api-server-1-2024-01-06.json.gz",
        "s3_bucket": "my-logs-bucket",
        "start_time": "2024-01-06T00:00:00Z",
        "end_time": "2024-01-06T23:59:59Z",
        "log_count": 10000,
        "size_bytes": 2048000,
        "compressed": true,
        "created_at": "2024-01-07T00:15:00Z"
      }
    ],
    "count": 1,
    "storage": "s3"
  }
}
```
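The metadata listing pairs with `/s3/content` for retrieval: list the files first, then fetch each one by key. A minimal sketch, assuming an authenticated `get(path, params)` helper and assuming `/s3/content` accepts the `s3_key` field returned above (verify the parameter name against your deployment):

```python
def fetch_archived_logs(get, cluster_id, namespace):
    """Yield (s3_key, content) pairs for a namespace's archived logs.

    `get(path, params)` stands in for an authenticated GET returning the
    response's "data" object; passing s3_key to /s3/content is an
    assumption based on the metadata fields above.
    """
    listing = get("/s3/metadata", {"cluster_id": cluster_id, "namespace": namespace})
    for entry in listing.get("metadata", []):
        content = get("/s3/content", {"s3_key": entry["s3_key"]})
        yield entry["s3_key"], content
```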

Get S3 Statistics#

GET /s3/stats

Get aggregate statistics about S3-stored logs.

Response (200):

```json
{
  "success": true,
  "data": {
    "total_files": 450,
    "total_size": 5368709120,
    "total_logs": 45000000,
    "oldest_log": "2023-07-01T00:00:00Z",
    "newest_log": "2024-01-06T23:59:59Z",
    "by_cluster": {
      "cluster-123": 250,
      "cluster-456": 200
    },
    "by_namespace": {
      "production": 300,
      "staging": 150
    }
  }
}
```

Get General Metrics#

GET /metrics

Get log collection metrics.

Query Parameters:

?app_id=app-123&namespace=production

Response (200):

```json
{
  "success": true,
  "data": {
    "total_logs": 1500000,
    "logs_per_hour": 62500,
    "avg_log_size": 512,
    "storage_used": "768 GB",
    "by_level": {
      "error": 15000,
      "warn": 125000,
      "info": 1200000,
      "debug": 160000
    },
    "by_namespace": {
      "production": 1200000,
      "staging": 300000
    }
  }
}
```

Get Cluster Status#

GET /clusters/status

Get status of all connected clusters.

Response (200):

```json
{
  "success": true,
  "data": {
    "clusters": [
      {
        "cluster_id": "cluster-123",
        "name": "Production Cluster",
        "status": "healthy",
        "last_health_check": "2024-01-06T11:30:00Z",
        "log_collector_status": "active",
        "metrics": {
          "pods_monitored": 150,
          "logs_per_minute": 5000
        }
      }
    ]
  }
}
```

Get System Health#

GET /health

Get overall system health status.

Response (200):

```json
{
  "success": true,
  "data": {
    "status": "healthy",
    "components": {
      "log_collector": {
        "status": "active",
        "last_check": "2024-01-06T11:30:00Z"
      },
      "database": {
        "status": "connected",
        "last_check": "2024-01-06T11:30:00Z"
      },
      "storage": {
        "status": "available",
        "last_check": "2024-01-06T11:30:00Z"
      }
    },
    "uptime": "45d 12h 30m"
  }
}
```

Error Handling#

Common Status Codes#

| Code | Description |
| --- | --- |
| 200 | Success |
| 201 | Created |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 429 | Rate Limited |
| 500 | Internal Server Error |

Error Response Format#

```json
{
  "success": false,
  "error": {
    "code": "INVALID_CLUSTER_ID",
    "message": "Cluster not found",
    "details": "The specified cluster ID does not exist or you don't have access"
  }
}
```
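Since every response carries the same `success` envelope, a single helper can normalize error handling across endpoints. A minimal sketch in Python against the format shown above (the `ApiError` class name is our own, not part of any SDK):

```python
class ApiError(RuntimeError):
    """Raised when the API returns success: false."""

def unwrap(payload):
    """Return the "data" object, or raise ApiError with code and message."""
    if payload.get("success"):
        return payload.get("data")
    err = payload.get("error", {})
    raise ApiError(f"{err.get('code', 'UNKNOWN')}: {err.get('message', 'no message')}")
```

Typical usage: `logs = unwrap(response.json())["logs"]`.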

Rate Limiting#

API requests are rate-limited:

  • 100 requests per minute for standard endpoints
  • 10 requests per minute for heavy operations (search, archive)

Rate limit headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1704545400
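The `X-RateLimit-*` headers are enough to implement polite backoff: retry immediately while budget remains, otherwise wait until the reset timestamp. A minimal sketch; the 60-second fallback for missing headers is a conservative assumption, not documented behavior:

```python
import time

def seconds_until_retry(headers, now=None):
    """How long to wait before the next request, from rate-limit headers.

    Uses the X-RateLimit-Remaining / X-RateLimit-Reset headers shown
    above; falls back to 60 seconds if the reset header is missing
    (an assumption, not documented behavior).
    """
    now = time.time() if now is None else now
    if int(headers.get("X-RateLimit-Remaining", 0)) > 0:
        return 0.0  # budget left, no need to wait
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return 60.0  # conservative fallback
    return max(0.0, float(reset) - now)
```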

Integration Examples#

Python Example#

```python
import requests
from datetime import datetime, timedelta, timezone

BASE_URL = "https://api.nife.io/v1/k8s/logs"
TOKEN = "your_access_token"
headers = {"Authorization": f"Bearer {TOKEN}"}

# Search for error logs from the last 24 hours (UTC, matching the API's timestamps)
response = requests.post(
    f"{BASE_URL}/search",
    headers=headers,
    json={
        "cluster_id": "cluster-123",
        "namespace": "production",
        "level": "error",
        "start_time": (datetime.now(timezone.utc) - timedelta(days=1)).isoformat(),
        "limit": 100,
    },
)
response.raise_for_status()
logs = response.json()["data"]["logs"]
print(f"Found {len(logs)} error logs")
```

JavaScript Example#

```javascript
const axios = require('axios');

const BASE_URL = 'https://api.nife.io/v1/k8s/logs';
const TOKEN = 'your_access_token';

const client = axios.create({
  baseURL: BASE_URL,
  headers: { Authorization: `Bearer ${TOKEN}` },
});

// Create an archive policy
async function createArchivePolicy() {
  try {
    const response = await client.post('/archive/policies', {
      name: 'Production Archive',
      retention_days: 90,
      compression_type: 'gzip',
      storage_location: 's3://bucket/logs',
      is_active: true,
    });
    console.log('Policy created:', response.data);
  } catch (error) {
    // error.response is undefined for network-level failures
    console.error('Error:', error.response ? error.response.data : error.message);
  }
}

createArchivePolicy();
```

cURL Examples#

```bash
# Search logs
curl -X POST https://api.nife.io/v1/k8s/logs/search \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_id": "cluster-123",
    "namespace": "production",
    "level": "error",
    "limit": 100
  }'

# Create collection config
curl -X POST https://api.nife.io/v1/k8s/logs/config \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_id": "cluster-123",
    "namespace": "production",
    "log_levels": ["error", "warn"],
    "is_enabled": true
  }'
```

Best Practices#

  1. Always use specific filters - Reduce data transfer and improve performance
  2. Set reasonable limits - Start with limit=100, increase only if needed
  3. Use date ranges - Don't retrieve entire log history at once
  4. Handle pagination - Check has_more flag in responses
  5. Cache credentials - Use stored access tokens, don't regenerate frequently
  6. Monitor rate limits - Check headers and implement backoff
  7. Compress archive logs - Always enable compression to reduce costs
  8. Regular cleanup - Archive old logs to reduce database load
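Practice 4 (pagination) deserves a sketch: keep issuing `/search` requests until `has_more` is false. The `offset` field below is a hypothetical pagination parameter; the documented response only exposes `has_more`, so check how your API version expects the next page to be requested.

```python
def fetch_all_logs(search, request, page_size=100):
    """Follow has_more to collect every matching log.

    `search(body)` stands in for a POST to /search returning the
    response's "data" object. The "offset" field is a hypothetical
    pagination parameter, not a documented one.
    """
    logs, offset = [], 0
    while True:
        page = search({**request, "limit": page_size, "offset": offset})
        logs.extend(page.get("logs", []))
        if not page.get("has_more"):
            return logs
        offset += page_size
```

Combined with a date-range filter (practice 3), this keeps each request small while still retrieving the full result set.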

For more information, see the User Guide.