# K8s Logs API Reference
This guide provides comprehensive documentation for the Kubernetes Logs management API endpoints, including request/response formats and integration examples.
## Base URL

```
https://api.nife.io/v1/k8s/logs
```

All endpoints are prefixed with this base URL.
## Authentication

All requests require authentication via a Bearer token:

```
Authorization: Bearer YOUR_ACCESS_TOKEN
```

Obtain access tokens from the Access Tokens page in the UI.
## API Endpoints Overview

### Collection Configuration (7 endpoints)

Manage which logs to collect from Kubernetes clusters.

| Endpoint | Method | Purpose |
|---|---|---|
| /config | GET | List collection configurations |
| /config | POST | Create collection configuration |
| /config | PUT | Update configuration |
| /config | DELETE | Delete configuration |
| /config/bulk-update | POST | Bulk update multiple configs |
| /config/disable | POST | Disable collection for namespace |
| /config/disable/cluster | POST | Disable all collection for cluster |
### Log Retrieval (6 endpoints)

Retrieve and search logs from your clusters.

| Endpoint | Method | Purpose |
|---|---|---|
| / | GET | Get logs with filters |
| /search | POST | Advanced log search |
| /search/logql | POST | Search using LogQL |
| /namespaces | GET | List namespaces |
| /pods | GET | List pods in namespace |
| /containers | GET | List containers in pod |
### Log Collection (4 endpoints)

Manage on-demand and continuous log collection jobs.

| Endpoint | Method | Purpose |
|---|---|---|
| /collect | POST | Start one-time collection |
| /collect/continuous | POST | Start continuous collection |
| /collect/bulk | POST | Collect from all pods |
| /collect/jobs | GET | List collection jobs |
### Archive Management (6 endpoints)

Create and manage log archival policies.

| Endpoint | Method | Purpose |
|---|---|---|
| /archive/policies | GET | List archive policies |
| /archive/policies | POST | Create archive policy |
| /archive/policies | PUT | Update policy |
| /archive/policies | DELETE | Delete policy |
| /archive/start | POST | Start archive job |
| /archive/jobs | GET | List archive jobs |
### S3 Storage (4 endpoints)

Access logs stored in S3.

| Endpoint | Method | Purpose |
|---|---|---|
| /s3/metadata | GET | Get S3 log metadata |
| /s3/content | GET | Get log content from S3 |
| /s3/stats | GET | Get S3 storage statistics |
| /storage/info | GET | Get storage configuration |
### Metrics & Monitoring (4 endpoints)

Monitor log collection and system health.

| Endpoint | Method | Purpose |
|---|---|---|
| /metrics | GET | Get general log metrics |
| /clusters/{clusterId}/metrics | GET | Get cluster-specific metrics |
| /stats | GET | Get log statistics |
| /health | GET | Get system health |
### Cluster Management (6 endpoints)

Manage cluster connections and configuration.

| Endpoint | Method | Purpose |
|---|---|---|
| /clusters/status | GET | Get cluster status |
| /clusters/configs | GET | Get cluster configurations |
| /clusters/configs | POST | Create cluster config |
| /clusters/configs | PUT | Update cluster config |
| /clusters/configs | DELETE | Delete cluster config |
| /clusters/refresh | POST | Refresh cluster connections |
## Detailed Endpoint Reference
### Create Collection Configuration

`POST /config`

Create a new log collection configuration for a Kubernetes cluster.

**Request Body:**

```json
{
  "cluster_id": "cluster-123",
  "namespace": "production",
  "app_filter": "api-server",
  "log_levels": ["info", "warn", "error"],
  "collection_interval": 300,
  "exclude_namespaces": ["kube-system", "kube-public"],
  "max_logs_per_collection": 1000,
  "is_enabled": true
}
```

**Response (201):**

```json
{
  "success": true,
  "data": {
    "id": "config-456",
    "user_id": "user-789",
    "org_id": "org-012",
    "cluster_id": "cluster-123",
    "namespace": "production",
    "app_filter": "api-server",
    "log_levels": ["info", "warn", "error"],
    "collection_interval": 300,
    "exclude_namespaces": ["kube-system", "kube-public"],
    "max_logs_per_collection": 1000,
    "is_enabled": true,
    "created_at": "2024-01-06T10:30:00Z",
    "updated_at": "2024-01-06T10:30:00Z"
  }
}
```
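The request body above can be assembled and validated in Python before sending it. This is an illustrative sketch only: the helper name and the client-side validation rules (allowed level names, a 60-second minimum interval) are assumptions for the example, not limits enforced by the API.

```python
# Illustrative helper for building a POST /config request body.
# Field names mirror the documented request; validation rules are assumptions.
ALLOWED_LEVELS = {"debug", "info", "warn", "error"}

def build_config_payload(cluster_id, namespace, log_levels,
                         collection_interval=300, app_filter=None,
                         exclude_namespaces=None, max_logs=1000, enabled=True):
    """Return a JSON-serializable body for POST /config."""
    bad = set(log_levels) - ALLOWED_LEVELS
    if bad:
        raise ValueError(f"unsupported log levels: {sorted(bad)}")
    if collection_interval < 60:
        raise ValueError("collection_interval must be at least 60 seconds")
    payload = {
        "cluster_id": cluster_id,
        "namespace": namespace,
        "log_levels": list(log_levels),
        "collection_interval": collection_interval,
        "max_logs_per_collection": max_logs,
        "is_enabled": enabled,
    }
    # Optional fields are omitted entirely rather than sent as null
    if app_filter:
        payload["app_filter"] = app_filter
    if exclude_namespaces:
        payload["exclude_namespaces"] = list(exclude_namespaces)
    return payload

payload = build_config_payload(
    "cluster-123", "production", ["info", "warn", "error"],
    app_filter="api-server",
    exclude_namespaces=["kube-system", "kube-public"],
)
```

The resulting `payload` can be passed directly as the `json=` argument of an HTTP client such as `requests`.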
### List Collection Configurations

`GET /config`

List all collection configurations, optionally filtered by cluster or namespace.

**Query Parameters:**

```
?cluster_id=cluster-123&namespace=production
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "configs": [
      {
        "id": "config-456",
        "cluster_id": "cluster-123",
        "namespace": "production",
        "app_filter": "api-server",
        "is_enabled": true,
        "created_at": "2024-01-06T10:30:00Z"
      }
    ]
  }
}
```
### Update Collection Configuration

`PUT /config`

Update an existing collection configuration.

**Query Parameters:**

```
?cluster_id=cluster-123
```

**Request Body:**

```json
{
  "id": "config-456",
  "cluster_id": "cluster-123",
  "namespace": "production",
  "app_filter": "api-gateway",
  "log_levels": ["warn", "error"],
  "collection_interval": 600,
  "is_enabled": true
}
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "id": "config-456",
    "cluster_id": "cluster-123",
    "namespace": "production",
    "is_enabled": true,
    "updated_at": "2024-01-06T11:00:00Z"
  }
}
```
### Bulk Update Configurations

`POST /config/bulk-update`

Enable or disable multiple configurations in a single request.

**Request Body:**

```json
{
  "configs": [
    {
      "cluster_id": "cluster-123",
      "namespace": "production",
      "is_enabled": true
    },
    {
      "cluster_id": "cluster-123",
      "namespace": "staging",
      "is_enabled": false
    }
  ]
}
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "updated": 2,
    "message": "2 configurations updated successfully"
  }
}
```
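When toggling many namespaces at once, the `configs` array can be generated from a simple mapping. A minimal sketch; `bulk_update_body` is a hypothetical helper, not part of any SDK:

```python
# Build the /config/bulk-update body from a namespace -> enabled-flag mapping.
def bulk_update_body(cluster_id, flags):
    """flags: dict mapping namespace name to desired is_enabled value."""
    return {"configs": [
        {"cluster_id": cluster_id, "namespace": ns, "is_enabled": enabled}
        # sorted() gives a deterministic order for easier diffing/logging
        for ns, enabled in sorted(flags.items())
    ]}

body = bulk_update_body("cluster-123", {"production": True, "staging": False})
```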
### Search Logs

`POST /search`

Advanced log search with flexible criteria.

**Request Body:**

```json
{
  "cluster_id": "cluster-123",
  "namespace": "production",
  "pod": "api-server-1",
  "container": "api",
  "level": "error",
  "start_time": "2024-01-06T00:00:00Z",
  "end_time": "2024-01-06T23:59:59Z",
  "limit": 100,
  "query": "database connection"
}
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "logs": [
      {
        "id": "log-001",
        "timestamp": "2024-01-06T10:35:22Z",
        "level": "error",
        "namespace": "production",
        "pod": "api-server-1",
        "container": "api",
        "message": "Connection timeout to database"
      }
    ],
    "total_count": 45,
    "has_more": true
  }
}
```
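The `logs` array is plain JSON and lends itself to client-side post-processing. This sketch tallies error logs by pod; a canned sample payload (shaped like the documented response) stands in for a live HTTP call:

```python
# Summarize a /search response client-side: count error logs per pod.
from collections import Counter

# In practice: response = requests.post(f"{BASE_URL}/search", ...).json()
response = {
    "success": True,
    "data": {
        "logs": [
            {"pod": "api-server-1", "level": "error",
             "message": "Connection timeout to database"},
            {"pod": "api-server-1", "level": "error",
             "message": "Connection refused"},
            {"pod": "api-server-2", "level": "error",
             "message": "Connection timeout to database"},
        ],
        "total_count": 45,
        "has_more": True,
    },
}

errors_by_pod = Counter(log["pod"] for log in response["data"]["logs"])
# has_more=true signals that results beyond `limit` exist; narrow the
# time window or raise `limit` to retrieve the remainder.
```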
### Search with LogQL

`POST /search/logql`

Execute LogQL queries for advanced log analysis.

**Request Body:**

```json
{
  "cluster_id": "cluster-123",
  "query": "{namespace=\"production\"} |= \"error\" | json | status >= 500",
  "start_time": "2024-01-06T00:00:00Z",
  "end_time": "2024-01-06T23:59:59Z",
  "limit": 100
}
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "logs": [],
    "total_count": 23
  }
}
```
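Building the LogQL string programmatically helps avoid quoting mistakes inside the JSON body. A minimal sketch covering only label matchers and a single line filter (a small subset of LogQL syntax; the helper name is illustrative):

```python
# Compose a LogQL query string for the "query" field of POST /search/logql.
def logql_query(labels, contains=None):
    """Build a LogQL stream selector with an optional |= line filter."""
    # Sort labels for a deterministic, cache-friendly query string
    selector = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    query = "{" + selector + "}"
    if contains:
        query += f' |= "{contains}"'
    return query

q = logql_query({"namespace": "production"}, contains="error")
```

Here `q` is `{namespace="production"} |= "error"`, matching the shape of the documented example query.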
### List Namespaces

`GET /namespaces`

Get all available namespaces in a cluster.

**Query Parameters:**

```
?cluster_id=cluster-123
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "namespaces": [
      "production",
      "staging",
      "development",
      "default",
      "kube-system"
    ],
    "count": 5
  }
}
```
### Start Log Collection

`POST /collect`

Start an immediate log collection from specific pods.

**Request Body:**

```json
{
  "app_id": "app-123",
  "namespace": "production",
  "pods": ["api-server-1", "api-server-2"],
  "containers": ["api"],
  "since": "1h",
  "tail": 1000
}
```

**Response (201):**

```json
{
  "success": true,
  "data": {
    "job_id": "job-789",
    "message": "Collection job started successfully"
  }
}
```
### Create Archive Policy

`POST /archive/policies`

Create an automatic log archival policy.

**Request Body:**

```json
{
  "name": "Production 90-Day Archive",
  "retention_days": 90,
  "compression_type": "gzip",
  "storage_location": "s3://my-bucket/logs",
  "is_active": true,
  "app_id": "app-123",
  "namespace": "production"
}
```

**Response (201):**

```json
{
  "success": true,
  "data": {
    "id": "policy-111",
    "name": "Production 90-Day Archive",
    "retention_days": 90,
    "compression_type": "gzip",
    "is_active": true,
    "created_at": "2024-01-06T10:30:00Z"
  }
}
```
### Get Archive Policies

`GET /archive/policies`

List all archive policies.

**Response (200):**

```json
{
  "success": true,
  "data": {
    "policies": [
      {
        "id": "policy-111",
        "name": "Production 90-Day Archive",
        "retention_days": 90,
        "is_active": true
      }
    ]
  }
}
```
### Start Archive Job

`POST /archive/start`

Manually trigger an archive job for a policy.

**Request Body:**

```json
{
  "policy_id": "policy-111"
}
```

**Response (201):**

```json
{
  "success": true,
  "data": {
    "job_id": "archive-job-222"
  }
}
```
### Get Archive Jobs

`GET /archive/jobs`

List archive jobs.

**Query Parameters:**

```
?policy_id=policy-111&status=completed
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "jobs": [
      {
        "id": "archive-job-222",
        "policy_id": "policy-111",
        "status": "completed",
        "start_time": "2024-01-06T10:00:00Z",
        "end_time": "2024-01-06T10:45:00Z",
        "logs_archived": 150000,
        "size_bytes": 456789000,
        "error": null
      }
    ]
  }
}
```
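Because each job reports a `status`, a manually started archive job can be awaited by polling this endpoint. In the sketch below, `fetch_jobs` stands in for the real `GET /archive/jobs` call, and treating `"failed"` as a terminal status alongside the documented `"completed"` is an assumption:

```python
# Poll a jobs listing until a specific job reaches a terminal status.
import time

def wait_for_job(fetch_jobs, job_id, poll_seconds=5, timeout=300):
    """fetch_jobs: callable returning a /archive/jobs-shaped response dict."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        jobs = {j["id"]: j for j in fetch_jobs()["data"]["jobs"]}
        job = jobs.get(job_id)
        if job and job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Example with a canned responder instead of a live endpoint:
sample = {"success": True, "data": {"jobs": [
    {"id": "archive-job-222", "policy_id": "policy-111",
     "status": "completed", "logs_archived": 150000}]}}
done = wait_for_job(lambda: sample, "archive-job-222")
```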
### Get S3 Log Metadata

`GET /s3/metadata`

List log files stored in S3.

**Query Parameters:**

```
?cluster_id=cluster-123&namespace=production&start_time=2024-01-01T00:00:00Z
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "metadata": [
      {
        "id": "s3-meta-001",
        "cluster_id": "cluster-123",
        "namespace": "production",
        "pod_name": "api-server-1",
        "container_name": "api",
        "s3_key": "logs/production/api-server-1-2024-01-06.json.gz",
        "s3_bucket": "my-logs-bucket",
        "start_time": "2024-01-06T00:00:00Z",
        "end_time": "2024-01-06T23:59:59Z",
        "log_count": 10000,
        "size_bytes": 2048000,
        "compressed": true,
        "created_at": "2024-01-07T00:15:00Z"
      }
    ],
    "count": 1,
    "storage": "s3"
  }
}
```
### Get S3 Statistics

`GET /s3/stats`

Get aggregate statistics about S3-stored logs.

**Response (200):**

```json
{
  "success": true,
  "data": {
    "total_files": 450,
    "total_size": 5368709120,
    "total_logs": 45000000,
    "oldest_log": "2023-07-01T00:00:00Z",
    "newest_log": "2024-01-06T23:59:59Z",
    "by_cluster": {
      "cluster-123": 250,
      "cluster-456": 200
    },
    "by_namespace": {
      "production": 300,
      "staging": 150
    }
  }
}
```
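The `total_size` and `size_bytes` fields are raw byte counts. For dashboards or CLI output, a small formatter makes them readable; this is a generic sketch, not part of the API:

```python
# Render byte counts from /s3/stats (or /s3/metadata) in binary units.
def human_bytes(n):
    """Format a byte count as e.g. '5.0 GiB'."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(n)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= 1024

stats = {"total_files": 450, "total_size": 5368709120}
print(human_bytes(stats["total_size"]))  # 5.0 GiB
```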
### Get General Metrics

`GET /metrics`

Get log collection metrics.

**Query Parameters:**

```
?app_id=app-123&namespace=production
```

**Response (200):**

```json
{
  "success": true,
  "data": {
    "total_logs": 1500000,
    "logs_per_hour": 62500,
    "avg_log_size": 512,
    "storage_used": "768 GB",
    "by_level": {
      "error": 15000,
      "warn": 125000,
      "info": 1200000,
      "debug": 160000
    },
    "by_namespace": {
      "production": 1200000,
      "staging": 300000
    }
  }
}
```
### Get Cluster Status

`GET /clusters/status`

Get status of all connected clusters.

**Response (200):**

```json
{
  "success": true,
  "data": {
    "clusters": [
      {
        "cluster_id": "cluster-123",
        "name": "Production Cluster",
        "status": "healthy",
        "last_health_check": "2024-01-06T11:30:00Z",
        "log_collector_status": "active",
        "metrics": {
          "pods_monitored": 150,
          "logs_per_minute": 5000
        }
      }
    ]
  }
}
```
### Get System Health

`GET /health`

Get overall system health status.

**Response (200):**

```json
{
  "success": true,
  "data": {
    "status": "healthy",
    "components": {
      "log_collector": {
        "status": "active",
        "last_check": "2024-01-06T11:30:00Z"
      },
      "database": {
        "status": "connected",
        "last_check": "2024-01-06T11:30:00Z"
      },
      "storage": {
        "status": "available",
        "last_check": "2024-01-06T11:30:00Z"
      }
    },
    "uptime": "45d 12h 30m"
  }
}
```
## Error Handling

### Common Status Codes
| Code | Description |
|---|---|
| 200 | Success |
| 201 | Created |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 429 | Rate Limited |
| 500 | Internal Server Error |
### Error Response Format

```json
{
  "success": false,
  "error": {
    "code": "INVALID_CLUSTER_ID",
    "message": "Cluster not found",
    "details": "The specified cluster ID does not exist or you don't have access"
  }
}
```
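Because every response carries a `success` flag, client code can convert this envelope into an exception once instead of checking the flag at every call site. A sketch; the `ApiError` class and `unwrap` helper are illustrative, not part of any SDK:

```python
# Turn the documented error envelope into a Python exception.
class ApiError(Exception):
    def __init__(self, code, message, details=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.details = details

def unwrap(body):
    """Return body['data'], or raise ApiError when success is false."""
    if body.get("success"):
        return body["data"]
    err = body.get("error", {})
    raise ApiError(err.get("code", "UNKNOWN"),
                   err.get("message", "unspecified error"),
                   err.get("details"))

# Successful envelopes pass straight through:
data = unwrap({"success": True, "data": {"id": "config-456"}})
```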
## Rate Limiting

API requests are rate-limited:

- 100 requests per minute for standard endpoints
- 10 requests per minute for heavy operations (search, archive)

Rate limit headers:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1704545400
```
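These headers let a client back off before receiving a 429. A sketch that computes the wait time from the headers shown above (`X-RateLimit-Reset` is treated as a Unix timestamp, consistent with the example value; clock-skew handling is deliberately naive):

```python
# Compute how long to pause based on the documented rate-limit headers.
import time

def seconds_until_reset(headers, now=None):
    """Return a sleep duration when X-RateLimit-Remaining is exhausted."""
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0  # budget left; no need to wait
    reset = int(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset - now)

# With 0 requests left and a reset 30 s away, back off for 30 s:
wait = seconds_until_reset(
    {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1704545430"},
    now=1704545400,
)
```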
## Integration Examples

### Python Example

```python
import requests
from datetime import datetime, timedelta, timezone

BASE_URL = "https://api.nife.io/v1/k8s/logs"
TOKEN = "your_access_token"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Search for error logs from the last 24 hours
response = requests.post(
    f"{BASE_URL}/search",
    headers=headers,
    json={
        "cluster_id": "cluster-123",
        "namespace": "production",
        "level": "error",
        # timezone-aware so the timestamp carries a UTC offset
        "start_time": (datetime.now(timezone.utc) - timedelta(days=1)).isoformat(),
        "limit": 100,
    },
)
response.raise_for_status()
logs = response.json()["data"]["logs"]
print(f"Found {len(logs)} error logs")
```
### JavaScript Example

```javascript
const axios = require('axios');

const BASE_URL = 'https://api.nife.io/v1/k8s/logs';
const TOKEN = 'your_access_token';

const client = axios.create({
  baseURL: BASE_URL,
  headers: { 'Authorization': `Bearer ${TOKEN}` }
});

// Create archive policy
async function createArchivePolicy() {
  try {
    const response = await client.post('/archive/policies', {
      name: 'Production Archive',
      retention_days: 90,
      compression_type: 'gzip',
      storage_location: 's3://bucket/logs',
      is_active: true
    });
    console.log('Policy created:', response.data);
  } catch (error) {
    // error.response is undefined for network-level failures
    console.error('Error:', error.response?.data ?? error.message);
  }
}

createArchivePolicy();
```
### cURL Examples

```bash
# Search logs
curl -X POST https://api.nife.io/v1/k8s/logs/search \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_id": "cluster-123",
    "namespace": "production",
    "level": "error",
    "limit": 100
  }'

# Create collection config
curl -X POST https://api.nife.io/v1/k8s/logs/config \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_id": "cluster-123",
    "namespace": "production",
    "log_levels": ["error", "warn"],
    "is_enabled": true
  }'
```
## Best Practices

- Always use specific filters - Reduce data transfer and improve performance
- Set reasonable limits - Start with limit=100 and increase only if needed
- Use date ranges - Don't retrieve the entire log history at once
- Handle pagination - Check the has_more flag in responses
- Cache credentials - Use stored access tokens; don't regenerate them frequently
- Monitor rate limits - Check the rate limit headers and implement backoff
- Compress archived logs - Always enable compression to reduce storage costs
- Regular cleanup - Archive old logs to reduce database load

For more information, see the User Guide.