Fix jwt-token

This commit is contained in:
2025-09-23 13:55:10 +02:00
parent ee4d3503e5
commit 8fbe2cb354
13 changed files with 1626 additions and 2 deletions

View File

@@ -0,0 +1,17 @@
# Data Retention Service Environment Variables
# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_NAME=drone_detection
DB_USER=postgres
DB_PASSWORD=your_secure_password
# Service Configuration
NODE_ENV=production
# Set to 'true' to run cleanup immediately on startup (useful for testing)
IMMEDIATE_CLEANUP=false
# Logging level
LOG_LEVEL=info

View File

@@ -0,0 +1,30 @@
# Data Retention Service
FROM node:18-alpine
# Create app directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install only production dependencies
RUN npm ci --only=production
# Copy source code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S retention -u 1001
# Change ownership
RUN chown -R retention:nodejs /app
USER retention
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD node healthcheck.js
# Start the service
CMD ["node", "index.js"]

View File

@@ -0,0 +1,312 @@
# Data Retention Service
A lightweight, standalone microservice responsible for automated data cleanup based on tenant retention policies.
## Overview
This service runs as a separate Docker container and performs the following functions:
- **Automated Cleanup**: Daily scheduled cleanup at 2:00 AM UTC
- **Tenant-Aware**: Respects individual tenant retention policies
- **Lightweight**: Minimal resource footprint (~64-128MB RAM)
- **Resilient**: Continues operation even if individual tenant cleanups fail
- **Logged**: Comprehensive logging and health monitoring
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Main Backend │ │ Data Retention │ │ PostgreSQL │
│ Container │ │ Service │ │ Database │
│ │ │ │ │ │
│ • API Endpoints │ │ • Cron Jobs │◄──►│ • tenant data │
│ • Business Logic│ │ • Data Cleanup │ │ • detections │
│ • Rate Limiting │ │ • Health Check │ │ • heartbeats │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## Features
### 🕒 Scheduled Operations
- Runs daily at 2:00 AM UTC via cron job
- Configurable immediate cleanup for development/testing
- Graceful shutdown handling
### 🏢 Multi-Tenant Support
- Processes all active tenants
- Respects individual retention policies:
- `-1` = Unlimited retention (no cleanup)
- `N` = Delete data older than N days
- Default: 90 days if not specified
### 🧹 Data Cleanup
- **Drone Detections**: Historical detection records
- **Heartbeats**: Device connectivity logs
- **Security Logs**: Audit trail entries (if applicable)
### 📊 Monitoring & Health
- Built-in health checks for Docker
- Memory usage monitoring
- Cleanup statistics tracking
- Error logging with tenant context
## Configuration
### Environment Variables
```bash
# Database Connection
DB_HOST=postgres # Database host
DB_PORT=5432 # Database port
DB_NAME=drone_detection # Database name
DB_USER=postgres # Database user
DB_PASSWORD=password # Database password
# Service Settings
NODE_ENV=production # Environment mode
IMMEDIATE_CLEANUP=false # Run cleanup on startup
LOG_LEVEL=info # Logging level
```
### Docker Compose Integration
```yaml
data-retention:
build:
context: ./data-retention-service
container_name: drone-detection-data-retention
restart: unless-stopped
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: drone_detection
DB_USER: postgres
DB_PASSWORD: your_secure_password
depends_on:
postgres:
condition: service_healthy
deploy:
resources:
limits:
memory: 128M
```
## Usage
### Start with Docker Compose
```bash
# Start all services including data retention
docker-compose up -d
# Start only data retention service
docker-compose up -d data-retention
# View logs
docker-compose logs -f data-retention
```
### Manual Container Build
```bash
# Build the container
cd data-retention-service
docker build -t data-retention-service .
# Run the container
docker run -d \
--name data-retention \
--env-file .env \
--network drone-network \
data-retention-service
```
### Development Mode
```bash
# Install dependencies
npm install
# Run with immediate cleanup
IMMEDIATE_CLEANUP=true npm start
# Run in development mode
npm run dev
```
## Logging Output
### Startup
```
🗂️ Starting Data Retention Service...
📅 Environment: production
💾 Database: postgres:5432/drone_detection
✅ Database connection established
⏰ Scheduled cleanup: Daily at 2:00 AM UTC
✅ Data Retention Service started successfully
```
### Cleanup Operation
```
🧹 Starting data retention cleanup...
⏰ Cleanup started at: 2024-09-23T02:00:00.123Z
🏢 Found 5 active tenants to process
🧹 Cleaning tenant acme-corp - removing data older than 90 days
✅ Tenant acme-corp: Deleted 1250 detections, 4500 heartbeats, 89 logs
⏭️ Skipping tenant enterprise-unlimited - unlimited retention
✅ Data retention cleanup completed
⏱️ Duration: 2347ms
📊 Deleted: 2100 detections, 8900 heartbeats, 156 logs
```
### Health Monitoring
```
💚 Health Check - Uptime: 3600s, Memory: 45MB, Last Cleanup: 2024-09-23T02:00:00.123Z
```
## API Integration
The main backend provides endpoints to interact with retention policies:
```bash
# Get current tenant limits and retention info
GET /api/tenant/limits
# Preview what would be deleted
GET /api/tenant/data-retention/preview
```
## Error Handling
### Tenant-Level Errors
- Service continues if individual tenant cleanup fails
- Errors logged with tenant context
- Failed tenants skipped, others processed normally
### Service-Level Errors
- Database connection issues cause service restart
- Health checks detect and report issues
- Graceful shutdown on container stop signals
### Example Error Log
```
❌ Error cleaning tenant problematic-tenant: SequelizeTimeoutError: Query timeout
⚠️ Errors encountered: 1
- problematic-tenant: Query timeout
```
## Performance
### Resource Usage
- **Memory**: 64-128MB typical usage
- **CPU**: Minimal, only during cleanup operations
- **Storage**: Logs rotate automatically
- **Network**: Database queries only
### Cleanup Performance
- Batch operations for efficiency
- Indexed database queries on timestamp fields
- Parallel tenant processing where possible
- Configurable batch sizes for large datasets
## Security
### Database Access
- Read/write access only to required tables
- Connection pooling with limits
- Prepared statements prevent SQL injection
### Container Security
- Non-root user execution
- Minimal base image (node:18-alpine)
- No exposed ports
- Isolated network access
## Monitoring
### Health Checks
```bash
# Docker health check
docker exec data-retention node healthcheck.js
# Container status
docker-compose ps data-retention
# Service logs
docker-compose logs -f data-retention
```
### Metrics
- Cleanup duration and frequency
- Records deleted per tenant
- Memory usage over time
- Error rates and types
## Troubleshooting
### Common Issues
**Service won't start**
```bash
# Check database connectivity
docker-compose logs postgres
docker-compose logs data-retention
# Verify environment variables
docker-compose config
```
**Cleanup not running**
```bash
# Check cron schedule
docker exec data-retention ps aux | grep cron
# Force immediate cleanup
docker exec data-retention node -e "
const service = require('./index.js');
service.performCleanup();
"
```
**High memory usage**
```bash
# Check cleanup frequency
docker stats data-retention
# Review tenant data volumes
docker exec data-retention node -e "
const { getModels } = require('./database');
// Check tenant data sizes
"
```
### Configuration Validation
```bash
# Test database connection
docker exec data-retention node healthcheck.js
# Verify tenant policies
docker exec -it data-retention node -e "
const { getModels } = require('./database');
(async () => {
const { Tenant } = await getModels();
const tenants = await Tenant.findAll();
console.log(tenants.map(t => ({
slug: t.slug,
retention: t.features?.data_retention_days
})));
})();
"
```
## Migration from Integrated Service
If upgrading from a version where data retention was part of the main backend:
1. **Deploy new container**: Add data retention service to docker-compose.yml
2. **Verify operation**: Check logs for successful startup and database connection
3. **Remove old code**: The integrated service code is automatically disabled
4. **Monitor transition**: Ensure cleanup operations continue normally
The service is designed to be backward compatible and will work with existing tenant configurations without changes.

View File

@@ -0,0 +1,222 @@
/**
* Database connection and models for Data Retention Service
*/
const { Sequelize, DataTypes } = require('sequelize');
let sequelize;
let models = {};
/**
* Initialize database connection
*/
async function initializeDatabase() {
// Database connection
sequelize = new Sequelize(
process.env.DB_NAME || 'drone_detection',
process.env.DB_USER || 'postgres',
process.env.DB_PASSWORD || 'password',
{
host: process.env.DB_HOST || 'localhost',
port: process.env.DB_PORT || 5432,
dialect: 'postgres',
logging: process.env.NODE_ENV === 'development' ? console.log : false,
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
}
);
// Test connection
await sequelize.authenticate();
// Define models
defineModels();
return sequelize;
}
/**
* Define database models
*/
function defineModels() {
// Tenant model
models.Tenant = sequelize.define('Tenant', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
slug: {
type: DataTypes.STRING(50),
unique: true,
allowNull: false
},
name: {
type: DataTypes.STRING(100),
allowNull: false
},
features: {
type: DataTypes.JSONB,
defaultValue: {}
},
is_active: {
type: DataTypes.BOOLEAN,
defaultValue: true
}
}, {
tableName: 'tenants',
timestamps: true,
createdAt: 'created_at',
updatedAt: 'updated_at'
});
// DroneDetection model
models.DroneDetection = sequelize.define('DroneDetection', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: false
},
device_id: {
type: DataTypes.STRING(50),
allowNull: false
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
drone_type: {
type: DataTypes.INTEGER,
allowNull: true
},
rssi: {
type: DataTypes.FLOAT,
allowNull: true
},
frequency: {
type: DataTypes.FLOAT,
allowNull: true
}
}, {
tableName: 'drone_detections',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
// Heartbeat model
models.Heartbeat = sequelize.define('Heartbeat', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: false
},
device_id: {
type: DataTypes.STRING(50),
allowNull: false
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
status: {
type: DataTypes.STRING(20),
defaultValue: 'online'
}
}, {
tableName: 'heartbeats',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
// SecurityLog model (optional, might not exist in all installations)
models.SecurityLog = sequelize.define('SecurityLog', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: true
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
level: {
type: DataTypes.STRING(20),
allowNull: false
},
message: {
type: DataTypes.TEXT,
allowNull: false
},
metadata: {
type: DataTypes.JSONB,
defaultValue: {}
}
}, {
tableName: 'security_logs',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
}
/**
* Get models
*/
async function getModels() {
if (!sequelize) {
await initializeDatabase();
}
return models;
}
/**
* Close database connection
*/
async function closeDatabase() {
if (sequelize) {
await sequelize.close();
}
}
module.exports = {
initializeDatabase,
getModels,
closeDatabase,
sequelize: () => sequelize
};

View File

@@ -0,0 +1,21 @@
/**
* Health check for Data Retention Service
*/
const { getModels } = require('./database');
async function healthCheck() {
try {
// Check database connection
const { Tenant } = await getModels();
await Tenant.findOne({ limit: 1 });
console.log('Health check passed');
process.exit(0);
} catch (error) {
console.error('Health check failed:', error);
process.exit(1);
}
}
healthCheck();

View File

@@ -0,0 +1,267 @@
/**
* Data Retention Service
* Standalone microservice for automated data cleanup
*/
const cron = require('node-cron');
const { Op } = require('sequelize');
require('dotenv').config();
// Initialize database connection
const { initializeDatabase, getModels } = require('./database');
class DataRetentionService {
constructor() {
this.isRunning = false;
this.lastCleanup = null;
this.cleanupStats = {
totalRuns: 0,
totalDetectionsDeleted: 0,
totalHeartbeatsDeleted: 0,
totalLogsDeleted: 0,
lastRunDuration: 0,
errors: []
};
}
/**
* Start the data retention cleanup service
*/
async start() {
console.log('🗂️ Starting Data Retention Service...');
console.log(`📅 Environment: ${process.env.NODE_ENV || 'development'}`);
console.log(`💾 Database: ${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.DB_NAME}`);
try {
// Initialize database connection
await initializeDatabase();
console.log('✅ Database connection established');
// Schedule daily cleanup at 2:00 AM UTC
cron.schedule('0 2 * * *', async () => {
await this.performCleanup();
}, {
scheduled: true,
timezone: "UTC"
});
console.log('⏰ Scheduled cleanup: Daily at 2:00 AM UTC');
// Run immediate cleanup in development or if IMMEDIATE_CLEANUP is set
if (process.env.NODE_ENV === 'development' || process.env.IMMEDIATE_CLEANUP === 'true') {
console.log('🧹 Running immediate cleanup...');
setTimeout(() => this.performCleanup(), 5000);
}
// Health check endpoint simulation
setInterval(() => {
this.logHealthStatus();
}, 60000); // Every minute
console.log('✅ Data Retention Service started successfully');
} catch (error) {
console.error('❌ Failed to start Data Retention Service:', error);
process.exit(1);
}
}
/**
* Perform cleanup for all tenants
*/
async performCleanup() {
if (this.isRunning) {
console.log('⏳ Data retention cleanup already running, skipping...');
return;
}
this.isRunning = true;
const startTime = Date.now();
try {
console.log('🧹 Starting data retention cleanup...');
console.log(`⏰ Cleanup started at: ${new Date().toISOString()}`);
const { Tenant, DroneDetection, Heartbeat, SecurityLog } = await getModels();
// Get all active tenants with their retention policies
const tenants = await Tenant.findAll({
attributes: ['id', 'slug', 'features'],
where: {
is_active: true
}
});
console.log(`🏢 Found ${tenants.length} active tenants to process`);
let totalDetectionsDeleted = 0;
let totalHeartbeatsDeleted = 0;
let totalLogsDeleted = 0;
let errors = [];
for (const tenant of tenants) {
try {
const result = await this.cleanupTenant(tenant);
totalDetectionsDeleted += result.detections;
totalHeartbeatsDeleted += result.heartbeats;
totalLogsDeleted += result.logs;
} catch (error) {
console.error(`❌ Error cleaning tenant ${tenant.slug}:`, error);
errors.push({
tenantSlug: tenant.slug,
error: error.message,
timestamp: new Date().toISOString()
});
}
}
const duration = Date.now() - startTime;
this.lastCleanup = new Date();
this.cleanupStats.totalRuns++;
this.cleanupStats.totalDetectionsDeleted += totalDetectionsDeleted;
this.cleanupStats.totalHeartbeatsDeleted += totalHeartbeatsDeleted;
this.cleanupStats.totalLogsDeleted += totalLogsDeleted;
this.cleanupStats.lastRunDuration = duration;
this.cleanupStats.errors = errors;
console.log('✅ Data retention cleanup completed');
console.log(`⏱️ Duration: ${duration}ms`);
console.log(`📊 Deleted: ${totalDetectionsDeleted} detections, ${totalHeartbeatsDeleted} heartbeats, ${totalLogsDeleted} logs`);
if (errors.length > 0) {
console.log(`⚠️ Errors encountered: ${errors.length}`);
errors.forEach(err => console.log(` - ${err.tenantSlug}: ${err.error}`));
}
} catch (error) {
console.error('❌ Data retention cleanup failed:', error);
this.cleanupStats.errors.push({
error: error.message,
timestamp: new Date().toISOString()
});
} finally {
this.isRunning = false;
}
}
/**
* Clean up data for a specific tenant
*/
async cleanupTenant(tenant) {
const retentionDays = tenant.features?.data_retention_days;
// Skip if unlimited retention (-1)
if (retentionDays === -1) {
console.log(`⏭️ Skipping tenant ${tenant.slug} - unlimited retention`);
return { detections: 0, heartbeats: 0, logs: 0 };
}
// Default to 90 days if not specified
const effectiveRetentionDays = retentionDays || 90;
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - effectiveRetentionDays);
console.log(`🧹 Cleaning tenant ${tenant.slug} - removing data older than ${effectiveRetentionDays} days (before ${cutoffDate.toISOString()})`);
const { DroneDetection, Heartbeat, SecurityLog } = await getModels();
// Clean up drone detections
const deletedDetections = await DroneDetection.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up heartbeats
const deletedHeartbeats = await Heartbeat.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up security logs (if they have tenant_id)
let deletedLogs = 0;
try {
deletedLogs = await SecurityLog.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
} catch (error) {
// SecurityLog might not have tenant_id field
console.log(`⚠️ Skipping security logs for tenant ${tenant.slug}: ${error.message}`);
}
console.log(`✅ Tenant ${tenant.slug}: Deleted ${deletedDetections} detections, ${deletedHeartbeats} heartbeats, ${deletedLogs} logs`);
return {
detections: deletedDetections,
heartbeats: deletedHeartbeats,
logs: deletedLogs
};
}
/**
* Log health status
*/
logHealthStatus() {
const memUsage = process.memoryUsage();
const uptime = process.uptime();
console.log(`💚 Health Check - Uptime: ${Math.floor(uptime)}s, Memory: ${Math.round(memUsage.heapUsed / 1024 / 1024)}MB, Last Cleanup: ${this.lastCleanup ? this.lastCleanup.toISOString() : 'Never'}`);
}
/**
* Get service statistics
*/
getStats() {
return {
...this.cleanupStats,
isRunning: this.isRunning,
lastCleanup: this.lastCleanup,
uptime: process.uptime(),
memoryUsage: process.memoryUsage(),
nextScheduledRun: '2:00 AM UTC daily'
};
}
/**
* Graceful shutdown
*/
async shutdown() {
console.log('🔄 Graceful shutdown initiated...');
// Wait for current cleanup to finish
while (this.isRunning) {
console.log('⏳ Waiting for cleanup to finish...');
await new Promise(resolve => setTimeout(resolve, 1000));
}
console.log('✅ Data Retention Service stopped');
process.exit(0);
}
}
// Initialize and start the service
const service = new DataRetentionService();
// Handle graceful shutdown
process.on('SIGTERM', () => service.shutdown());
process.on('SIGINT', () => service.shutdown());
// Start the service
service.start().catch(error => {
console.error('Failed to start service:', error);
process.exit(1);
});
module.exports = DataRetentionService;

View File

@@ -0,0 +1,24 @@
{
"name": "data-retention-service",
"version": "1.0.0",
"description": "Automated data retention cleanup service for drone detection system",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"pg": "^8.11.3",
"sequelize": "^6.32.1",
"node-cron": "^3.0.2",
"dotenv": "^16.3.1"
},
"devDependencies": {
"nodemon": "^3.0.1",
"jest": "^29.6.1"
},
"engines": {
"node": ">=18.0.0"
}
}