Fix jwt-token

This commit is contained in:
2025-09-23 13:55:10 +02:00
parent ee4d3503e5
commit 8fbe2cb354
13 changed files with 1626 additions and 2 deletions

View File

@@ -0,0 +1,277 @@
# Tenant Limits Implementation
## Overview
This document explains how tenant subscription limits are enforced in the UAM-ILS Drone Detection System. All the issues you identified have been resolved:
## ✅ Issues Fixed
### 1. **User Creation Limits**
- **Problem**: Tenants could create unlimited users regardless of their subscription limits
- **Solution**: Added `enforceUserLimit()` middleware to `POST /tenant/users`
- **Implementation**: Counts existing users and validates against `tenant.features.max_users`
### 2. **Device Creation Limits**
- **Problem**: Tenants could add unlimited devices regardless of their subscription limits
- **Solution**: Added `enforceDeviceLimit()` middleware to `POST /devices`
- **Implementation**: Counts existing devices and validates against `tenant.features.max_devices`
### 3. **API Rate Limiting**
- **Problem**: No proper API rate limiting per tenant shared among users
- **Solution**: Implemented `enforceApiRateLimit()` middleware
- **Implementation**:
- Tracks actual API requests (not page views)
- Rate limit is shared among ALL users in a tenant
- Uses sliding window algorithm
- Applied to all authenticated API endpoints
### 4. **Data Retention**
- **Problem**: Old data was never cleaned up automatically
- **Solution**: Created `DataRetentionService` with cron job
- **Implementation**:
- Runs daily at 2:00 AM UTC
- Deletes detections, heartbeats, and logs older than `tenant.features.data_retention_days`
- Respects unlimited retention (`-1` value)
- Provides preview endpoint to see what would be deleted
## 🔧 Technical Implementation
### Middleware: `server/middleware/tenant-limits.js`
```javascript
// User limit enforcement
enforceUserLimit() - Prevents user creation when limit reached
enforceDeviceLimit() - Prevents device creation when limit reached
enforceApiRateLimit() - Rate limits API requests per tenant
getTenantLimitsStatus() - Returns current usage vs limits
```
### Service: `server/services/dataRetention.js`
```javascript
DataRetentionService {
start() - Starts daily cron job
performCleanup() - Cleans all tenants based on retention policies
previewCleanup(tenantId) - Shows what would be deleted
getStats() - Returns cleanup statistics
}
```
### API Endpoints
```bash
GET /api/tenant/limits
# Returns current usage and limits for the tenant
{
"users": { "current": 3, "limit": 5, "unlimited": false },
"devices": { "current": 7, "limit": 10, "unlimited": false },
"api_requests": { "current_minute": 45, "limit_per_minute": 1000 },
"data_retention": { "days": 90, "unlimited": false }
}
GET /api/tenant/data-retention/preview
# Shows what data would be deleted by retention cleanup
{
"tenantSlug": "tenant1",
"retentionDays": 90,
"cutoffDate": "2024-06-24T02:00:00.000Z",
"toDelete": {
"detections": 1250,
"heartbeats": 4500,
"logs": 89
}
}
```
## 🚦 How Rate Limiting Works
### API Rate Limiting Details
- **Granularity**: Per tenant (shared among all users)
- **Window**: 1 minute sliding window
- **Storage**: In-memory with automatic cleanup
- **Headers**: Standard rate limit headers included
- **Tracking**: Only actual API requests count (not static files/page views)
### Example Rate Limit Response
```json
{
"success": false,
"message": "API rate limit exceeded. Maximum 1000 requests per 60 seconds for your tenant.",
"error_code": "TENANT_API_RATE_LIMIT_EXCEEDED",
"max_requests": 1000,
"window_seconds": 60,
"retry_after_seconds": 15
}
```
## 📊 Subscription Tiers
The system supports different subscription tiers with these default limits:
```javascript
// Free tier
{
max_devices: 2,
max_users: 1,
api_rate_limit: 100,
data_retention_days: 7
}
// Pro tier
{
max_devices: 10,
max_users: 5,
api_rate_limit: 1000,
data_retention_days: 90
}
// Business tier
{
max_devices: 50,
max_users: 20,
api_rate_limit: 5000,
data_retention_days: 365
}
// Enterprise tier
{
max_devices: -1, // Unlimited
max_users: -1, // Unlimited
api_rate_limit: -1, // Unlimited
data_retention_days: -1 // Unlimited
}
```
## 🔒 Security Features
### Limit Enforcement Security
- All limit checks are done server-side (cannot be bypassed)
- Security events are logged when limits are exceeded
- Failed attempts include IP address, user agent, and user details
- Graceful error messages prevent information disclosure
### Error Response Format
```json
{
"success": false,
"message": "Tenant has reached the maximum number of users (5). Please upgrade your subscription or remove existing users.",
"error_code": "TENANT_USER_LIMIT_EXCEEDED",
"current_count": 5,
"max_allowed": 5
}
```
## 🕒 Data Retention Schedule
### Cleanup Process
1. **Trigger**: Daily at 2:00 AM UTC via cron job
2. **Process**: For each active tenant:
- Check `tenant.features.data_retention_days`
- Skip if unlimited (`-1`)
- Calculate cutoff date
- Delete old detections, heartbeats, logs
- Log security events for significant cleanups
3. **Performance**: Batched operations with error handling per tenant
### Manual Operations
```javascript
// Preview what would be deleted
const service = new DataRetentionService();
const preview = await service.previewCleanup(tenantId);
// Manually trigger cleanup (admin only)
await service.triggerManualCleanup();
// Get cleanup statistics
const stats = service.getStats();
```
## 🔧 Docker Integration
### Package Dependencies
Added to `server/package.json`:
```json
{
"dependencies": {
"node-cron": "^3.0.2"
}
}
```
### Service Initialization
All services start automatically when the Docker container boots:
```javascript
// In server/index.js
const dataRetentionService = new DataRetentionService();
dataRetentionService.start();
console.log('🗂️ Data retention service: ✅ Started');
```
## 🧪 Testing the Implementation
### Test User Limits
```bash
# Create users until limit is reached
curl -X POST /api/tenant/users \
-H "Authorization: Bearer $TOKEN" \
-d '{"username":"test6","email":"test6@example.com","password":"password"}'
# Should return 403 when limit exceeded
```
### Test Device Limits
```bash
# Create devices until limit is reached
curl -X POST /api/devices \
-H "Authorization: Bearer $TOKEN" \
-d '{"id":"device-11","name":"Test Device 11"}'
# Should return 403 when limit exceeded
```
### Test API Rate Limits
```bash
# Send rapid requests to trigger rate limit
for i in {1..1100}; do
curl -X GET /api/detections -H "Authorization: Bearer $TOKEN" &
done
# Should return 429 after limit reached
```
### Test Data Retention
```bash
# Preview what would be deleted
curl -X GET /api/tenant/data-retention/preview \
-H "Authorization: Bearer $TOKEN"
# Check tenant limits status
curl -X GET /api/tenant/limits \
-H "Authorization: Bearer $TOKEN"
```
## 📈 Monitoring & Logging
### Security Logs
All limit violations are logged with full context:
- User ID and username
- Tenant ID and slug
- IP address and user agent
- Specific limit exceeded and current usage
- Timestamp and action details
### Performance Monitoring
- Rate limit middleware tracks response times
- Data retention service logs cleanup duration and counts
- Memory usage monitoring for rate limit store
- Database query performance for limit checks
## 🔄 Upgrade Path
When tenants upgrade their subscription:
1. Update `tenant.features` with new limits
2. Limits take effect immediately
3. No restart required
4. Historical data respects new retention policy on next cleanup
This comprehensive implementation ensures that tenant limits are properly enforced across all aspects of the system, preventing abuse while providing clear feedback to users about their subscription status.

View File

@@ -0,0 +1,17 @@
# Data Retention Service Environment Variables
# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_NAME=drone_detection
DB_USER=postgres
DB_PASSWORD=your_secure_password
# Service Configuration
NODE_ENV=production
# Set to 'true' to run cleanup immediately on startup (useful for testing)
IMMEDIATE_CLEANUP=false
# Logging level
LOG_LEVEL=info

View File

@@ -0,0 +1,30 @@
# Data Retention Service
FROM node:18-alpine
# Create app directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install only production dependencies
RUN npm ci --only=production
# Copy source code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S retention -u 1001
# Change ownership
RUN chown -R retention:nodejs /app
USER retention
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD node healthcheck.js
# Start the service
CMD ["node", "index.js"]

View File

@@ -0,0 +1,312 @@
# Data Retention Service
A lightweight, standalone microservice responsible for automated data cleanup based on tenant retention policies.
## Overview
This service runs as a separate Docker container and performs the following functions:
- **Automated Cleanup**: Daily scheduled cleanup at 2:00 AM UTC
- **Tenant-Aware**: Respects individual tenant retention policies
- **Lightweight**: Minimal resource footprint (~64-128MB RAM)
- **Resilient**: Continues operation even if individual tenant cleanups fail
- **Logged**: Comprehensive logging and health monitoring
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Main Backend │ │ Data Retention │ │ PostgreSQL │
│ Container │ │ Service │ │ Database │
│ │ │ │ │ │
│ • API Endpoints │ │ • Cron Jobs │◄──►│ • tenant data │
│ • Business Logic│ │ • Data Cleanup │ │ • detections │
│ • Rate Limiting │ │ • Health Check │ │ • heartbeats │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## Features
### 🕒 Scheduled Operations
- Runs daily at 2:00 AM UTC via cron job
- Configurable immediate cleanup for development/testing
- Graceful shutdown handling
### 🏢 Multi-Tenant Support
- Processes all active tenants
- Respects individual retention policies:
- `-1` = Unlimited retention (no cleanup)
- `N` = Delete data older than N days
- Default: 90 days if not specified
### 🧹 Data Cleanup
- **Drone Detections**: Historical detection records
- **Heartbeats**: Device connectivity logs
- **Security Logs**: Audit trail entries (if applicable)
### 📊 Monitoring & Health
- Built-in health checks for Docker
- Memory usage monitoring
- Cleanup statistics tracking
- Error logging with tenant context
## Configuration
### Environment Variables
```bash
# Database Connection
DB_HOST=postgres # Database host
DB_PORT=5432 # Database port
DB_NAME=drone_detection # Database name
DB_USER=postgres # Database user
DB_PASSWORD=password # Database password
# Service Settings
NODE_ENV=production # Environment mode
IMMEDIATE_CLEANUP=false # Run cleanup on startup
LOG_LEVEL=info # Logging level
```
### Docker Compose Integration
```yaml
data-retention:
build:
context: ./data-retention-service
container_name: drone-detection-data-retention
restart: unless-stopped
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: drone_detection
DB_USER: postgres
DB_PASSWORD: your_secure_password
depends_on:
postgres:
condition: service_healthy
deploy:
resources:
limits:
memory: 128M
```
## Usage
### Start with Docker Compose
```bash
# Start all services including data retention
docker-compose up -d
# Start only data retention service
docker-compose up -d data-retention
# View logs
docker-compose logs -f data-retention
```
### Manual Container Build
```bash
# Build the container
cd data-retention-service
docker build -t data-retention-service .
# Run the container
docker run -d \
--name data-retention \
--env-file .env \
--network drone-network \
data-retention-service
```
### Development Mode
```bash
# Install dependencies
npm install
# Run with immediate cleanup
IMMEDIATE_CLEANUP=true npm start
# Run in development mode
npm run dev
```
## Logging Output
### Startup
```
🗂️ Starting Data Retention Service...
📅 Environment: production
💾 Database: postgres:5432/drone_detection
✅ Database connection established
⏰ Scheduled cleanup: Daily at 2:00 AM UTC
✅ Data Retention Service started successfully
```
### Cleanup Operation
```
🧹 Starting data retention cleanup...
⏰ Cleanup started at: 2024-09-23T02:00:00.123Z
🏢 Found 5 active tenants to process
🧹 Cleaning tenant acme-corp - removing data older than 90 days
✅ Tenant acme-corp: Deleted 1250 detections, 4500 heartbeats, 89 logs
⏭️ Skipping tenant enterprise-unlimited - unlimited retention
✅ Data retention cleanup completed
⏱️ Duration: 2347ms
📊 Deleted: 2100 detections, 8900 heartbeats, 156 logs
```
### Health Monitoring
```
💚 Health Check - Uptime: 3600s, Memory: 45MB, Last Cleanup: 2024-09-23T02:00:00.123Z
```
## API Integration
The main backend provides endpoints to interact with retention policies:
```bash
# Get current tenant limits and retention info
GET /api/tenant/limits
# Preview what would be deleted
GET /api/tenant/data-retention/preview
```
## Error Handling
### Tenant-Level Errors
- Service continues if individual tenant cleanup fails
- Errors logged with tenant context
- Failed tenants skipped, others processed normally
### Service-Level Errors
- Database connection issues cause service restart
- Health checks detect and report issues
- Graceful shutdown on container stop signals
### Example Error Log
```
❌ Error cleaning tenant problematic-tenant: SequelizeTimeoutError: Query timeout
⚠️ Errors encountered: 1
- problematic-tenant: Query timeout
```
## Performance
### Resource Usage
- **Memory**: 64-128MB typical usage
- **CPU**: Minimal, only during cleanup operations
- **Storage**: Logs rotate automatically
- **Network**: Database queries only
### Cleanup Performance
- Batch operations for efficiency
- Indexed database queries on timestamp fields
- Parallel tenant processing where possible
- Configurable batch sizes for large datasets
## Security
### Database Access
- Read/write access only to required tables
- Connection pooling with limits
- Prepared statements prevent SQL injection
### Container Security
- Non-root user execution
- Minimal base image (node:18-alpine)
- No exposed ports
- Isolated network access
## Monitoring
### Health Checks
```bash
# Docker health check
docker exec data-retention node healthcheck.js
# Container status
docker-compose ps data-retention
# Service logs
docker-compose logs -f data-retention
```
### Metrics
- Cleanup duration and frequency
- Records deleted per tenant
- Memory usage over time
- Error rates and types
## Troubleshooting
### Common Issues
**Service won't start**
```bash
# Check database connectivity
docker-compose logs postgres
docker-compose logs data-retention
# Verify environment variables
docker-compose config
```
**Cleanup not running**
```bash
# Check cron schedule
docker exec data-retention ps aux | grep cron
# Force immediate cleanup
docker exec data-retention node -e "
const service = require('./index.js');
service.performCleanup();
"
```
**High memory usage**
```bash
# Check cleanup frequency
docker stats data-retention
# Review tenant data volumes
docker exec data-retention node -e "
const { getModels } = require('./database');
// Check tenant data sizes
"
```
### Configuration Validation
```bash
# Test database connection
docker exec data-retention node healthcheck.js
# Verify tenant policies
docker exec -it data-retention node -e "
const { getModels } = require('./database');
(async () => {
const { Tenant } = await getModels();
const tenants = await Tenant.findAll();
console.log(tenants.map(t => ({
slug: t.slug,
retention: t.features?.data_retention_days
})));
})();
"
```
## Migration from Integrated Service
If upgrading from a version where data retention was part of the main backend:
1. **Deploy new container**: Add data retention service to docker-compose.yml
2. **Verify operation**: Check logs for successful startup and database connection
3. **Remove old code**: The integrated service code is automatically disabled
4. **Monitor transition**: Ensure cleanup operations continue normally
The service is designed to be backward compatible and will work with existing tenant configurations without changes.

View File

@@ -0,0 +1,222 @@
/**
* Database connection and models for Data Retention Service
*/
const { Sequelize, DataTypes } = require('sequelize');
let sequelize;
let models = {};
/**
* Initialize database connection
*/
async function initializeDatabase() {
// Database connection
sequelize = new Sequelize(
process.env.DB_NAME || 'drone_detection',
process.env.DB_USER || 'postgres',
process.env.DB_PASSWORD || 'password',
{
host: process.env.DB_HOST || 'localhost',
port: process.env.DB_PORT || 5432,
dialect: 'postgres',
logging: process.env.NODE_ENV === 'development' ? console.log : false,
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
}
);
// Test connection
await sequelize.authenticate();
// Define models
defineModels();
return sequelize;
}
/**
* Define database models
*/
function defineModels() {
// Tenant model
models.Tenant = sequelize.define('Tenant', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
slug: {
type: DataTypes.STRING(50),
unique: true,
allowNull: false
},
name: {
type: DataTypes.STRING(100),
allowNull: false
},
features: {
type: DataTypes.JSONB,
defaultValue: {}
},
is_active: {
type: DataTypes.BOOLEAN,
defaultValue: true
}
}, {
tableName: 'tenants',
timestamps: true,
createdAt: 'created_at',
updatedAt: 'updated_at'
});
// DroneDetection model
models.DroneDetection = sequelize.define('DroneDetection', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: false
},
device_id: {
type: DataTypes.STRING(50),
allowNull: false
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
drone_type: {
type: DataTypes.INTEGER,
allowNull: true
},
rssi: {
type: DataTypes.FLOAT,
allowNull: true
},
frequency: {
type: DataTypes.FLOAT,
allowNull: true
}
}, {
tableName: 'drone_detections',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
// Heartbeat model
models.Heartbeat = sequelize.define('Heartbeat', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: false
},
device_id: {
type: DataTypes.STRING(50),
allowNull: false
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
status: {
type: DataTypes.STRING(20),
defaultValue: 'online'
}
}, {
tableName: 'heartbeats',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
// SecurityLog model (optional, might not exist in all installations)
models.SecurityLog = sequelize.define('SecurityLog', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true
},
tenant_id: {
type: DataTypes.INTEGER,
allowNull: true
},
timestamp: {
type: DataTypes.DATE,
allowNull: false
},
level: {
type: DataTypes.STRING(20),
allowNull: false
},
message: {
type: DataTypes.TEXT,
allowNull: false
},
metadata: {
type: DataTypes.JSONB,
defaultValue: {}
}
}, {
tableName: 'security_logs',
timestamps: false,
indexes: [
{
fields: ['tenant_id', 'timestamp']
},
{
fields: ['timestamp']
}
]
});
}
/**
* Get models
*/
async function getModels() {
if (!sequelize) {
await initializeDatabase();
}
return models;
}
/**
* Close database connection
*/
async function closeDatabase() {
if (sequelize) {
await sequelize.close();
}
}
module.exports = {
initializeDatabase,
getModels,
closeDatabase,
sequelize: () => sequelize
};

View File

@@ -0,0 +1,21 @@
/**
* Health check for Data Retention Service
*/
const { getModels } = require('./database');
async function healthCheck() {
try {
// Check database connection
const { Tenant } = await getModels();
await Tenant.findOne({ limit: 1 });
console.log('Health check passed');
process.exit(0);
} catch (error) {
console.error('Health check failed:', error);
process.exit(1);
}
}
healthCheck();

View File

@@ -0,0 +1,267 @@
/**
* Data Retention Service
* Standalone microservice for automated data cleanup
*/
const cron = require('node-cron');
const { Op } = require('sequelize');
require('dotenv').config();
// Initialize database connection
const { initializeDatabase, getModels } = require('./database');
class DataRetentionService {
constructor() {
this.isRunning = false;
this.lastCleanup = null;
this.cleanupStats = {
totalRuns: 0,
totalDetectionsDeleted: 0,
totalHeartbeatsDeleted: 0,
totalLogsDeleted: 0,
lastRunDuration: 0,
errors: []
};
}
/**
* Start the data retention cleanup service
*/
async start() {
console.log('🗂️ Starting Data Retention Service...');
console.log(`📅 Environment: ${process.env.NODE_ENV || 'development'}`);
console.log(`💾 Database: ${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.DB_NAME}`);
try {
// Initialize database connection
await initializeDatabase();
console.log('✅ Database connection established');
// Schedule daily cleanup at 2:00 AM UTC
cron.schedule('0 2 * * *', async () => {
await this.performCleanup();
}, {
scheduled: true,
timezone: "UTC"
});
console.log('⏰ Scheduled cleanup: Daily at 2:00 AM UTC');
// Run immediate cleanup in development or if IMMEDIATE_CLEANUP is set
if (process.env.NODE_ENV === 'development' || process.env.IMMEDIATE_CLEANUP === 'true') {
console.log('🧹 Running immediate cleanup...');
setTimeout(() => this.performCleanup(), 5000);
}
// Health check endpoint simulation
setInterval(() => {
this.logHealthStatus();
}, 60000); // Every minute
console.log('✅ Data Retention Service started successfully');
} catch (error) {
console.error('❌ Failed to start Data Retention Service:', error);
process.exit(1);
}
}
/**
* Perform cleanup for all tenants
*/
async performCleanup() {
if (this.isRunning) {
console.log('⏳ Data retention cleanup already running, skipping...');
return;
}
this.isRunning = true;
const startTime = Date.now();
try {
console.log('🧹 Starting data retention cleanup...');
console.log(`⏰ Cleanup started at: ${new Date().toISOString()}`);
const { Tenant, DroneDetection, Heartbeat, SecurityLog } = await getModels();
// Get all active tenants with their retention policies
const tenants = await Tenant.findAll({
attributes: ['id', 'slug', 'features'],
where: {
is_active: true
}
});
console.log(`🏢 Found ${tenants.length} active tenants to process`);
let totalDetectionsDeleted = 0;
let totalHeartbeatsDeleted = 0;
let totalLogsDeleted = 0;
let errors = [];
for (const tenant of tenants) {
try {
const result = await this.cleanupTenant(tenant);
totalDetectionsDeleted += result.detections;
totalHeartbeatsDeleted += result.heartbeats;
totalLogsDeleted += result.logs;
} catch (error) {
console.error(`❌ Error cleaning tenant ${tenant.slug}:`, error);
errors.push({
tenantSlug: tenant.slug,
error: error.message,
timestamp: new Date().toISOString()
});
}
}
const duration = Date.now() - startTime;
this.lastCleanup = new Date();
this.cleanupStats.totalRuns++;
this.cleanupStats.totalDetectionsDeleted += totalDetectionsDeleted;
this.cleanupStats.totalHeartbeatsDeleted += totalHeartbeatsDeleted;
this.cleanupStats.totalLogsDeleted += totalLogsDeleted;
this.cleanupStats.lastRunDuration = duration;
this.cleanupStats.errors = errors;
console.log('✅ Data retention cleanup completed');
console.log(`⏱️ Duration: ${duration}ms`);
console.log(`📊 Deleted: ${totalDetectionsDeleted} detections, ${totalHeartbeatsDeleted} heartbeats, ${totalLogsDeleted} logs`);
if (errors.length > 0) {
console.log(`⚠️ Errors encountered: ${errors.length}`);
errors.forEach(err => console.log(` - ${err.tenantSlug}: ${err.error}`));
}
} catch (error) {
console.error('❌ Data retention cleanup failed:', error);
this.cleanupStats.errors.push({
error: error.message,
timestamp: new Date().toISOString()
});
} finally {
this.isRunning = false;
}
}
/**
* Clean up data for a specific tenant
*/
async cleanupTenant(tenant) {
const retentionDays = tenant.features?.data_retention_days;
// Skip if unlimited retention (-1)
if (retentionDays === -1) {
console.log(`⏭️ Skipping tenant ${tenant.slug} - unlimited retention`);
return { detections: 0, heartbeats: 0, logs: 0 };
}
// Default to 90 days if not specified
const effectiveRetentionDays = retentionDays || 90;
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - effectiveRetentionDays);
console.log(`🧹 Cleaning tenant ${tenant.slug} - removing data older than ${effectiveRetentionDays} days (before ${cutoffDate.toISOString()})`);
const { DroneDetection, Heartbeat, SecurityLog } = await getModels();
// Clean up drone detections
const deletedDetections = await DroneDetection.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up heartbeats
const deletedHeartbeats = await Heartbeat.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up security logs (if they have tenant_id)
let deletedLogs = 0;
try {
deletedLogs = await SecurityLog.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
} catch (error) {
// SecurityLog might not have tenant_id field
console.log(`⚠️ Skipping security logs for tenant ${tenant.slug}: ${error.message}`);
}
console.log(`✅ Tenant ${tenant.slug}: Deleted ${deletedDetections} detections, ${deletedHeartbeats} heartbeats, ${deletedLogs} logs`);
return {
detections: deletedDetections,
heartbeats: deletedHeartbeats,
logs: deletedLogs
};
}
/**
* Log health status
*/
logHealthStatus() {
const memUsage = process.memoryUsage();
const uptime = process.uptime();
console.log(`💚 Health Check - Uptime: ${Math.floor(uptime)}s, Memory: ${Math.round(memUsage.heapUsed / 1024 / 1024)}MB, Last Cleanup: ${this.lastCleanup ? this.lastCleanup.toISOString() : 'Never'}`);
}
/**
* Get service statistics
*/
getStats() {
return {
...this.cleanupStats,
isRunning: this.isRunning,
lastCleanup: this.lastCleanup,
uptime: process.uptime(),
memoryUsage: process.memoryUsage(),
nextScheduledRun: '2:00 AM UTC daily'
};
}
/**
* Graceful shutdown
*/
async shutdown() {
console.log('🔄 Graceful shutdown initiated...');
// Wait for current cleanup to finish
while (this.isRunning) {
console.log('⏳ Waiting for cleanup to finish...');
await new Promise(resolve => setTimeout(resolve, 1000));
}
console.log('✅ Data Retention Service stopped');
process.exit(0);
}
}
// Initialize and start the service
const service = new DataRetentionService();
// Handle graceful shutdown
process.on('SIGTERM', () => service.shutdown());
process.on('SIGINT', () => service.shutdown());
// Start the service
service.start().catch(error => {
console.error('Failed to start service:', error);
process.exit(1);
});
module.exports = DataRetentionService;

View File

@@ -0,0 +1,24 @@
{
"name": "data-retention-service",
"version": "1.0.0",
"description": "Automated data retention cleanup service for drone detection system",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"pg": "^8.11.3",
"sequelize": "^6.32.1",
"node-cron": "^3.0.2",
"dotenv": "^16.3.1"
},
"devDependencies": {
"nodemon": "^3.0.1",
"jest": "^29.6.1"
},
"engines": {
"node": ">=18.0.0"
}
}

View File

@@ -173,6 +173,39 @@ services:
- simulation
command: python drone_simulator.py --devices 5 --duration 3600
# Data Retention Service (Microservice)
data-retention:
build:
context: ./data-retention-service
dockerfile: Dockerfile
container_name: drone-detection-data-retention
restart: unless-stopped
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: ${DB_NAME:-drone_detection}
DB_USER: ${DB_USER:-postgres}
DB_PASSWORD: ${DB_PASSWORD:-your_secure_password}
NODE_ENV: ${NODE_ENV:-production}
IMMEDIATE_CLEANUP: ${IMMEDIATE_CLEANUP:-false}
networks:
- drone-network
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "node", "healthcheck.js"]
interval: 30s
timeout: 10s
retries: 3
# Resource limits for lightweight container
deploy:
resources:
limits:
memory: 128M
reservations:
memory: 64M
# Health Probe Simulator (Continuous Device Heartbeats)
healthprobe:
build:

View File

@@ -236,7 +236,7 @@ async function startServer() {
deviceHealthService.start();
console.log('🏥 Device health monitoring: ✅ Started');
// Graceful shutdown for device health service
// Graceful shutdown for services
process.on('SIGTERM', () => {
console.log('SIGTERM received, shutting down gracefully');
deviceHealthService.stop();

View File

@@ -3,10 +3,10 @@
* Enforces tenant subscription limits for users, devices, API rate limits, etc.
*/
const MultiTenantAuth = require('./multi-tenant-auth');
const { securityLogger } = require('./logger');
// Initialize multi-tenant auth
const MultiTenantAuth = require('./multi-tenant-auth');
const multiAuth = new MultiTenantAuth();
/**

View File

@@ -616,6 +616,135 @@ router.put('/security', authenticateToken, requirePermissions(['security.edit'])
}
});
/**
* GET /tenant/limits
* Get current tenant limits and usage status
*/
router.get('/limits', authenticateToken, async (req, res) => {
try {
// Determine tenant from request
const tenantId = await multiAuth.determineTenant(req);
if (!tenantId) {
return res.status(400).json({
success: false,
message: 'Unable to determine tenant'
});
}
const tenant = await Tenant.findOne({ where: { slug: tenantId } });
if (!tenant) {
return res.status(404).json({
success: false,
message: 'Tenant not found'
});
}
const { getTenantLimitsStatus } = require('../middleware/tenant-limits');
const limitsStatus = await getTenantLimitsStatus(tenant.id);
res.json({
success: true,
data: limitsStatus
});
} catch (error) {
console.error('Error fetching tenant limits:', error);
res.status(500).json({
success: false,
message: 'Failed to fetch tenant limits'
});
}
});
/**
* GET /tenant/data-retention/preview
* Preview what data would be deleted by retention cleanup
* Note: Actual cleanup is handled by separate data-retention-service container
*/
router.get('/data-retention/preview', authenticateToken, requirePermissions(['settings.view']), async (req, res) => {
try {
// Determine tenant from request
const tenantId = await multiAuth.determineTenant(req);
if (!tenantId) {
return res.status(400).json({
success: false,
message: 'Unable to determine tenant'
});
}
const tenant = await Tenant.findOne({ where: { slug: tenantId } });
if (!tenant) {
return res.status(404).json({
success: false,
message: 'Tenant not found'
});
}
// Calculate what would be deleted (preview only)
const retentionDays = tenant.features?.data_retention_days || 90;
if (retentionDays === -1) {
return res.json({
success: true,
data: {
tenantSlug: tenant.slug,
retentionDays: 'unlimited',
cutoffDate: null,
toDelete: {
detections: 0,
heartbeats: 0,
logs: 0
},
note: 'This tenant has unlimited data retention'
}
});
}
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - retentionDays);
const { DroneDetection, Heartbeat } = require('../models');
const { Op } = require('sequelize');
const [detectionsCount, heartbeatsCount] = await Promise.all([
DroneDetection.count({
where: {
tenant_id: tenant.id,
timestamp: { [Op.lt]: cutoffDate }
}
}),
Heartbeat.count({
where: {
tenant_id: tenant.id,
timestamp: { [Op.lt]: cutoffDate }
}
})
]);
res.json({
success: true,
data: {
tenantSlug: tenant.slug,
retentionDays,
cutoffDate: cutoffDate.toISOString(),
toDelete: {
detections: detectionsCount,
heartbeats: heartbeatsCount,
logs: 0 // Security logs are cleaned up by the data retention service
},
note: 'Actual cleanup is performed daily at 2:00 AM UTC by the data-retention-service container'
}
});
} catch (error) {
console.error('Error previewing data retention cleanup:', error);
res.status(500).json({
success: false,
message: 'Failed to preview data retention cleanup'
});
}
});
/**
* GET /tenant/users
* Get users in current tenant (user admin or higher)

View File

@@ -0,0 +1,292 @@
/**
* Data Retention Service
* Automatically cleans up old data based on tenant retention policies
*/
const cron = require('node-cron');
const { Op } = require('sequelize');
const { securityLogger } = require('../middleware/logger');
class DataRetentionService {
constructor() {
this.isRunning = false;
this.lastCleanup = null;
this.cleanupStats = {
totalRuns: 0,
totalDetectionsDeleted: 0,
totalHeartbeatsDeleted: 0,
totalLogsDeleted: 0,
lastRunDuration: 0
};
}
/**
* Start the data retention cleanup service
* Runs daily at 2 AM
*/
start() {
console.log('🗂️ Starting Data Retention Service...');
// Run daily at 2:00 AM
cron.schedule('0 2 * * *', async () => {
await this.performCleanup();
}, {
scheduled: true,
timezone: "UTC"
});
// Also run immediately if NODE_ENV is development
if (process.env.NODE_ENV === 'development') {
console.log('🧹 Development mode: Running initial data retention cleanup...');
setTimeout(() => this.performCleanup(), 5000); // Wait 5 seconds for app to fully start
}
console.log('✅ Data Retention Service started - will run daily at 2:00 AM UTC');
}
/**
* Perform cleanup for all tenants
*/
async performCleanup() {
if (this.isRunning) {
console.log('⏳ Data retention cleanup already running, skipping...');
return;
}
this.isRunning = true;
const startTime = Date.now();
try {
console.log('🧹 Starting data retention cleanup...');
const { Tenant, DroneDetection, Heartbeat, SecurityLog } = require('../models');
// Get all tenants with their retention policies
const tenants = await Tenant.findAll({
attributes: ['id', 'slug', 'features'],
where: {
is_active: true
}
});
let totalDetectionsDeleted = 0;
let totalHeartbeatsDeleted = 0;
let totalLogsDeleted = 0;
for (const tenant of tenants) {
const retentionDays = tenant.features?.data_retention_days;
// Skip if unlimited retention (-1)
if (retentionDays === -1) {
console.log(`⏭️ Skipping tenant ${tenant.slug} - unlimited retention`);
continue;
}
// Default to 90 days if not specified
const effectiveRetentionDays = retentionDays || 90;
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - effectiveRetentionDays);
console.log(`🧹 Cleaning tenant ${tenant.slug} - removing data older than ${effectiveRetentionDays} days (before ${cutoffDate.toISOString()})`);
try {
// Clean up drone detections
const deletedDetections = await DroneDetection.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up heartbeats
const deletedHeartbeats = await Heartbeat.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
// Clean up security logs (if they have tenant_id)
let deletedLogs = 0;
try {
deletedLogs = await SecurityLog.destroy({
where: {
tenant_id: tenant.id,
timestamp: {
[Op.lt]: cutoffDate
}
}
});
} catch (error) {
// SecurityLog might not have tenant_id field, skip if error
console.log(`⚠️ Skipping security logs cleanup for tenant ${tenant.slug}: ${error.message}`);
}
totalDetectionsDeleted += deletedDetections;
totalHeartbeatsDeleted += deletedHeartbeats;
totalLogsDeleted += deletedLogs;
console.log(`✅ Tenant ${tenant.slug}: Deleted ${deletedDetections} detections, ${deletedHeartbeats} heartbeats, ${deletedLogs} logs`);
// Log significant cleanup events
if (deletedDetections > 100 || deletedHeartbeats > 100) {
securityLogger.logSecurityEvent('info', 'Data retention cleanup performed', {
action: 'data_retention_cleanup',
tenantId: tenant.id,
tenantSlug: tenant.slug,
retentionDays: effectiveRetentionDays,
cutoffDate: cutoffDate.toISOString(),
deletedDetections,
deletedHeartbeats,
deletedLogs
});
}
} catch (error) {
console.error(`❌ Error cleaning tenant ${tenant.slug}:`, error);
securityLogger.logSecurityEvent('error', 'Data retention cleanup failed', {
action: 'data_retention_cleanup_error',
tenantId: tenant.id,
tenantSlug: tenant.slug,
error: error.message,
stack: error.stack
});
}
}
const duration = Date.now() - startTime;
this.lastCleanup = new Date();
this.cleanupStats.totalRuns++;
this.cleanupStats.totalDetectionsDeleted += totalDetectionsDeleted;
this.cleanupStats.totalHeartbeatsDeleted += totalHeartbeatsDeleted;
this.cleanupStats.totalLogsDeleted += totalLogsDeleted;
this.cleanupStats.lastRunDuration = duration;
console.log(`✅ Data retention cleanup completed in ${duration}ms`);
console.log(`📊 Total deleted: ${totalDetectionsDeleted} detections, ${totalHeartbeatsDeleted} heartbeats, ${totalLogsDeleted} logs`);
// Log cleanup summary
securityLogger.logSecurityEvent('info', 'Data retention cleanup completed', {
action: 'data_retention_cleanup_summary',
duration,
tenantsProcessed: tenants.length,
totalDetectionsDeleted,
totalHeartbeatsDeleted,
totalLogsDeleted,
timestamp: new Date().toISOString()
});
} catch (error) {
console.error('❌ Data retention cleanup failed:', error);
securityLogger.logSecurityEvent('error', 'Data retention cleanup service failed', {
action: 'data_retention_service_error',
error: error.message,
stack: error.stack,
timestamp: new Date().toISOString()
});
} finally {
this.isRunning = false;
}
}
/**
* Get cleanup statistics
*/
getStats() {
return {
...this.cleanupStats,
isRunning: this.isRunning,
lastCleanup: this.lastCleanup,
nextScheduledRun: '2:00 AM UTC daily'
};
}
/**
* Manually trigger cleanup (for testing/admin use)
*/
async triggerManualCleanup() {
console.log('🔧 Manual data retention cleanup triggered');
await this.performCleanup();
}
/**
* Preview what would be deleted for a specific tenant
*/
async previewCleanup(tenantId) {
try {
const { Tenant, DroneDetection, Heartbeat, SecurityLog } = require('../models');
const tenant = await Tenant.findByPk(tenantId);
if (!tenant) {
throw new Error('Tenant not found');
}
const retentionDays = tenant.features?.data_retention_days || 90;
if (retentionDays === -1) {
return {
tenantSlug: tenant.slug,
retentionDays: 'unlimited',
toDelete: {
detections: 0,
heartbeats: 0,
logs: 0
}
};
}
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - retentionDays);
const [detectionsCount, heartbeatsCount] = await Promise.all([
DroneDetection.count({
where: {
tenant_id: tenant.id,
timestamp: { [Op.lt]: cutoffDate }
}
}),
Heartbeat.count({
where: {
tenant_id: tenant.id,
timestamp: { [Op.lt]: cutoffDate }
}
})
]);
let logsCount = 0;
try {
logsCount = await SecurityLog.count({
where: {
tenant_id: tenant.id,
timestamp: { [Op.lt]: cutoffDate }
}
});
} catch (error) {
// SecurityLog might not have tenant_id
}
return {
tenantSlug: tenant.slug,
retentionDays,
cutoffDate: cutoffDate.toISOString(),
toDelete: {
detections: detectionsCount,
heartbeats: heartbeatsCount,
logs: logsCount
}
};
} catch (error) {
console.error('Error previewing cleanup:', error);
throw error;
}
}
}
module.exports = DataRetentionService;