
Dev Server Infrastructure Guide

Version: 2.1.0 | Last Updated: 2026-03-16 | Status: Production


Overview

Use the current runbook for live operations

For current runtime values, troubleshooting flow, and rollback procedures, use DEV-SERVER-RUNBOOK.md. This guide remains a broad infrastructure reference and includes historical sections that may not match the latest live tuning values.

This guide is a comprehensive reference for the MBPanel Development Server infrastructure hosted on Virtuozzo Application Platform (Jelastic PaaS). It covers all server configurations, deployment procedures, maintenance operations, and troubleshooting.

Infrastructure Summary

Server     IP Address      URL                                  User     Platform
Backend    15.204.15.210   https://dev-backend.mightybox.site   apache   Apache 2.4.66 + mod_wsgi 4.9.4
Frontend   15.204.15.169   http://dev-frontend.mightybox.site   nodejs   Node.js 25.7.0 + PM2 + Nginx

Connected Infrastructure Services

Service      Host           Port   Purpose
PostgreSQL   172.16.4.139   5432   Primary database (mbpanel_dev)
Redis        172.16.3.213   6379   Caching & sessions
RabbitMQ     172.16.3.83    5672   Message broker for SSE/Celery

1. Backend Server (15.204.15.210)

1.1 System Specifications

Component    Configuration
OS           AlmaLinux 9.7 (Moss Jungle Cat)
Platform     Virtuozzo Application Platform (Jelastic PaaS)
CPU          Intel Xeon E-2388G @ 3.20GHz (2 cores)
Memory       2 GB RAM + 192 MB Swap
Disk         29 GB (3.4 GB used, 24 GB available)
Web Server   Apache 2.4.66
WSGI         mod_wsgi 4.9.4
Python       3.12.12
Framework    FastAPI 0.124.4 with ASGI-to-WSGI adapter

1.2 Running Services

UNIT                     STATUS    DESCRIPTION
httpd.service            running   The Apache HTTP Server
atd.service              running   Deferred execution scheduler
crond.service            running   Command Scheduler
dbus-broker.service      running   D-Bus System Message Bus
gssproxy.service         running   GSSAPI Proxy Daemon
nscd.service             running   Name Service Cache Daemon
rpcbind.service          running   RPC Bind
rsyslog.service          running   System Logging Service
saslauthd.service        running   SASL authentication daemon
sendmail.service         running   Sendmail Mail Transport Agent
sm-client.service        running   Sendmail Mail Transport Client
sshd.service             running   OpenSSH server daemon
systemd-journald.service running   Journal Service
systemd-udevd.service    running   Rule-based Manager for Device Events

1.3 Directory Structure

/var/www/webroot/
├── ROOT/                           # Application root (deployed code)
│   ├── app/                        # FastAPI application modules
│   │   ├── api/                    # API routes
│   │   ├── core/                   # Core configuration
│   │   ├── domains/                # Domain modules (DDD)
│   │   ├── modules/                # Feature modules
│   │   ├── infrastructure/         # External integrations
│   │   └── main.py                 # FastAPI app entry point
│   ├── alembic/                    # Database migrations
│   ├── wsgi.py                     # WSGI entry point
│   ├── supervisord.conf            # Supervisor config for Celery
│   ├── supervisor.d/               # Supervisor program configs
│   │   └── celery-worker.ini       # Celery worker definition
│   ├── requirements.txt            # Python dependencies
│   ├── pyproject.toml              # Project configuration
│   └── .env                        # Environment variables
├── virtenv/                        # Python virtual environment
│   └── lib/python3.12/site-packages/
├── backups/                        # Deployment backups
│   └── ROOT.backup.YYYYMMDD_HHMMSS/
└── logs/                           # (symlink to /var/log/httpd/)

1.4 Apache Configuration

Main Configuration: /etc/httpd/conf/httpd.conf

ServerRoot "/etc/httpd"
Listen [::]:80
User apache
Group apache
Pidfile /var/run/httpd/httpd.pid
LimitRequestBody 104857600
DocumentRoot "/var/www/html"

# MPM Configuration (prefork)
<IfModule prefork.c>
StartServers         1
MinSpareServers      1
MaxSpareServers      3
ServerLimit         10
MaxRequestWorkers   10
MaxConnectionsPerChild 500
</IfModule>

# Compression
AddOutputFilterByType DEFLATE text/plain text/html text/xml text/css
AddOutputFilterByType DEFLATE application/xml application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml application/javascript

Include conf.modules.d/*.conf
IncludeOptional conf.d/*.conf

WSGI Configuration: /etc/httpd/conf.d/wsgi.conf

Protocols h2 h2c http/1.1

Header unset Upgrade

WSGIDaemonProcess apache user=apache group=apache processes=1 threads=25 \
    python-path="/var/www/webroot/virtenv/lib/python3.12/site-packages:/var/www/webroot/" \
    home="/var/www/webroot/"

ServerRoot "/var/www/webroot/"
DocumentRoot "/var/www/webroot/"
User apache
Group apache

DefaultRuntimeDir "/var/run"

ErrorLog "/var/log/httpd/error_log"
CustomLog "/var/log/httpd/access_log" combined

<Directory "/var/www/webroot/">
  AllowOverride all
  Options -MultiViews
</Directory>

Alias /robots.txt /var/www/webroot/ROOT/robots.txt
Alias /favicon.ico /var/www/webroot/ROOT/favicon.ico
Alias /images /var/www/webroot/ROOT/images
Alias /static /var/www/webroot/ROOT/static
Alias /.well-known /var/www/webroot/ROOT/.well-known

WSGIScriptAlias / /var/www/webroot/ROOT/wsgi.py
WSGISocketPrefix "/tmp/wsgi"
WSGIPassAuthorization On
WSGIProcessGroup apache
WSGIApplicationGroup %{GLOBAL}

Known Configuration Issue

The server's actual WSGI config contains python3.14 in the path, but the virtual environment is python3.12. This mismatch exists on the live server and should be corrected:

# Fix command (if needed):
sudo sed -i 's/python3\.14/python3.12/g' /etc/httpd/conf.d/wsgi.conf
sudo systemctl restart httpd

Critical: WSGIApplicationGroup %{GLOBAL}

This directive is required for PyO3/Pydantic compatibility. Without it, the server will crash with:

ImportError: PyO3 modules do not yet support subinterpreters

SSL Configuration: /etc/httpd/conf.d/ssl.conf

<IfModule ssl_module>
Listen [::]:443
<VirtualHost *:443>
    ErrorLog /var/log/httpd/ssl-error.log
    CustomLog /var/log/httpd/ssl-access.log combined

    SSLEngine on

    SSLProtocol             all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    SSLHonorCipherOrder     off
    SSLSessionTickets       off

    SSLCertificateFile      /var/lib/jelastic/SSL/jelastic.crt
    SSLCertificateKeyFile   /var/lib/jelastic/SSL/jelastic.key
    SSLCACertificateFile    /var/lib/jelastic/SSL/jelastic-ca.crt

    <IfModule headers_module>
        Header always set Strict-Transport-Security "max-age=5; includeSubDomains"
        Header always set Expect-CT "max-age=3600, enforce"
    </IfModule>
</VirtualHost>
</IfModule>

Security Headers: /etc/httpd/conf.d/10-shared_headers.conf

<IfModule headers_module>
    Header always set Cross-Origin-Embedder-Policy unsafe-none
    Header always set Cross-Origin-Opener-Policy same-origin-allow-popups
    Header always set Cross-Origin-Resource-Policy same-origin
    Header always set Permissions-Policy "geolocation=(self), payment=(self)"
    Header always set Referrer-Policy strict-origin-when-cross-origin
    Header always set X-Content-Type-Options nosniff
    Header always set X-Frame-Options SAMEORIGIN
    Header always set X-Permitted-Cross-Domain-Policies none
    Header always set X-XSS-Protection "1; mode=block;"
</IfModule>

ModSecurity Configuration: /etc/httpd/conf.d/mod_security.conf

<IfModule mod_security2.c>
    IncludeOptional modsecurity.d/*.conf
    IncludeOptional modsecurity.d/activated_rules/*.conf
    IncludeOptional modsecurity.d/activated_rules/rules/*.conf

    SecRuleEngine On
    SecStatusEngine On
    SecRequestBodyAccess On
    SecRequestBodyLimit 13107200
    SecRequestBodyNoFilesLimit 131072
    SecRequestBodyInMemoryLimit 131072
    SecRequestBodyLimitAction Reject
    SecResponseBodyAccess Off
    SecDebugLog /var/log/httpd/modsec_debug.log
    SecDebugLogLevel 0
    SecAuditEngine RelevantOnly
    SecAuditLogRelevantStatus "^(?:5|4(?!04))"
    SecAuditLogParts ABIJDEFHZ
    SecAuditLogType serial
    SecAuditLog /var/log/httpd/modsec_audit.log
    SecArgumentSeparator &
    SecCookieFormat 0
    SecTmpDir /var/lib/mod_security
    SecDataDir /var/lib/mod_security
</IfModule>

1.5 Systemd Service: /etc/systemd/system/httpd.service

[Unit]
Description=The Apache HTTP Server
After=network.target remote-fs.target nss-lookup.target
Documentation=man:httpd(8)
Documentation=man:apachectl(8)

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/httpd
EnvironmentFile=-/usr/local/etc/httpd_vars
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
ExecStop=/bin/kill -WINCH ${MAINPID}
KillSignal=SIGCONT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

1.6 Environment Variables: /usr/local/etc/httpd_vars

APACHE_VERSION=2.4.66
DOCKER_EXPOSED_PORT=21,22,25,443,7979,80
MASTER_HOST=node9809
MASTER_ID=9809
MASTER_IP=172.16.2.198
MOD_WSGI_VERSION=4.9.4
OWASP_MODSECURITY_CRS_VERSION=3.3.2
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHON_VERSION=3.12.12
STACK_VERSION=2.4.66
VERSION=3.12.12
WEBROOT=/var/www/webroot
WSGI_SCRIPT=/var/www/webroot/ROOT/wsgi.py

1.7 WSGI Entry Point: /var/www/webroot/ROOT/wsgi.py

"""
WSGI entry point for Apache/mod_wsgi deployment.

IMPORTANT: a2wsgi does not support the ASGI lifespan protocol.
This means FastAPI's lifespan context manager (which starts the SSE broker)
is NEVER executed when running under mod_wsgi.

As a workaround, we start the SSE broker in a background thread at module
load time. This ensures the broker connects to RabbitMQ and can consume
events for SSE fan-out.

See: https://github.com/abersheeran/a2wsgi (lifespan limitation)
"""

import asyncio
import os
import sys
import threading

# Add the application directory to the Python path
sys.path.insert(0, os.path.dirname(__file__))

from a2wsgi import ASGIMiddleware

from app.core.config import settings
from app.infrastructure.messaging.sse import broker
from app.main import app

# Global reference to keep the broker thread alive
_broker_thread: threading.Thread | None = None
_broker_loop: asyncio.AbstractEventLoop | None = None


def _start_sse_broker() -> None:
    """Start the SSE broker in a background thread.

    This is a workaround for a2wsgi not supporting the ASGI lifespan protocol.
    The broker needs to run in its own event loop in a background thread.
    """
    global _broker_loop

    # Create a new event loop for this thread
    _broker_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(_broker_loop)

    try:
        # Start the broker (connects to RabbitMQ)
        _broker_loop.run_until_complete(broker.start())
        # Keep the loop running to handle background tasks
        _broker_loop.run_forever()
    except Exception as e:
        # Log error but don't crash the thread
        import traceback

        print(f"[WSGI] SSE broker thread error: {e}", file=sys.stderr)
        traceback.print_exc()


# Start SSE broker at module load time if enabled
# This runs when Apache/mod_wsgi imports this module
if settings.enable_sse_broker:
    _broker_thread = threading.Thread(
        target=_start_sse_broker,
        name="sse-broker",
        daemon=True,  # Dies when main process exits
    )
    _broker_thread.start()

# Create the WSGI application
# Note: ASGIMiddleware does NOT execute the FastAPI lifespan context manager
application = ASGIMiddleware(app)

SSE Broker Background Thread

The WSGI entry point includes a background thread for the SSE (Server-Sent Events) broker because a2wsgi doesn't support ASGI lifespan events. This thread connects to RabbitMQ to fan-out real-time events to clients.


2. Supervisor/Celery Configuration

2.1 Overview

Supervisor is a process control system that manages the Celery worker for background task processing. It ensures the worker:

  • Starts automatically on server boot
  • Restarts automatically if it crashes
  • Logs output to files for debugging

2.2 Architecture

┌─────────────────────────────────────────────────────────────┐
│                     Backend Server                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐     ┌─────────────────────────────────┐   │
│  │  Apache     │     │       Supervisor                │   │
│  │  (httpd)    │     │  ┌───────────────────────────┐  │   │
│  │             │     │  │ supervisord (daemon)      │  │   │
│  │  Port 80/443│     │  │                           │  │   │
│  │             │     │  │  ┌─────────────────────┐  │  │   │
│  └─────────────┘     │  │  │ celery-worker       │  │  │   │
│                      │  │  │ (2 concurrency)     │  │  │   │
│  ┌─────────────┐     │  │  │                     │  │  │   │
│  │  Redis      │◄────┼──┼──┤  Processes async   │  │  │   │
│  │  (broker)   │     │  │  │  tasks from queue  │  │  │   │
│  └─────────────┘     │  │  └─────────────────────┘  │  │   │
│                      │  │                           │  │   │
│  ┌─────────────┐     │  └───────────────────────────┘  │   │
│  │ RabbitMQ    │◄────┤                                │   │
│  │ (external)  │     └─────────────────────────────────┘   │
│  └─────────────┘                                           │
│                                                             │
└─────────────────────────────────────────────────────────────┘

2.3 Main Supervisor Config: /etc/supervisord.conf

[unix_http_server]
file=/run/supervisor/supervisor.sock
chmod=0700

[supervisord]
logfile=/var/log/supervisor/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/run/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///run/supervisor/supervisor.sock

[include]
files = supervisord.d/*.ini

2.4 Application Supervisor Config: /var/www/webroot/ROOT/supervisord.conf

[unix_http_server]
file=/tmp/supervisor.sock
chmod=0700

[supervisord]
logfile=/var/log/celery/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/tmp/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock

[include]
files = /var/www/webroot/ROOT/supervisor.d/*.ini

2.5 Celery Worker Config: /var/www/webroot/ROOT/supervisor.d/celery-worker.ini

[program:celery-worker]
; Celery worker for background task processing
command=/var/www/webroot/virtenv/bin/celery -A app.infrastructure.background.celery_app worker --loglevel=info --concurrency=2
directory=/var/www/webroot/ROOT
user=apache
numprocs=1
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=60
stopasgroup=true
killasgroup=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker_error.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
environment=PYTHONPATH="/var/www/webroot/virtenv/lib/python3.12/site-packages:/var/www/webroot/ROOT"

2.6 Supervisor Startup Script: /usr/local/bin/start-supervisor.sh

#!/bin/bash
# Celery Supervisor Startup Script
set -e

VENV_PATH="/var/www/webroot/virtenv"
CONFIG_PATH="/var/www/webroot/ROOT/supervisord.conf"
PIDFILE="/tmp/supervisord.pid"

# Check if already running
if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") 2>/dev/null; then
    echo "Supervisor is already running (PID: $(cat $PIDFILE))"
    exit 0
fi

# Activate virtual environment and start supervisor
source "$VENV_PATH/bin/activate"
supervisord -c "$CONFIG_PATH"

echo "Supervisor started successfully"

2.7 Cron Job for Auto-Start

@reboot /usr/local/bin/start-supervisor.sh >> /var/log/celery/supervisor-startup.log 2>&1

2.8 Supervisor Commands Reference

# Check if supervisor is running
ps aux | grep supervisord | grep -v grep

# Check status of all managed processes
supervisorctl status

# Check status of specific process
supervisorctl status celery-worker

# Start supervisor daemon (if not running)
/usr/local/bin/start-supervisor.sh

# Or start manually
source /var/www/webroot/virtenv/bin/activate
supervisord -c /var/www/webroot/ROOT/supervisord.conf

# Stop supervisor daemon
supervisorctl shutdown

# Restart supervisor daemon
supervisorctl restart all

# Start specific process
supervisorctl start celery-worker

# Stop specific process
supervisorctl stop celery-worker

# Restart specific process
supervisorctl restart celery-worker

# View logs (tail -f style)
supervisorctl tail -f celery-worker

# View stderr logs
supervisorctl tail -f celery-worker stderr

# Clear process logs
supervisorctl clear celery-worker

# Reload configuration (after config changes)
supervisorctl reread
supervisorctl update

# Connect to supervisor socket
supervisorctl -s unix:///tmp/supervisor.sock status

2.9 Checking for Errors

Check Supervisor Status

ssh apache@15.204.15.210

# Check if supervisord daemon is running
ps aux | grep supervisord
# Expected: /usr/bin/python3 /var/www/webroot/virtenv/bin/supervisord

# Check managed process status
supervisorctl status
# Expected: celery-worker  RUNNING  pid 12345, uptime X:XX:XX

Check Supervisor Logs

# Main supervisor daemon log
tail -100 /var/log/celery/supervisord.log

# Celery worker stdout
tail -100 /var/log/celery/worker.log

# Celery worker stderr (errors)
tail -100 /var/log/celery/worker_error.log

# Startup log
tail -50 /var/log/celery/supervisor-startup.log

Common Error Scenarios

Error                      Symptom                            Solution
"Connection refused"       Celery can't connect to RabbitMQ   Check RabbitMQ is running at 172.16.3.83:5672
"No module named 'app'"    Import error in worker             Check PYTHONPATH in the supervisor config
"Permission denied"        Can't write to log files           sudo chown -R apache:apache /var/log/celery/
"Address already in use"   Worker already running             supervisorctl stop celery-worker && supervisorctl start celery-worker
"BACKLOG LIMIT EXCEEDED"   Too many pending tasks             Purge the Celery queue or increase concurrency

Debugging Celery Worker

# Test Celery worker manually (outside supervisor)
source /var/www/webroot/virtenv/bin/activate
cd /var/www/webroot/ROOT
celery -A app.infrastructure.background.celery_app worker --loglevel=debug

# Check Celery queues
source /var/www/webroot/virtenv/bin/activate
cd /var/www/webroot/ROOT
celery -A app.infrastructure.background.celery_app inspect active
celery -A app.infrastructure.background.celery_app inspect reserved
celery -A app.infrastructure.background.celery_app inspect registered

# Purge all tasks (dangerous!)
celery -A app.infrastructure.background.celery_app purge

2.10 Troubleshooting Checklist

  1. Supervisor not starting?

    # Check if supervisor is installed
    which supervisord
    # Check the config loads: supervisord has no dedicated syntax-check flag,
    # so run it briefly in the foreground (Ctrl-C to stop) and watch for
    # parse errors on startup
    supervisord -n -c /var/www/webroot/ROOT/supervisord.conf
    # Check log directory exists
    ls -la /var/log/celery/
    

  2. Celery worker keeps crashing?

    # Check error log
    tail -f /var/log/celery/worker_error.log
    # Check if virtual env has all dependencies
    /var/www/webroot/virtenv/bin/pip list | grep celery
    # Test import manually
    source /var/www/webroot/virtenv/bin/activate
    python -c "from app.infrastructure.background.celery_app import celery_app"
    

  3. Tasks not being processed?

    # Check RabbitMQ connection
    # From backend server:
    python3 -c "import pika; conn=pika.BlockingConnection(pika.URLParameters('amqp://mbauser:password@172.16.3.83:5672/')); print('OK')"
    
    # Check if worker is consuming
    supervisorctl tail celery-worker
    

  4. Worker consuming too much memory?

    # Check memory usage
    ps aux | grep celery | awk '{print $6, $11}'
    
    # Restart worker to free memory
    supervisorctl restart celery-worker
    


3. Frontend Server (15.204.15.169)

3.1 System Specifications

Component         Configuration
OS                AlmaLinux 9.7 (Moss Jungle Cat)
Platform          Virtuozzo Node.js Hosting
CPU               Intel Xeon E-2288G @ 3.70GHz (3 cores)
Memory            2.5 GB RAM + 240 MB Swap
Disk              48 GB (3.2 GB used, 43 GB available)
Runtime           Node.js v25.7.0
Package Manager   npm 11.10.1
Framework         Next.js 16 with App Router
Process Manager   PM2
Reverse Proxy     Nginx

3.2 Running Services

UNIT                     STATUS    DESCRIPTION
nginx.service            running   The nginx HTTP and reverse proxy server
crond.service            running   Command Scheduler
dbus-broker.service      running   D-Bus System Message Bus
gssproxy.service         running   GSSAPI Proxy Daemon
nscd.service             running   Name Service Cache Daemon
rpcbind.service          running   RPC Bind
rsyslog.service          running   System Logging Service
saslauthd.service        running   SASL authentication daemon
sendmail.service         running   Sendmail Mail Transport Agent
sm-client.service        running   Sendmail Mail Transport Client
sshd.service             running   OpenSSH server daemon
systemd-journald.service running   Journal Service
systemd-udevd.service    running   Rule-based Manager for Device Events

3.3 Directory Structure

/home/jelastic/ROOT/
├── .next/                      # Next.js build output
├── public/                     # Static assets
├── node_modules/               # Dependencies
├── ecosystem.config.cjs        # PM2 configuration
├── package.json                # Node.js dependencies
├── package-lock.json           # Dependency lockfile
├── next.config.mjs             # Next.js configuration
└── .env                        # Environment variables

/var/lib/jelastic/keys/
├── privkey.pem                 # SSL private key
├── fullchain.pem               # SSL certificate chain
├── cert.pem                    # Server certificate
├── ca.cer                      # CA certificate
├── dev-frontend.mightybox.site.cer
├── dev-frontend.mightybox.site.key
└── letsencrypt/                # Let's Encrypt data

3.4 Nginx Configuration

Main Config: /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
    }
}

HTTP Virtual Host: /etc/nginx/conf.d/dev-frontend.conf

server {
    listen 80;
    server_name dev-frontend.mightybox.site;

    # ACME challenge location for Let's Encrypt
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Proxy to Next.js
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

HTTPS Virtual Host: /etc/nginx/conf.d/dev-frontend-ssl.conf

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name dev-frontend.mightybox.site;

    ssl_certificate /var/lib/jelastic/keys/fullchain.pem;
    ssl_certificate_key /var/lib/jelastic/keys/privkey.pem;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_cache_bypass $http_upgrade;
    }
}

3.5 PM2 Configuration

Ecosystem Config: /home/jelastic/ROOT/ecosystem.config.cjs

module.exports = {
  apps: [{
    name: "MightyBox",
    script: "npm",
    args: "start",
    cwd: "/home/jelastic/ROOT",
    env: {
      NODE_ENV: "production",
      PORT: 3000
    }
  }]
}

Current PM2 Status

┌────┬────────────┬───────────┬─────────┬────────┬──────────┬─────────┐
│ id │ name       │ version   │ mode    │ pid    │ status   │ memory  │
├────┼────────────┼───────────┼─────────┼────────┼──────────┼─────────┤
│ 0  │ MightyBox  │ 0.40.4    │ fork    │ 1574308│ online   │ 41.9mb  │
└────┴────────────┴───────────┴─────────┴────────┴──────────┴─────────┘

PM2 App Name

The PM2 application is named "MightyBox", not "nextjs-app". Use this name in all PM2 commands.

3.6 Next.js Configuration: /home/jelastic/ROOT/next.config.mjs

/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
  turbopack: {
    root: '.',
  },
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'mightybox.io',
        pathname: '/wp-content/uploads/**',
      },
    ],
  },
  async rewrites() {
    let apiUrl = process.env.INTERNAL_API_URL || process.env.NEXT_PUBLIC_API_URL;
    if (!apiUrl || !/^https?:\/\//.test(apiUrl)) {
      apiUrl = 'http://localhost:8000';
    }
    return [
      {
        source: '/api/:path*',
        destination: `${apiUrl}/api/:path*`,
      },
    ];
  },
};

export default nextConfig;
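The URL fallback inside `rewrites()` matters operationally: a missing or malformed INTERNAL_API_URL silently reroutes all /api traffic to localhost:8000. For illustration, the same logic mirrored in Python (the helper name `resolve_api_url` is ours, not part of the codebase):

```python
import re

def resolve_api_url(env: dict) -> str:
    """Mirror of the fallback in next.config.mjs: prefer INTERNAL_API_URL,
    then NEXT_PUBLIC_API_URL; fall back to localhost when the value is
    missing or not an absolute http(s) URL."""
    api_url = env.get("INTERNAL_API_URL") or env.get("NEXT_PUBLIC_API_URL")
    if not api_url or not re.match(r"^https?://", api_url):
        return "http://localhost:8000"
    return api_url

print(resolve_api_url({"INTERNAL_API_URL": "https://dev-backend.mightybox.site"}))
# -> https://dev-backend.mightybox.site
print(resolve_api_url({"NEXT_PUBLIC_API_URL": "not-a-url"}))
# -> http://localhost:8000
```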

3.7 Application Environment: /home/jelastic/ROOT/.env

NODE_ENV=production
NEXT_PUBLIC_API_URL=https://dev-backend.mightybox.site
INTERNAL_API_URL=https://dev-backend.mightybox.site
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=6LfMmlQqAAAAAPaH9TXPjGWVhM2qao3c3rMtJDvE
NEXT_PUBLIC_REFRESH_IDLE_TIMEOUT_MINUTES=10
NEXT_PUBLIC_PLATFORM_DOMAIN=us-east.sites.mightybox.dev
NEXT_PUBLIC_SITE_DOMAIN=mightybox.cloud

# API Configuration
API_TIMEOUT=30000
API_RETRY_ATTEMPTS=3

# Stripe Payment Gateway
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_51PWEiKKiyBMl6m5sIzoIlDwMAEK2acVXSfAWF0UxI6r4XV0NH6HG2vC8920CTKUvuB2MNDqxrXmP9sj9p25HzFzj002rLKh6eT

# Billing Configuration
NEXT_PUBLIC_BILLING_MIN_AMOUNT=1
NEXT_PUBLIC_BILLING_THRESHOLDS=1,5,10,25,50,100
NEXT_PUBLIC_BILLING_AMOUNTS=1,500,1000,2500,5000,10000

3.8 PM2 Commands Reference

# Check status
pm2 status

# View logs
pm2 logs MightyBox

# View last 100 lines
pm2 logs MightyBox --lines 100

# Monitor (CPU/Memory)
pm2 monit

# Restart application
pm2 restart MightyBox

# Restart with environment reload
pm2 restart MightyBox --update-env

# Stop application
pm2 stop MightyBox

# Start application
pm2 start ecosystem.config.cjs --only MightyBox

# Save configuration (persists across reboots)
pm2 save

# Delete application
pm2 delete MightyBox

# Describe application (detailed info)
pm2 describe MightyBox

# Flush logs
pm2 flush MightyBox

# Reload (graceful)
pm2 reload MightyBox

3.9 SSL Certificates

Location: /var/lib/jelastic/keys/

File            Purpose
privkey.pem     Private key
fullchain.pem   Full certificate chain
cert.pem        Server certificate
ca.cer          CA certificate

Certificate Validity:

  • Not Before: Mar 9 15:31:04 2026 GMT
  • Not After:  Jun 7 15:31:03 2026 GMT
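For a quick sanity check of these dates (for example in a renewal-monitoring script), the openssl-style timestamps can be parsed directly; the values below are copied from the validity window above:

```python
from datetime import datetime, timezone

# Parse openssl-style validity timestamps and compute the certificate lifetime.
FMT = "%b %d %H:%M:%S %Y %Z"

not_before = datetime.strptime("Mar 9 15:31:04 2026 GMT", FMT).replace(tzinfo=timezone.utc)
not_after = datetime.strptime("Jun 7 15:31:03 2026 GMT", FMT).replace(tzinfo=timezone.utc)

lifetime = not_after - not_before
print(lifetime.days)  # -> 89 (one second short of the standard 90-day Let's Encrypt window)
```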


4. Deployment Procedures

4.1 Deployment Triggers

Both servers deploy automatically via GitHub Actions when changes are pushed to the development branch:

Workflow              Trigger Path   Target Server
deploy-backend.yml    backend/**     Backend (15.204.15.210)
deploy-frontend.yml   frontend/**    Frontend (15.204.15.169)
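For orientation, a path-filtered trigger of this shape produces the behavior described above. This is an illustrative sketch, not a copy of the actual workflow file, which lives in the repository and may differ:

```yaml
# Illustrative sketch of deploy-backend.yml's trigger section.
on:
  push:
    branches: [development]
    paths:
      - "backend/**"
```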

4.2 Manual Deployment

# Backend
gh workflow run deploy-backend.yml --ref development

# Frontend
gh workflow run deploy-frontend.yml --ref development

5. Log Locations Reference

5.1 Backend Logs

Log File                                 Purpose                Rotation
/var/log/httpd/access_log                HTTP access log        Daily, 30 days
/var/log/httpd/error_log                 HTTP error log         Daily, 30 days
/var/log/httpd/ssl-access.log            HTTPS access log       Daily, 30 days
/var/log/httpd/ssl-error.log             HTTPS error log        Daily, 30 days
/var/log/httpd/pip.log                   Pip install logs       Daily, 30 days
/var/log/httpd/modsec_audit.log          ModSecurity audit      Daily, 30 days
/var/log/celery/worker.log               Celery worker stdout   50MB, 10 backups
/var/log/celery/worker_error.log         Celery worker stderr   50MB, 10 backups
/var/log/celery/supervisord.log          Supervisor daemon      50MB, 10 backups
/var/log/celery/supervisor-startup.log   Startup log            No rotation
/var/log/letsencrypt.log                 SSL renewal log        No rotation

5.2 Frontend Logs

Log Source      Command
Nginx access    sudo tail -f /var/log/nginx/access.log
Nginx error     sudo tail -f /var/log/nginx/error.log
PM2 logs        pm2 logs MightyBox
PM2 log files   /home/jelastic/.pm2/logs/

6. Quick Reference

SSH Commands

ssh apache@15.204.15.210    # Backend
ssh nodejs@15.204.15.169    # Frontend

Service Restart Commands

# Backend
sudo systemctl restart httpd           # Apache
supervisorctl restart celery-worker    # Celery
/usr/local/bin/start-supervisor.sh     # Start supervisor

# Frontend
pm2 restart MightyBox                  # Next.js app
sudo systemctl restart nginx           # Nginx

Health Check Endpoints

# Backend
curl https://dev-backend.mightybox.site/api/v1/health

# Frontend
curl -I http://dev-frontend.mightybox.site

Critical Configuration Files

Server     File                                                   Purpose
Backend    /etc/httpd/conf.d/wsgi.conf                            WSGI configuration
Backend    /etc/httpd/conf.d/ssl.conf                             SSL configuration
Backend    /usr/local/etc/httpd_vars                              Environment variables
Backend    /var/www/webroot/ROOT/supervisord.conf                 Supervisor config
Backend    /var/www/webroot/ROOT/supervisor.d/celery-worker.ini   Celery worker
Backend    /var/www/webroot/ROOT/.env                             Application environment
Backend    /var/www/webroot/ROOT/wsgi.py                          WSGI entry point
Frontend   /etc/nginx/conf.d/dev-frontend.conf                    HTTP proxy
Frontend   /etc/nginx/conf.d/dev-frontend-ssl.conf                HTTPS proxy
Frontend   /home/jelastic/ROOT/ecosystem.config.cjs               PM2 config
Frontend   /home/jelastic/ROOT/.env                               Application environment

7. Virtuozzo Platform Reference

Official Documentation

  • Virtuozzo Application Platform: https://www.virtuozzo.com/application-platform-docs/
  • Python Center: https://www.virtuozzo.com/application-platform-docs/python-center/
  • Node.js Documentation: https://www.virtuozzo.com/application-management-docs/nodejs/
  • mod_wsgi Documentation: https://modwsgi.readthedocs.io/

Platform-Specific Notes

  1. Virtual Environment Required: Virtuozzo expects the Python virtual environment at /var/www/webroot/virtenv/
  2. PyO3 Subinterpreter Issue: Must use WSGIApplicationGroup %{GLOBAL} for Pydantic
  3. SSL Certificate Locations:
     • Backend: /var/lib/jelastic/SSL/
     • Frontend: /var/lib/jelastic/keys/
  4. SFTP Preferred: Use SFTP for file transfers to avoid shell output interference

Changelog

Date         Version   Changes
2026-03-16   2.1.0     Added comprehensive Supervisor/Celery documentation with troubleshooting
2026-03-16   2.0.0     Complete rewrite with actual server configurations
2026-03-16   1.0.0     Initial documentation creation

Maintained by: MBPanel Engineering Team
Contact: eng@mbpanel.io