# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
MCP Database Server is a WebSocket/SSE-based database tooling service that exposes database operations through the Model Context Protocol (MCP). It allows AI clients to interact with multiple databases (PostgreSQL and SQL Server) through a unified, authenticated interface.
## Background and Goals
Problem Statement: The original MCP implementation used STDIO transport, requiring each AI client to install and maintain a local copy of the codebase. This caused maintenance overhead and made it difficult to share database access across multiple clients or hosts.
Solution: A long-running server that:
- Runs as a daemon/container accessible from multiple AI clients
- Supports remote transports (WebSocket and SSE)
- Provides centralized authentication and audit logging
- Enables multi-database/multi-schema access through a single endpoint
- Eliminates per-client installation requirements
Key Design Decisions (from v1.0.0):
- Transport Layer: The MCP SDK lacks server-side WebSocket support, so we implemented custom `WebSocketServerTransport` and `SSEServerTransport` classes
- Multi-Schema Access: A single configuration supports multiple databases with different schemas, accessible via the `environment` parameter
- Authentication: Token-based (Bearer) authentication by default; mTLS support is reserved for the future
- Concurrency Model: Per-client session isolation with independent connection pools
- Code Separation: Complete separation from original STDIO-based codebase; this is a standalone server implementation
- Database Driver Abstraction (v1.0.2): DatabaseDriver interface allows pluggable database backends (PostgreSQL, SQL Server)
- Server Mode (v1.0.3): Single server-level configuration with dynamic database discovery, eliminating per-database environment configuration
## Build and Development Commands

### Build

```bash
npm run build
```

Compiles TypeScript to JavaScript in the `dist/` directory.
### Start Server

```bash
# Production
npm start

# Development (with hot reload)
npm run dev
```
### Generate Authentication Token

```bash
node scripts/generate-token.js
```

Generates a secure 64-character hex token for Bearer authentication.
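The script boils down to a single crypto call: 32 random bytes rendered as 64 hex characters, as described in "Security and Authentication" below. A minimal TypeScript sketch (the function name is illustrative, not the script's actual export):

```typescript
import { randomBytes } from "node:crypto";

// Generate a 256-bit random token rendered as a 64-character hex string,
// matching the format the server expects for Bearer authentication.
export function generateToken(): string {
  return randomBytes(32).toString("hex");
}
```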
### Test Database Connection

```bash
npx tsx scripts/test-connection.ts
```
**IMPORTANT**: After making any code changes, always update `changelog.json` with the version number, date, and a description of the changes. See the "Version History and Roadmap" section for details.
## Architecture

### Core Layers
The codebase is organized into distinct layers:
- `server.ts` - Main entry point that:
  - Loads and validates configuration
  - Initializes the UnifiedServerManager (handles both WebSocket and SSE)
  - Creates the PostgresMcp instance for database operations
  - Manages session lifecycle and graceful shutdown

- `transport/` - Multi-transport support:
  - `unified-server.ts`: Single HTTP server handling both WebSocket and SSE transports
  - `websocket-server-transport.ts`: WebSocket client transport implementation
  - `sse-server-transport.ts`: Server-Sent Events client transport
  - Both transports share the same authentication and session management

- `drivers/` - Database driver abstraction layer (v1.0.2+):
  - `database-driver.ts`: DatabaseDriver interface (60+ methods)
  - `driver-factory.ts`: Factory function to create drivers by type
  - `postgres/postgres-driver.ts`: PostgreSQL driver implementation
  - `sqlserver/sqlserver-driver.ts`: SQL Server driver implementation
  - Enables multi-database support with a unified API

- `core/` - Database abstraction layer (DatabaseMcp):
  - `connection-manager.ts`: Pool management for multiple database environments
  - `query-runner.ts`: SQL query execution with schema path handling
  - `transaction-manager.ts`: Transaction lifecycle (BEGIN/COMMIT/ROLLBACK)
  - `metadata-browser.ts`: Schema introspection (tables, views, functions, etc.)
  - `bulk-helpers.ts`: Batch insert operations
  - `diagnostics.ts`: Query analysis and performance diagnostics

- `tools/` - MCP tool registration:
  - Each file (`metadata.ts`, `query.ts`, `data.ts`, `diagnostics.ts`) registers a group of MCP tools
  - Tools use zod schemas for input validation
  - Tools delegate to DatabaseMcp core methods

- `session/` - Session management:
  - Per-client session tracking with unique session IDs
  - Transaction-to-session binding (transactions are bound to the session's client)
  - Query concurrency limits per session
  - Automatic stale session cleanup

- `config/` - Configuration system:
  - Supports JSON configuration files with environment variable resolution (`ENV:VAR_NAME` syntax)
  - Three-tier override: config file → environment variables → CLI arguments
  - Validation using zod schemas
  - Multiple database environments per server
  - Supports both PostgreSQL and SQL Server environments

- `auth/` - Authentication:
  - Token-based authentication (Bearer tokens in the WebSocket/SSE handshake)
  - Verification occurs at connection time (both WebSocket upgrade and SSE endpoint)

- `audit/` - Audit logging:
  - JSON Lines format for structured logging
  - SQL parameter redaction for security
  - Configurable output (stdout or file)

- `health/` - Health monitoring:
  - `/health` endpoint provides server status and per-environment connection status
  - Includes active connection counts and pool statistics

- `changelog/` - Version tracking:
  - `/changelog` endpoint exposes version history without authentication
  - Version information is automatically synced from `changelog.json`
  - Used for tracking system updates and changes
### Key Design Patterns
Environment Isolation: Each configured database "environment" (e.g., "drworks", "ipworkstation") has:
- Isolated connection pool
- Independent schema search paths
- Separate permission modes (readonly/readwrite/ddl)
- Per-environment query timeouts
Session-Transaction Binding:
- When `pg_begin_transaction` is called, a dedicated database client is bound to that session
- All subsequent queries in that session use the same client until commit/rollback
- Sessions are automatically cleaned up on disconnect or timeout
- This prevents transaction leaks across different AI clients
Schema Path Resolution:
- Tools accept an optional `schema` parameter
- Resolution order: tool parameter → environment defaultSchema → environment searchPath
- The search path is set per-query using PostgreSQL's `SET search_path`
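The resolution order above can be sketched as a small pure function. This is an illustrative reading of the rule, not the actual `query-runner.ts` implementation; names and the `public` fallback are assumptions:

```typescript
interface EnvConfig {
  defaultSchema?: string;
  searchPath?: string[];
}

// Resolve the effective search path for a query:
// explicit tool `schema` parameter → environment defaultSchema → environment searchPath.
export function resolveSearchPath(env: EnvConfig, toolSchema?: string): string[] {
  if (toolSchema) return [toolSchema];
  if (env.defaultSchema) return [env.defaultSchema];
  return env.searchPath ?? ["public"]; // fallback is an assumption
}
```

The resulting list would then be applied per-query via `SET search_path`.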
Unified Transport Architecture:
- A single HTTP server handles both WebSocket (upgrade requests) and SSE (`GET /sse`)
- Transport-agnostic MCP server implementation
- Both transports use the same authentication, session management, and tool registration
## Configuration

### Configuration File Structure

Configuration uses `config/database.json` (see `config/database.example.json` for a template).

PostgreSQL Environment Example:
```json
{
  "server": {
    "listen": { "host": "0.0.0.0", "port": 7700 },
    "auth": { "type": "token", "token": "ENV:MCP_AUTH_TOKEN" },
    "allowUnauthenticatedRemote": false,
    "maxConcurrentClients": 50,
    "logLevel": "info"
  },
  "environments": {
    "drworks": {
      "type": "postgres",
      "connection": {
        "host": "localhost",
        "port": 5432,
        "database": "shcis_drworks_cpoe_pg",
        "user": "postgres",
        "password": "ENV:MCP_DRWORKS_PASSWORD",
        "ssl": { "require": true }
      },
      "defaultSchema": "dbo",
      "searchPath": ["dbo", "api", "nurse"],
      "pool": { "max": 10, "idleTimeoutMs": 30000 },
      "statementTimeoutMs": 60000,
      "slowQueryMs": 2000,
      "mode": "readwrite"
    }
  },
  "audit": {
    "enabled": true,
    "output": "stdout",
    "format": "json",
    "redactParams": true,
    "maxSqlLength": 200
  }
}
```
SQL Server Environment Example:
```json
{
  "environments": {
    "sqlserver-dev": {
      "type": "sqlserver",
      "connection": {
        "host": "localhost",
        "port": 1433,
        "database": "MyDatabase",
        "user": "sa",
        "password": "ENV:MSSQL_PASSWORD",
        "encrypt": true,
        "trustServerCertificate": true
      },
      "defaultSchema": "dbo",
      "pool": { "max": 10, "idleTimeoutMs": 30000 },
      "statementTimeoutMs": 60000,
      "mode": "readwrite"
    }
  }
}
```
Server Mode Example (v1.0.3+):
Server mode allows configuring a single database server connection and dynamically accessing any database on that server, eliminating the need to configure each database as a separate environment.
```json
{
  "environments": {
    "pg-server": {
      "type": "postgres",
      "serverMode": true,
      "connection": {
        "host": "47.99.124.43",
        "port": 5432,
        "user": "postgres",
        "password": "ENV:MCP_PG_PASSWORD",
        "ssl": { "require": true }
      },
      "defaultSchema": "dbo",
      "pool": { "max": 100 },
      "mode": "ddl"
    }
  }
}
```
Key differences in server mode:
- `serverMode: true` - Enables server-level configuration
- The `database` field is optional - not required in the connection config
- All tools accept an optional `database` parameter to specify the target database dynamically
### Configuration Fields

**server** - Global server settings:
- `listen.host`/`listen.port`: Listen address (default: 0.0.0.0:7700)
- `auth.type`: Authentication type (token|mtls|none)
- `auth.token`: Bearer token value (supports the `ENV:` prefix)
- `allowUnauthenticatedRemote`: Allow listening on non-localhost without auth (default: false; use with caution)
- `maxConcurrentClients`: Max WebSocket connections (default: 50)
- `logLevel`: Log level (debug|info|warn|error)
**environments** - Database connection configurations:
- Each environment is an isolated connection pool with a unique name
- `type`: Database type (postgres|sqlserver)
- `serverMode`: Enable server-level connection (v1.0.3+, default: false)
  - When `true`, the `database` field becomes optional
  - Connection pools are created dynamically per `(envName, database)` pair
- For PostgreSQL:
  - `connection.ssl`: SSL configuration (`{ "require": true }` or `false`)
  - `searchPath`: Array of schemas for PostgreSQL search_path
- For SQL Server:
  - `connection.encrypt`: Enable encryption (default: true)
  - `connection.trustServerCertificate`: Trust self-signed certs (default: false)
- Common fields:
  - `defaultSchema`: Default schema when not specified in tool calls
  - `pool.max`: Max connections in the pool (default: 10)
  - `pool.idleTimeoutMs`: Idle connection timeout (default: 30000)
  - `statementTimeoutMs`: Query timeout (default: 60000)
  - `slowQueryMs`: Slow query threshold for warnings (default: 2000)
  - `mode`: Permission mode (readonly|readwrite|ddl)
**audit** - Audit logging configuration:
- `enabled`: Enable audit logging (default: true)
- `output`: Output destination (`stdout` or a file path)
- `format`: Log format (`json` recommended)
- `redactParams`: Redact SQL parameters (default: true)
- `maxSqlLength`: Max SQL preview length (default: 200)
### Environment Variable Resolution

Environment variables can be referenced using the `ENV:VAR_NAME` syntax:

```json
{
  "password": "ENV:MCP_DRWORKS_PASSWORD"
}
```
Naming Convention: `MCP_<ENVIRONMENT>_<FIELD>`
- Environment names are uppercased
- Non-alphanumeric characters become underscores
- Example: `drworks` → `MCP_DRWORKS_PASSWORD`
- Nested fields use a double underscore: `ssl.ca` → `MCP_DRWORKS_SSL__CA`
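The naming convention above can be expressed as a small helper. This is a sketch of the rule, not the configuration system's actual code; the function name is hypothetical:

```typescript
// Derive the conventional variable name MCP_<ENVIRONMENT>_<FIELD>:
// uppercase both parts, map non-alphanumerics in the environment name to
// underscores, and join nested field segments with a double underscore.
export function envVarName(environment: string, field: string): string {
  const env = environment.toUpperCase().replace(/[^A-Z0-9]/g, "_");
  const fld = field.toUpperCase().replace(/\./g, "__");
  return `MCP_${env}_${fld}`;
}
```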
Priority order (highest to lowest):
1. CLI arguments (`--auth-token`, `--listen`, `--log-level`)
2. Environment variables (`MCP_AUTH_TOKEN`, `MCP_LISTEN`, `MCP_LOG_LEVEL`)
3. Configuration file
### Multi-Schema Access Examples
Scenario 1: Use default schema
```javascript
// Uses the environment's defaultSchema ("dbo")
await client.callTool('pg_list_tables', {
  environment: 'drworks'
});
```
Scenario 2: Switch to specific schema
```javascript
// Temporarily use the "api" schema
await client.callTool('pg_list_tables', {
  environment: 'drworks',
  schema: 'api'
});
```
Scenario 3: Custom search path
```javascript
// Query with a custom search-path priority
await client.callTool('pg_query', {
  environment: 'drworks',
  searchPath: ['nurse', 'dbo', 'api'],
  query: 'SELECT * FROM patient_info' // Searches nurse → dbo → api
});
```
### Server Mode and Dynamic Database Discovery (v1.0.3+)
Server mode eliminates the need to configure each database as a separate environment. Instead, configure a single server-level connection and dynamically specify the database at query time.
Scenario 1: Discover all databases on server
```javascript
// List all databases on the server
const databases = await client.callTool('pg_discover_databases', {
  environment: 'pg-server'
});
// Returns: ["shcis_drworks_cpoe_pg", "shcis_ipworkstation", "postgres", ...]
```
Scenario 2: Query specific database dynamically
```javascript
// List tables in a specific database
await client.callTool('pg_list_tables', {
  environment: 'pg-server',
  database: 'shcis_drworks_cpoe_pg',
  schema: 'dbo'
});

// Execute a query in a specific database
await client.callTool('pg_query', {
  environment: 'pg-server',
  database: 'shcis_ipworkstation',
  sql: 'SELECT * FROM users LIMIT 10'
});
```
Scenario 3: Mix traditional and server mode environments
```json
{
  "environments": {
    "prod-drworks": {
      "type": "postgres",
      "connection": {
        "host": "prod-db.example.com",
        "database": "shcis_drworks_cpoe_pg",
        "user": "readonly_user",
        "password": "ENV:PROD_PASSWORD"
      },
      "mode": "readonly"
    },
    "dev-server": {
      "type": "postgres",
      "serverMode": true,
      "connection": {
        "host": "dev-db.example.com",
        "user": "admin",
        "password": "ENV:DEV_PASSWORD"
      },
      "mode": "ddl"
    }
  }
}
```
Connection Pool Behavior in Server Mode:
- Pools are cached by the composite key `envName:database`
- The first query to a database creates a new pool
- Subsequent queries to the same database reuse the cached pool
- When no database is specified, defaults to `postgres` (PostgreSQL) or `master` (SQL Server)
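The caching behavior above can be sketched with a map keyed by `envName:database`. This is a simplified illustration, not the real ConnectionManager; `Pool` here is a placeholder for `pg.Pool` / `mssql.ConnectionPool`:

```typescript
type Pool = { database: string }; // placeholder for a real connection pool

const poolCache = new Map<string, Pool>();

// Lazily create and cache a pool per (envName, database) pair.
export function getPool(
  envName: string,
  database?: string,
  dbType: "postgres" | "sqlserver" = "postgres"
): Pool {
  // Default database when none is given: postgres / master per driver.
  const db = database ?? (dbType === "postgres" ? "postgres" : "master");
  const key = `${envName}:${db}`;
  let pool = poolCache.get(key);
  if (!pool) {
    pool = { database: db }; // real code would open a connection pool here
    poolCache.set(key, pool);
  }
  return pool;
}
```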
## Key Implementation Notes

### Security and Authentication

Token Authentication (default in v1.0.0):
- Uses Bearer tokens in the HTTP Authorization header during the WebSocket/SSE handshake
- Token format: 64-character hex string (256-bit random, generated via `crypto.randomBytes(32)`)
- Token verification happens at connection time via the `verifyClient` hook
- Failed authentication returns HTTP 401 or WebSocket close code 1008
- The token can be configured via: CLI args → env vars → config file
Token Transmission:

```
GET / HTTP/1.1
Upgrade: websocket
Authorization: Bearer <64-char-token>
```
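A header check of this kind might look like the sketch below. This is not the server's actual `verifyClient` hook, just an illustration; using a constant-time comparison avoids leaking token prefixes through timing:

```typescript
import { timingSafeEqual } from "node:crypto";

// Validate an Authorization header of the form "Bearer <token>" against
// the configured token, comparing in constant time.
export function verifyAuthHeader(
  header: string | undefined,
  expectedToken: string
): boolean {
  if (!header?.startsWith("Bearer ")) return false;
  const presented = Buffer.from(header.slice("Bearer ".length));
  const expected = Buffer.from(expectedToken);
  // timingSafeEqual throws on unequal lengths, so guard first.
  return presented.length === expected.length && timingSafeEqual(presented, expected);
}
```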
Security Best Practices:
- Always use `wss://` (WebSocket over TLS) in production
- Never log tokens in audit logs (only the first 8 characters, as clientId)
- Rotate tokens periodically
- Use environment variables for token storage, not config files
- Enable SSL for database connections (`ssl.require: true`)
- Use `mode: "readonly"` for read-only access scenarios
mTLS Support (reserved for the future):
- Mutual TLS authentication via client certificates
- Configured via `auth.type: "mtls"` with CA/cert/key paths
### Audit Logging
Format: JSON Lines (one JSON object per line)
```json
{
  "timestamp": "2025-12-23T10:30:45.123Z",
  "level": "audit",
  "sessionId": "550e8400-e29b-41d4-a716-446655440000",
  "clientId": "abc12345",
  "environment": "drworks",
  "tool": "pg_query",
  "sqlHash": "a3b2c1d4",
  "sqlPreview": "SELECT * FROM dbo.users WHERE id = $1 LIMIT 100",
  "params": "[REDACTED]",
  "durationMs": 234,
  "rowCount": 15,
  "status": "success"
}
```
Privacy Protection:
- SQL parameters are never logged (always `"[REDACTED]"`)
- The SQL preview is truncated to `maxSqlLength` (default: 200 chars)
- String literals in SQL are replaced with `[STRING]`, numbers with `[NUMBER]`
- Error messages are sanitized to remove potential data leaks
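The literal-redaction and truncation steps can be sketched as below. The regexes are a simplification of whatever the real sanitizer does (for instance, `$1`-style parameter placeholders are deliberately left intact, as in the audit example above):

```typescript
// Build a redacted SQL preview: string literals become [STRING], numeric
// literals become [NUMBER], and the result is truncated to maxSqlLength.
export function sqlPreview(sql: string, maxSqlLength = 200): string {
  const redacted = sql
    .replace(/'(?:[^']|'')*'/g, "[STRING]")        // quoted string literals
    .replace(/(?<!\$)\b\d+(\.\d+)?\b/g, "[NUMBER]"); // numbers, but not $1-style params
  return redacted.length > maxSqlLength ? redacted.slice(0, maxSqlLength) : redacted;
}
```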
Slow Query Warnings:
- Queries exceeding `slowQueryMs` (default: 2000ms) trigger warning logs
- Includes the `sqlHash` for correlation with audit logs
- Used for performance monitoring without exposing full queries
### Concurrency and Session Management

Architecture:

```
WebSocket Connection → Session (UUID) → Query Queue → Connection Pool
                            ↓
                 Transaction Client (if active)
```
Concurrency Limits:
- Global: `maxConcurrentClients` (default: 50) - max WebSocket connections
- Per-Session: `maxQueriesPerSession` (default: 5) - concurrent queries per client
- Per-Environment: `pool.max` (default: 10) - database connections per environment
Session Isolation:
- Each WebSocket connection gets a unique `sessionId` (UUID v4)
- Sessions track an `activeQueries` count and enforce per-session limits
- Sessions automatically time out after `sessionTimeout` (default: 1 hour)
- Stale sessions are cleaned up every 60 seconds
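The periodic cleanup can be pictured as a sweep over the session map. This is an illustrative sketch, not the SessionManager's actual code; in the real server the sweep would run on a 60-second timer and also roll back any bound transaction:

```typescript
interface Session {
  id: string;
  lastActivity: number; // epoch milliseconds
}

// Remove sessions idle longer than sessionTimeoutMs (default: 1 hour)
// and return the ids that were swept.
export function sweepStaleSessions(
  sessions: Map<string, Session>,
  now: number,
  sessionTimeoutMs = 60 * 60 * 1000
): string[] {
  const removed: string[] = [];
  for (const [id, s] of sessions) {
    if (now - s.lastActivity > sessionTimeoutMs) {
      sessions.delete(id); // real code would also ROLLBACK a bound transaction
      removed.push(id);
    }
  }
  return removed;
}
```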
Transaction Binding:
- `pg_begin_transaction` acquires a dedicated client from the pool
- This client is stored in `session.transactionClient` and bound to the session
- All subsequent queries in that environment use this client (not the pool)
- `pg_commit_transaction` or `pg_rollback_transaction` releases the client
- If the client disconnects during a transaction, an automatic ROLLBACK occurs
- This prevents transaction state leaks across different AI clients
Implementation Detail (`src/session/session-manager.ts:80-120`):

```typescript
// Transaction begins
session.transactionClient = await pool.connect();
session.transactionEnv = environmentName;

// Subsequent queries route to the transaction client
if (session.transactionClient && session.transactionEnv === env) {
  return session.transactionClient.query(sql);
}

// On disconnect or timeout
if (session.transactionClient) {
  await session.transactionClient.query('ROLLBACK');
  session.transactionClient.release();
}
```
Transaction Safety:
- The TransactionManager stores transaction clients in a WeakMap keyed by session ID
- On disconnect, the SessionManager automatically rolls back active transactions
- Never use `pool.query()` for operations within a transaction; always use the session-bound client
Connection Pool Lifecycle:
- Pools are lazily created on first use (the `getPool` method)
- Each pool has configurable max connections, idle timeout, and statement timeout
- Graceful shutdown closes all pools via `PostgresConnectionManager.closeAll()`
MCP Tool Registration:
- Tools are registered in `tools/index.ts` by calling registration functions from each tool category
- Each tool must have a unique name (prefixed with `pg_`)
- Tool schemas use zod for validation; the MCP SDK handles schema conversion
Error Handling:
- Database errors are caught and returned as MCP error responses
- The server never crashes on query errors; only fatal startup errors exit the process
- Audit logger sanitizes SQL and redacts parameters before logging
## Client Configuration

### MCP Client Setup

AI clients (Claude Code, Cursor, etc.) connect via MCP client configuration:

```json
{
  "mcpServers": {
    "database": {
      "transport": "websocket",
      "endpoint": "ws://localhost:7700",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}
```
For SSE transport (added in v1.0.1):

```json
{
  "mcpServers": {
    "database": {
      "url": "http://localhost:7700/sse",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}
```
### Available MCP Tools

The server exposes 30+ PostgreSQL tools, grouped by category:

Metadata Tools:
- `pg_list_environments` - List configured environments
- `pg_discover_databases` - List all databases on the server (v1.0.3+, for serverMode environments)
- `pg_list_schemas` - List schemas in an environment
- `pg_list_tables` - List tables in a schema
- `pg_describe_table` - Get table structure (columns, types, constraints)
- `pg_list_views` - List views
- `pg_list_functions` - List functions
- `pg_list_indexes` - List indexes
- `pg_list_constraints` - List constraints
- `pg_list_triggers` - List triggers
Query Tools:
- `pg_query` - Execute a read-only SELECT query
- `pg_explain` - Get the query execution plan (EXPLAIN)

Data Manipulation Tools:
- `pg_insert` - Insert a single row
- `pg_update` - Update rows
- `pg_delete` - Delete rows
- `pg_upsert` - Insert or update (ON CONFLICT)
- `pg_bulk_insert` - Batch insert multiple rows

Transaction Tools:
- `pg_begin_transaction` - Start a transaction
- `pg_commit_transaction` - Commit the current transaction
- `pg_rollback_transaction` - Roll back the current transaction

Diagnostic Tools:
- `pg_analyze_query` - Analyze query performance
- `pg_check_connection` - Verify database connectivity
All tools require the `environment` parameter to specify which database to use. In server mode (v1.0.3+), tools also accept an optional `database` parameter to dynamically target a specific database.
## Version History and Roadmap

### Changelog Maintenance (IMPORTANT)

Every code change MUST be documented in the changelog. This is a critical project requirement.

#### How to Update Changelog
When making any code changes:
1. Update `changelog.json` in the project root:
   - Increment the version number following semantic versioning
   - Use the format `major.minor.patch` or `major.minor.patch-buildnumber` (e.g., `1.0.1.03`)
   - Add/update the version entry with:
     - `version`: New version number
     - `date`: Current date (YYYY-MM-DD format)
     - `description`: Brief summary of changes (Chinese or English)
     - `changes`: Array of specific changes made
2. The version number is auto-synced:
   - The server automatically reads `currentVersion` from `changelog.json`
   - No need to manually update `package.json` or `server.ts`
   - The `/changelog` endpoint exposes the full version history
3. Example changelog entry:

```json
{
  "version": "1.0.1.04",
  "date": "2024-12-25",
  "description": "Add new features",
  "changes": [
    "Added XXX feature",
    "Fixed YYY bug",
    "Optimized ZZZ performance"
  ]
}
```
4. View the changelog:

```bash
# Via the HTTP endpoint (no authentication required)
curl http://localhost:7700/changelog

# Or check the file directly
cat changelog.json
```
Remember: Documentation is as important as code. Always update the changelog before committing!
### Version History

#### v1.0.0 (Initial Release)
- WebSocket transport with a custom `WebSocketServerTransport` implementation
- Token-based authentication
- Multi-environment configuration with per-environment connection pools
- Multi-schema access via `defaultSchema` and `searchPath`
- Session-based transaction management
- Audit logging with SQL parameter redaction
- Health check endpoint
- Docker support with graceful shutdown
- 30+ PostgreSQL MCP tools
#### v1.0.1 (2024-12-21)
- Added SSE transport to support clients without WebSocket (e.g., cursor-browser-extension)
- The unified server now handles both WebSocket and SSE on the same port
- SSE endpoints: `GET /sse` for the stream, `POST /messages` for client messages
- Backward compatible with v1.0.0 WebSocket clients
#### v1.0.1.01 (2024-12-22)
- Fixed a connection pool leak issue
- Fixed SSE disconnect/reconnect logic
- Improved error handling
#### v1.0.1.02 (2024-12-23)
- Fixed `ssl.require: false` configuration not taking effect
- Improved SSL configuration validation logic
- Updated documentation for SSL configuration
#### v1.0.1.03 (2024-12-24)
- Added the `allowUnauthenticatedRemote` configuration option
- Allow explicit enabling of unauthenticated remote access in trusted networks
- Improved security validation error messages
- New `/changelog` endpoint to view version update history (no authentication required)
#### v1.0.2 (2024-12-27)
- Multi-database support: Added SQL Server alongside PostgreSQL
- Database Driver Abstraction: New `DatabaseDriver` interface (60+ methods)
- New driver implementations:
  - `PostgresDriver`: PostgreSQL using the `pg` library
  - `SqlServerDriver`: SQL Server using the `mssql` library
- SQL Server features:
  - Connection pooling via the mssql ConnectionPool
  - Parameter placeholders: `@p1, @p2, ...`
  - Identifier quoting: `[name]`
  - OFFSET/FETCH pagination (SQL Server 2012+)
  - MERGE statement for UPSERT
  - OUTPUT clause for returning data
  - `sys.*` system tables for metadata
- Configuration supports `type: "postgres"` or `type: "sqlserver"`
- Factory functions: `createDatabaseMcp()`, `createDriver()`
- Convenience classes: `PostgresMcp`, `SqlServerMcp`
- Unit tests for both drivers (99+ tests)
#### v1.0.3 (2024-12-28)
- Server Mode Support: Dynamic database discovery without per-database configuration
- New `serverMode` configuration option for server-level connections
- The `database` field is now optional in the connection config when `serverMode: true`
- All MCP tools accept an optional `database` parameter for dynamic database switching
- New `pg_discover_databases` tool to list all databases on a server
- ConnectionManager supports dynamic connection pools keyed by `(envName, database)`
- PostgreSQL driver: `buildListDatabasesQuery()` using the `pg_database` catalog
- SQL Server driver: `buildListDatabasesQuery()` using `sys.databases`
- Configuration validation: `database` is required unless `serverMode` is enabled
- Backward compatible: existing configurations work without modification
### Future Roadmap
- Multi-database support (SQL Server, MySQL adapters) ✅ Completed in v1.0.2
- mTLS authentication implementation
- RBAC (role-based access control) for fine-grained permissions
- Rate limiting and quota management per client
- Configuration hot reload
- Metrics and Prometheus export
## Testing Notes

The project uses Vitest for unit testing. Tests are located in `src/__tests__/`:
- `postgres-driver.test.ts` - PostgreSQL driver unit tests (40 tests)
- `sqlserver-driver.test.ts` - SQL Server driver unit tests (41 tests)
- `config.test.ts` - Configuration type and factory tests (18 tests)
Running tests:
```bash
npm test               # Run all tests once
npm run test:watch     # Run tests in watch mode
npm run test:coverage  # Run tests with coverage report
```
Test coverage goals:
- Driver methods (`quoteIdentifier`, `buildXxxQuery`, `mapToGenericType`, etc.)
- Configuration type guards (`isPostgresConfig`, `isSqlServerConfig`)
- Factory functions (`createDriver`, `createDatabaseMcp`)
## Deployment and Operations

### Docker Deployment

Build Image:

```bash
docker build -t mcp-database-server:1.0.3 .
```
Run Container:

```bash
docker run -d \
  --name mcp-database-server \
  -p 7700:7700 \
  -v $(pwd)/config/database.json:/app/config/database.json:ro \
  -e MCP_AUTH_TOKEN=your-token \
  -e MCP_DRWORKS_PASSWORD=your-password \
  mcp-database-server:1.0.3
```
Docker Compose:

```bash
docker compose up -d
```
### Health Check

```bash
curl http://localhost:7700/health
```
Response format:
```json
{
  "status": "ok",
  "uptime": 3600,
  "version": "1.0.0",
  "clients": 5,
  "environments": [
    {
      "name": "drworks",
      "status": "connected",
      "poolSize": 10,
      "activeConnections": 2
    }
  ],
  "timestamp": "2025-12-23T10:30:00.000Z"
}
```
Status values: `ok` (all connected) | `degraded` (some disconnected) | `error` (critical failure)
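The aggregation rule can be sketched as below. Mapping "no environment connected" to `error` is an assumption on our part; the doc only says `error` means a critical failure:

```typescript
type EnvStatus = "connected" | "disconnected";

// Aggregate per-environment connection statuses into the top-level
// health value: ok when all are connected, degraded when only some are,
// error when none are (assumed interpretation of "critical failure").
export function overallStatus(envs: EnvStatus[]): "ok" | "degraded" | "error" {
  const connected = envs.filter((s) => s === "connected").length;
  if (connected === envs.length) return "ok";
  if (connected > 0) return "degraded";
  return "error";
}
```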
### Changelog Endpoint

View version history and system updates:

```bash
curl http://localhost:7700/changelog
```
Response format:
```json
{
  "currentVersion": "1.0.1.03",
  "changelog": [
    {
      "version": "1.0.1.03",
      "date": "2024-12-24",
      "description": "Enhanced security configuration flexibility and changelog features",
      "changes": [
        "Added the allowUnauthenticatedRemote configuration option",
        "Allow explicit enabling of unauthenticated remote access in trusted networks",
        "Improved security validation error messages",
        "Added the /changelog endpoint for viewing version update history"
      ]
    }
  ]
}
```
Note: This endpoint does not require authentication and can be accessed publicly.
### Graceful Shutdown
The server handles SIGTERM and SIGINT signals:
- Stops accepting new connections
- Rolls back active transactions
- Closes all sessions
- Closes database connection pools
- Exits cleanly
```bash
# Docker
docker stop mcp-database-server

# Direct process
kill -TERM $(pgrep -f "node dist/src/server.js")
```
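The shutdown sequence above can be sketched as an ordered async pipeline wired to SIGTERM/SIGINT. The `Managers` interface and its method names are illustrative stand-ins, not the server's actual API:

```typescript
interface Managers {
  stopAccepting(): Promise<void>;            // stop taking new connections
  rollbackActiveTransactions(): Promise<void>;
  closeAllSessions(): Promise<void>;
  closeAllPools(): Promise<void>;            // e.g. ConnectionManager.closeAll()
}

// Run the shutdown steps in the documented order, then exit cleanly.
export async function gracefulShutdown(
  m: Managers,
  exit: (code: number) => void = process.exit
): Promise<void> {
  await m.stopAccepting();
  await m.rollbackActiveTransactions();
  await m.closeAllSessions();
  await m.closeAllPools();
  exit(0);
}

// Install the signal handlers once per signal.
export function installSignalHandlers(m: Managers): void {
  for (const sig of ["SIGTERM", "SIGINT"] as const) {
    process.once(sig, () => void gracefulShutdown(m));
  }
}
```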
### Command Line Options

```
node dist/src/server.js [options]

Options:
  --config <path>        Configuration file path (default: ./config/database.json)
  --listen <host:port>   Listen address (overrides config)
  --auth-token <token>   Auth token (overrides config)
  --log-level <level>    Log level (overrides config: debug/info/warn/error)
```
Environment variables override the configuration file:
- `MCP_CONFIG` - Configuration file path
- `MCP_LISTEN` - Listen address (host:port)
- `MCP_AUTH_TOKEN` - Authentication token
- `MCP_LOG_LEVEL` - Log level
- `MCP_<ENV>_PASSWORD` - Database password for an environment