Sending bulk SMS messages – whether for marketing campaigns, notifications, or alerts – requires more than just a simple loop calling an API. A production-ready system needs to handle potential failures, manage rate limits, provide status tracking, and scale reliably.
This guide details how to build such a system using Node.js with the high-performance Fastify framework, leveraging the Sinch SMS API for message delivery and Redis with BullMQ for robust background job queuing.
What we will build:
- A Fastify API endpoint to accept bulk SMS requests (list of recipients and a message body).
- A background job queue (using BullMQ and Redis) to process each SMS message individually, ensuring reliability and resilience.
- A worker process that picks up jobs from the queue and sends SMS messages via the Sinch API.
- Database persistence (using Prisma and PostgreSQL) to track the status of each message and the overall batch.
- An API endpoint to check the status of a submitted bulk messaging batch.
- Robust error handling, retries, logging, and basic security measures.
Core Technologies:
- Node.js: The JavaScript runtime environment.
- Fastify: A high-performance, low-overhead web framework for Node.js. Chosen for its speed, extensibility, and focus on developer experience.
- Sinch SMS API: The third-party service used to send SMS messages. We'll interact with its REST API.
- Redis: An in-memory data structure store, used here as the backend for our message queue.
- BullMQ: A robust, fast, and reliable Redis-based queue system for Node.js. Essential for handling background jobs like sending individual SMS messages.
- Prisma: A modern database toolkit for Node.js and TypeScript, simplifying database access, migrations, and type safety. Used here with PostgreSQL.
- PostgreSQL: A powerful, open-source object-relational database system.
- Axios: A promise-based HTTP client for making requests to the Sinch API.
System Architecture:
+-----------+     +------------------+     +--------------+     +------------------+     +-------------+
|  Client   |---->|   Fastify API    |---->|  Redis Queue |---->|  Node.js Worker  |---->|  Sinch API  |
| (Browser/ |     | (Receives bulk   |     |   (BullMQ)   |     | (Processes jobs) |     | (Sends SMS) |
|   App)    |     |  request, queues |     +--------------+     +------------------+     +-------------+
+-----------+     |  individual jobs)|                                    |
                  +------------------+                                    |
                        |      ^                                          |
          check status  |      |                            update status |
                        v      |                                          v
                  +----------------------------------------------------------------------+
                  |                     PostgreSQL Database (Prisma)                     |
                  |        (Stores batch info, individual message status, errors)        |
                  +----------------------------------------------------------------------+
Prerequisites:
- Node.js (v18 or later recommended) and npm/yarn.
- Access to a terminal or command prompt.
- A Sinch account with SMS API credentials (Service Plan ID, API Token) and a configured Sinch phone number.
- Docker installed and running (for easily running Redis and PostgreSQL locally) or access to separate Redis and PostgreSQL instances.
- Basic understanding of Node.js, asynchronous programming, REST APIs, and databases.
- curl or a tool like Postman for testing the API.
Final Outcome:
By the end of this guide, you will have a scalable and reliable Node.js application capable of accepting requests to send thousands of SMS messages, processing them reliably in the background, handling errors gracefully, and allowing clients to check the status of their bulk sends.
1. Setting up the project
Let's start by initializing our Node.js project and installing the necessary dependencies.
1. Initialize the project:
Open your terminal, create a project directory, and navigate into it.
mkdir fastify-sinch-bulk-sms
cd fastify-sinch-bulk-sms
npm init -y
This creates a basic package.json file.
2. Install dependencies:
We need Fastify for the web server, Axios for HTTP requests, BullMQ for the queue, ioredis as the Redis client for BullMQ, dotenv for environment variables, Prisma for database interaction, and pino-pretty for human-readable logs during development.
npm install fastify axios bullmq ioredis dotenv @prisma/client pino-pretty @fastify/rate-limit @fastify/helmet
npm install --save-dev prisma nodemon
- fastify: The web framework.
- axios: To make requests to the Sinch API.
- bullmq: The job queue library.
- ioredis: Redis client required by BullMQ and used for distributed rate limiting.
- dotenv: To load environment variables from a .env file.
- @prisma/client: Prisma's database client.
- pino-pretty: Makes Fastify's logs readable during development.
- @fastify/rate-limit: Plugin for API rate limiting.
- @fastify/helmet: Plugin for setting security headers.
- prisma (dev): The Prisma CLI for migrations and client generation.
- nodemon (dev): Automatically restarts the server during development when files change.
3. Configure Development Scripts:
Open your package.json and modify the scripts section:
// package.json
{
  // ... other configurations
  "type": "module", // Required: the source files use ES module imports and top-level await
  "scripts": {
    "start": "node src/server.js",
    "dev": "nodemon --watch src src/server.js",
    "worker": "node src/worker.js",
    "dev:worker": "nodemon --watch src src/worker.js",
    "prisma:migrate": "prisma migrate dev",
    "prisma:generate": "prisma generate",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  // ... other configurations
}
- start: Runs the main API server for production.
- dev: Runs the API server in development mode using nodemon. Pretty log output comes from the pino-pretty transport configured in src/server.js for non-production environments.
- worker: Runs the background job worker for production.
- dev:worker: Runs the worker in development mode using nodemon.
- prisma:migrate: Applies database schema changes.
- prisma:generate: Generates the Prisma Client based on your schema.
4. Set up Project Structure:
Create the following directory structure within your project root:
fastify-sinch-bulk-sms/
├── prisma/
├── src/
│ ├── config/
│ ├── lib/
│ ├── routes/
│ ├── workers/
│ ├── server.js
│ └── worker.js
├── .env
├── .gitignore
├── docker-compose.yml
└── package.json
- prisma/: Will contain your database schema and migrations.
- src/: Contains all the application source code.
- src/config/: Configuration files (e.g., queue setup).
- src/lib/: Shared libraries/utilities (e.g., Prisma client, Sinch client).
- src/routes/: Fastify route handlers.
- src/workers/: Background worker logic.
- src/server.js: Entry point for the Fastify API server.
- src/worker.js: Entry point for the BullMQ worker process.
- .env: Stores environment variables (API keys, database URLs, etc.).
- .gitignore: Specifies files/directories to ignore in Git.
- docker-compose.yml: Defines local development services (Postgres, Redis).
5. Create .gitignore:
Create a .gitignore file in the project root to avoid committing sensitive information and unnecessary files:
# .gitignore
node_modules
dist
.env
*.log
# Prisma (commit your migrations to version control; ignore only local SQLite artifacts)
prisma/dev.db*
# OS specific
.DS_Store
Thumbs.db
6. Set up Environment Variables (.env):
Create a .env file in the project root. This file will hold your secrets and configuration. Never commit this file to version control.
# .env
# Sinch API Credentials
# Get these from your Sinch Customer Dashboard under SMS -> APIs
SINCH_SERVICE_PLAN_ID="YOUR_SERVICE_PLAN_ID"
SINCH_API_TOKEN="YOUR_API_TOKEN"
SINCH_NUMBER="YOUR_SINCH_VIRTUAL_NUMBER" # e.g., +12025550100
# Sinch API Base URL - adjust region if needed (e.g., us.sms.api.sinch.com)
# See: https://developers.sinch.com/docs/sms/api-reference/regions/
SINCH_API_BASE_URL="https://us.sms.api.sinch.com"
# Redis Connection URL (for BullMQ & Rate Limiting)
# Example for local Docker Redis: redis://localhost:6379
# Can include auth: redis://:yourpassword@localhost:6379
REDIS_URL="redis://localhost:6379"
# Database Connection URL (for Prisma)
# Example for local Docker PostgreSQL: postgresql://user:password@localhost:5432/mydb?schema=public
DATABASE_URL="postgresql://postgres:password@localhost:5432/bulk_sms_db?schema=public"
# Important Security Note: The example uses 'password' for the database.
# NEVER use default or weak passwords in production. Use strong, unique
# passwords and manage secrets securely (e.g., via environment variables
# injected by your deployment platform or a secrets manager), even locally.
# Server Configuration
PORT=3000
HOST="0.0.0.0"
# Queue Name
SMS_QUEUE_NAME="sms_sending_queue"
How to obtain Sinch Credentials:
- Log in to your Sinch Customer Dashboard.
- Navigate to SMS in the left-hand menu, then select APIs.
- You will find your Service plan ID and API token listed here. Click Show to reveal the API token.
- To find your Sinch number, click the Service plan ID link and scroll down the service plan details page to the phone numbers assigned to your account. Use one of these numbers in E.164 format (e.g., +12xxxxxxxxxx).
- Note the Region your service plan is associated with (e.g., US, EU) and ensure SINCH_API_BASE_URL reflects it.
7. Set up Local Database and Redis (Using Docker):
For local development, Docker Compose is an excellent way to manage dependencies like Redis and PostgreSQL. Create a docker-compose.yml file in the project root:
# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:15
    container_name: bulk_sms_postgres
    environment:
      POSTGRES_DB: bulk_sms_db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password # Match DATABASE_URL password (use a better password!)
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: bulk_sms_redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
Run docker-compose up -d in your terminal to start the database and Redis containers in the background. Ensure your .env file's REDIS_URL and DATABASE_URL match the credentials and ports defined here (and change the default password!).
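Before continuing, it's worth confirming both containers are reachable; for example, using the service names from the compose file above:

docker-compose ps                                    # both services should be "Up"
docker-compose exec redis redis-cli ping             # expect: PONG
docker-compose exec postgres pg_isready -U postgres  # expect: accepting connections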
2. Implementing core functionality (Queuing)
Directly sending SMS messages within the API request handler is inefficient and unreliable for bulk operations. A loop sending messages one by one would block the server, timeout easily, and offer no mechanism for retries or status tracking if the server crashes.
The solution is a background job queue. The API endpoint will quickly validate the request and add individual SMS sending tasks (jobs) to a Redis-backed queue (BullMQ). A separate worker process will then consume these jobs independently.
1. Configure BullMQ Queue:
Create a file to manage the queue instance.
// src/config/queue.js
import { Queue } from 'bullmq';
import { config } from 'dotenv';

config(); // Load .env variables

const redisUrl = process.env.REDIS_URL;
if (!redisUrl) {
  throw new Error('REDIS_URL environment variable is not set.');
}

// Use the URL constructor for robust parsing of the Redis URL
let connection;
try {
  const parsedUrl = new URL(redisUrl);
  connection = {
    host: parsedUrl.hostname || 'localhost',
    port: parseInt(parsedUrl.port || '6379', 10),
    password: parsedUrl.password || undefined, // Handle optional password
    // Note: BullMQ can often accept the URL string directly,
    // but providing the object ensures clarity and compatibility.
  };
} catch (error) {
  console.error(`Invalid REDIS_URL format: ${redisUrl}`, error);
  throw new Error(`Invalid REDIS_URL: ${error.message}`);
}

// Create a reusable queue instance
const smsQueue = new Queue(process.env.SMS_QUEUE_NAME || 'sms_sending_queue', {
  connection, // Use the parsed connection object
  defaultJobOptions: {
    attempts: 3, // Retry failed jobs up to 3 times
    backoff: {
      type: 'exponential', // Exponential backoff strategy
      delay: 1000, // Initial delay of 1 second
    },
    removeOnComplete: true, // Automatically remove jobs when completed
    removeOnFail: 500, // Keep the last 500 failed jobs for debugging
  },
});

console.log(`SMS Queue '${smsQueue.name}' initialized.`);
console.log(`Connected to Redis at ${connection.host}:${connection.port}`);

// Optional: listen for queue-level errors.
// Note: in BullMQ, 'active', 'completed', and 'failed' are emitted by the
// Worker (or observed via a QueueEvents instance - see the sketch after the
// notes below), not by the Queue itself.
smsQueue.on('error', (error) => {
  console.error(`Queue Error: ${error.message}`);
});

export default smsQueue;
- We load environment variables using dotenv.
- We parse the REDIS_URL using the built-in URL constructor, which is more robust than basic string splitting and handles complex URLs (with auth, different ports, etc.).
- We create a Queue instance, naming it based on the SMS_QUEUE_NAME environment variable.
- Crucially, we define defaultJobOptions:
  - attempts: 3: If a job fails (e.g., the Sinch API is down), BullMQ will retry it up to 3 times.
  - backoff: Specifies how long to wait between retries; exponential increases the delay after each failure (1s, 2s, 4s).
  - removeOnComplete: Cleans up successful jobs from Redis to save space.
  - removeOnFail: Keeps a history of the last 500 failed jobs for debugging.
- A queue-level 'error' listener is added for logging; per-job 'completed' and 'failed' events are observed via a Worker or QueueEvents instance, as shown below.
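Because 'completed' and 'failed' are not emitted by the Queue itself, the idiomatic BullMQ way to observe job outcomes from outside the worker is a QueueEvents instance. The following is an optional monitoring sketch; the file name src/config/queueEvents.js is just a suggestion:

// src/config/queueEvents.js (optional monitoring sketch, not required by the app)
import { QueueEvents } from 'bullmq';
import { config } from 'dotenv';

config();

// Reuse the same Redis connection details as the queue itself
const parsedUrl = new URL(process.env.REDIS_URL);
const connection = {
  host: parsedUrl.hostname || 'localhost',
  port: parseInt(parsedUrl.port || '6379', 10),
  password: parsedUrl.password || undefined,
};

// QueueEvents subscribes to the queue's Redis event stream, so it observes
// jobs processed by any worker process, not just this one.
const smsQueueEvents = new QueueEvents(
  process.env.SMS_QUEUE_NAME || 'sms_sending_queue',
  { connection }
);

smsQueueEvents.on('completed', ({ jobId, returnvalue }) => {
  console.log(`Job ${jobId} completed:`, returnvalue);
});

smsQueueEvents.on('failed', ({ jobId, failedReason }) => {
  console.error(`Job ${jobId} failed: ${failedReason}`);
});

export default smsQueueEvents;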
3. Building the API layer (Fastify)
Now, let's create the Fastify server and the API endpoints.
1. Initialize Prisma:
Run the Prisma init command. This creates the prisma directory and a basic schema.prisma file.
npx prisma init --datasource-provider postgresql
Make sure the url in prisma/schema.prisma points to your DATABASE_URL environment variable:
// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL") // Ensures it reads from .env
}

// Models are added in a later section (see the sketch below for what they will contain)
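The route handlers below read and write an SmsBatch model with related SmsMessageJob records. The full schema is introduced in a later section of the guide; as a reference point, a minimal version consistent with the fields used in this section might look like the following (model and field names are inferred from the code, so treat it as a sketch rather than the final schema):

// A minimal sketch of the models the code in this section assumes
model SmsBatch {
  id           String          @id @default(uuid())
  status       String          @default("PENDING") // PENDING | PROCESSING | COMPLETED | COMPLETED_WITH_ERRORS | FAILED
  totalJobs    Int
  messageBody  String
  failedReason String?
  createdAt    DateTime        @default(now())
  updatedAt    DateTime        @updatedAt
  messageJobs  SmsMessageJob[]
}

model SmsMessageJob {
  id           String    @id @default(uuid())
  batchId      String
  batch        SmsBatch  @relation(fields: [batchId], references: [id])
  recipient    String
  status       String    @default("PENDING") // PENDING | ACTIVE | COMPLETED | FAILED
  attemptsMade Int       @default(0)
  failedReason String?
  processedOn  DateTime?
  finishedOn   DateTime?
  createdAt    DateTime  @default(now())
}

After defining the models, run npm run prisma:migrate to create the tables and npm run prisma:generate to regenerate the client.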
2. Set up Prisma Client:
Create a singleton instance of the Prisma client to be reused across the application.
// src/lib/prismaClient.js
import { PrismaClient } from '@prisma/client';

// Initialize Prisma Client; verbose query logging only in development
const prisma = new PrismaClient({
  log: process.env.NODE_ENV === 'development' ? ['query', 'info', 'warn', 'error'] : ['error'],
});

console.log('Prisma Client initialized.');

export default prisma;
3. Create the Sinch Client:
Create a module to encapsulate interactions with the Sinch API.
// src/lib/sinchClient.js
import axios from 'axios';
import { config } from 'dotenv';

config(); // Load .env variables

const sinchApi = axios.create({
  baseURL: process.env.SINCH_API_BASE_URL,
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.SINCH_API_TOKEN}`,
  },
  // Add a timeout for resilience
  timeout: 10000, // 10 seconds
});

/**
 * Sends a single SMS message using the Sinch API.
 * @param {string} to - The recipient phone number in E.164 format (e.g., +12025550100)
 * @param {string} from - The Sinch virtual number in E.164 format
 * @param {string} body - The message content
 * @returns {Promise<object>} - The response data from the Sinch API
 * @throws {Error} - If the API request fails
 */
export const sendSms = async (to, from, body) => {
  const servicePlanId = process.env.SINCH_SERVICE_PLAN_ID;
  const endpoint = `/xms/v1/${servicePlanId}/batches`;

  try {
    console.log(`Attempting to send SMS via Sinch to: ${to}`);
    const response = await sinchApi.post(endpoint, {
      to: [to], // The Sinch batch endpoint expects an array of recipients
      from: from,
      body: body,
      // Optional: request a delivery report
      // delivery_report: 'summary' // or 'full'
    });
    console.log(`Sinch API response for ${to}:`, response.status, response.data);
    // You might want to return specific fields such as the batch id
    return response.data;
  } catch (error) {
    console.error(`Error sending SMS to ${to}:`, error.response?.status, error.response?.data || error.message);
    // Re-throw a structured error with the relevant details attached
    const err = new Error(error.response?.data?.text || `Sinch API error: ${error.message}`);
    err.statusCode = error.response?.status;
    err.details = error.response?.data;
    throw err;
  }
};
- We use axios.create to configure a base URL and default headers (including the crucial Authorization bearer token), plus a timeout for resilience.
- The sendSms function takes the recipient, sender number, and message body.
- It constructs the correct endpoint using the SINCH_SERVICE_PLAN_ID.
- It makes a POST request with the payload required by the Sinch /batches endpoint. Note that even for a single message, the to field expects an array.
- Basic logging is included.
- Error handling catches API request failures and re-throws a more informative, structured error.
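If you want to sanity-check the client in isolation before wiring up the queue, a throwaway script along these lines works (the file name and recipient number are hypothetical, and running it sends a real, billable SMS):

// scripts/test-send.js - hypothetical one-off smoke test for the Sinch client
import { sendSms } from '../src/lib/sinchClient.js';

const result = await sendSms(
  '+12025550123', // placeholder; use a real number you control
  process.env.SINCH_NUMBER,
  'Test message from the bulk SMS guide'
);
// The Sinch batches endpoint responds with details of the created batch,
// including an id you can look up later in the dashboard.
console.log('Sinch response:', result);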
4. Define API Routes:
Create the route handler for submitting bulk SMS jobs.
// src/routes/smsRoutes.js
import smsQueue from '../config/queue.js';
import prisma from '../lib/prismaClient.js';
import { randomUUID } from 'crypto';

// Define JSON Schemas for request validation
const sendBulkSchema = {
  body: {
    type: 'object',
    required: ['recipients', 'message'],
    properties: {
      recipients: {
        type: 'array',
        minItems: 1,
        items: {
          type: 'string',
          // Basic E.164 format check (a +, then 2-15 digits), anchored at both ends.
          // This regex is deliberately simple; consider a dedicated phone-number
          // library for production validation.
          pattern: '^\\+[1-9]\\d{1,14}$',
        },
      },
      message: {
        type: 'string',
        minLength: 1,
        maxLength: 1600, // Adjust based on Sinch limits / concatenation needs
      },
    },
  },
  response: {
    202: { // Use the 202 Accepted status code
      type: 'object',
      properties: {
        message: { type: 'string' },
        batchId: { type: 'string', format: 'uuid' },
        jobCount: { type: 'integer' },
      },
    },
    // Schema for error responses
    500: {
      type: 'object',
      properties: {
        message: { type: 'string' },
      },
    },
  },
};

const getStatusSchema = {
  params: {
    type: 'object',
    required: ['batchId'],
    properties: {
      batchId: { type: 'string', format: 'uuid' },
    },
  },
  response: {
    200: {
      type: 'object',
      properties: {
        batchId: { type: 'string', format: 'uuid' },
        status: { type: 'string', enum: ['PENDING', 'PROCESSING', 'COMPLETED', 'COMPLETED_WITH_ERRORS', 'FAILED'] },
        totalJobs: { type: 'integer' },
        processedJobs: { type: 'integer' },
        successfulJobs: { type: 'integer' },
        failedJobs: { type: 'integer' },
        createdAt: { type: 'string', format: 'date-time' },
        updatedAt: { type: 'string', format: 'date-time' },
        jobs: {
          type: 'array',
          items: {
            type: 'object',
            properties: {
              id: { type: 'string' }, // Job ID in the database
              recipient: { type: 'string' },
              status: { type: 'string', enum: ['PENDING', 'ACTIVE', 'COMPLETED', 'FAILED'] },
              attemptsMade: { type: 'integer' },
              failedReason: { type: ['string', 'null'] },
              processedOn: { type: ['string', 'null'], format: 'date-time' },
              finishedOn: { type: ['string', 'null'], format: 'date-time' },
            },
          },
        },
      },
    },
    404: {
      type: 'object',
      properties: {
        message: { type: 'string' },
      },
    },
    500: {
      type: 'object',
      properties: {
        message: { type: 'string' },
      },
    },
  },
};
async function smsRoutes(fastify, options) {
  fastify.post('/send-bulk', { schema: sendBulkSchema }, async (request, reply) => {
    const { recipients, message } = request.body;
    const sinchNumber = process.env.SINCH_NUMBER;
    const batchId = randomUUID(); // Generate a unique ID for this batch

    if (!sinchNumber) {
      fastify.log.error('Server configuration error: SINCH_NUMBER not set.');
      reply.code(500).send({ message: 'Server configuration error: Sinch number not set.' });
      return;
    }

    let batch;
    try {
      // 1. Create a batch record in the database
      batch = await prisma.smsBatch.create({
        data: {
          id: batchId,
          totalJobs: recipients.length,
          messageBody: message, // Optional: store the message body with the batch
          status: 'PENDING', // Initial status
        },
      });
      fastify.log.info(`Created batch ${batchId} with ${recipients.length} recipients.`);
    } catch (dbError) {
      fastify.log.error(`Database error creating batch ${batchId}: ${dbError.message}`);
      reply.code(500).send({ message: 'Failed to initiate bulk SMS batch.' });
      return;
    }

    // 2. Create individual job data and queue the jobs
    const jobPromises = recipients.map((recipient) => {
      const jobId = randomUUID(); // Unique ID for the individual message job
      const jobData = {
        jobId, // Include our DB job ID in the data
        batchId,
        to: recipient,
        from: sinchNumber,
        body: message,
      };
      // Add the job to the queue. We use our jobId as the BullMQ job ID
      // for easier cross-referencing between the queue and the database.
      return smsQueue.add('send-single-sms', jobData, { jobId });
    });

    try {
      const addedJobs = await Promise.all(jobPromises);
      fastify.log.info(`Successfully added ${addedJobs.length} jobs to queue for batch ${batchId}`);
      // Respond with 202 Accepted: the request is accepted for processing, but not yet complete.
      reply.code(202).send({
        message: 'Bulk SMS request accepted and queued for processing.',
        batchId,
        jobCount: addedJobs.length,
      });
    } catch (queueError) {
      fastify.log.error(`Error adding jobs to queue for batch ${batchId}: ${queueError.message}`);
      // Attempt to mark the batch as failed since jobs couldn't be queued
      try {
        await prisma.smsBatch.update({
          where: { id: batchId },
          data: { status: 'FAILED', failedReason: 'Failed to queue jobs' },
        });
      } catch (dbError) {
        fastify.log.error(`Failed to update batch ${batchId} status to FAILED after queue error: ${dbError.message}`);
      }
      reply.code(500).send({ message: 'Failed to queue SMS jobs.' });
    }
  });
  // --- Status Endpoint ---
  fastify.get('/status/:batchId', { schema: getStatusSchema }, async (request, reply) => {
    const { batchId } = request.params;

    try {
      const batch = await prisma.smsBatch.findUnique({
        where: { id: batchId },
        include: {
          // Include associated message jobs
          messageJobs: {
            orderBy: { createdAt: 'asc' }, // Order jobs if needed
          },
        },
      });

      if (!batch) {
        reply.code(404).send({ message: 'Batch not found.' });
        return;
      }

      // Calculate summary stats
      const processedJobs = batch.messageJobs.filter((j) => ['COMPLETED', 'FAILED'].includes(j.status)).length;
      const successfulJobs = batch.messageJobs.filter((j) => j.status === 'COMPLETED').length;
      const failedJobs = batch.messageJobs.filter((j) => j.status === 'FAILED').length;

      // Determine the overall batch status from the individual job states.
      // A batch-level FAILED status (set when queuing failed) is never overridden.
      let overallStatus = batch.status;
      if (overallStatus !== 'FAILED') {
        if (batch.totalJobs > 0 && processedJobs === batch.totalJobs) {
          // All jobs finished: COMPLETED, or COMPLETED_WITH_ERRORS if any failed
          overallStatus = failedJobs > 0 ? 'COMPLETED_WITH_ERRORS' : 'COMPLETED';
        } else if (
          batch.status === 'PENDING' &&
          (processedJobs > 0 || batch.messageJobs.some((j) => j.status === 'ACTIVE'))
        ) {
          // Only transition from PENDING to PROCESSING automatically;
          // never revert a terminal status back to PROCESSING.
          overallStatus = 'PROCESSING';
        }

        // Persist the new status asynchronously if it changed (fire and forget)
        if (overallStatus !== batch.status) {
          prisma.smsBatch
            .update({ where: { id: batchId }, data: { status: overallStatus, updatedAt: new Date() } })
            .catch((err) => fastify.log.error(`Failed to auto-update batch ${batchId} status to ${overallStatus}: ${err.message}`));
        }
      }

      reply.code(200).send({
        batchId: batch.id,
        status: overallStatus, // The calculated current status
        totalJobs: batch.totalJobs,
        processedJobs,
        successfulJobs,
        failedJobs,
        createdAt: batch.createdAt,
        updatedAt: batch.updatedAt, // Reflects the last DB update time
        jobs: batch.messageJobs.map((job) => ({ // Format job details for the response
          id: job.id,
          recipient: job.recipient,
          status: job.status,
          attemptsMade: job.attemptsMade,
          failedReason: job.failedReason,
          processedOn: job.processedOn,
          finishedOn: job.finishedOn,
        })),
      });
    } catch (error) {
      fastify.log.error(`Error fetching status for batch ${batchId}: ${error.message}`);
      reply.code(500).send({ message: 'Internal server error retrieving batch status.' });
    }
  });
}

export default smsRoutes;
- We import the smsQueue and prisma client.
- sendBulkSchema and getStatusSchema use Fastify's built-in JSON Schema validation. The recipient pattern is a basic E.164 check anchored at both ends; consider a dedicated phone-number library for production.
- The /send-bulk endpoint:
  - Generates a unique batchId.
  - Creates an SmsBatch record in the database first.
  - Loops through the recipients array.
  - For each recipient, it creates a jobData object containing all necessary info (to, from, body, batchId, jobId).
  - It adds the job to the smsQueue using smsQueue.add(), giving the job a name ('send-single-sms'), passing the jobData, and using our generated jobId as the BullMQ job ID.
  - Promise.all waits for all jobs to be added to the queue.
  - Responds with 202 Accepted, indicating the task is queued, along with the batchId.
  - Includes error handling for database and queue operations.
- The /status/:batchId endpoint:
  - Retrieves the batchId from the URL parameters.
  - Queries the database using Prisma to find the SmsBatch and its associated SmsMessageJob records.
  - Calculates summary statistics.
  - Determines an overall status for the batch from the individual job statuses, persisting the new status asynchronously when it changes (e.g., PENDING to PROCESSING, or PROCESSING to COMPLETED).
  - Returns the batch details and the status of each individual message job.
  - Handles the case where the batchId is not found (404).
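Once the server from the next step is running (routes are registered under the /api/sms prefix), the endpoints can be exercised with curl; the recipient numbers and the batchId placeholder below are examples:

# Queue a bulk send (expects 202 with a batchId)
curl -X POST http://localhost:3000/api/sms/send-bulk \
  -H "Content-Type: application/json" \
  -d '{"recipients": ["+12025550123", "+12025550124"], "message": "Hello from Fastify + Sinch!"}'

# Check batch progress using the batchId from the previous response
curl http://localhost:3000/api/sms/status/<batchId>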
5. Create the Fastify Server:
Set up the main server entry point.
// src/server.js
import Fastify from 'fastify';
import { config } from 'dotenv';
import smsRoutes from './routes/smsRoutes.js';
import prisma from './lib/prismaClient.js'; // Import prisma to ensure it connects on start
import rateLimit from '@fastify/rate-limit';
import helmet from '@fastify/helmet';
import Redis from 'ioredis'; // Needed for distributed rate limiting
config(); // Load .env variables
const fastify = Fastify({
  logger: {
    level: process.env.LOG_LEVEL || 'info', // Use env var for log level
    // Use pino-pretty only in development for readability
    transport: process.env.NODE_ENV !== 'production'
      ? { target: 'pino-pretty' }
      : undefined,
  },
  // Enable trust proxy if running behind a load balancer or reverse proxy,
  // so rate limiting sees the real client IP:
  // trustProxy: true,
});
// --- Register Plugins ---

// Security headers
await fastify.register(helmet, {
  // Example: configure a Content Security Policy if needed
  // contentSecurityPolicy: false // Disable CSP if it causes issues initially
});

// Rate limiting
// Ensure REDIS_URL is set for this to work effectively in scaled environments
if (process.env.REDIS_URL) {
  try {
    const redisClient = new Redis(process.env.REDIS_URL, {
      // Optional: tune the Redis connection used by the rate limiter
      maxRetriesPerRequest: 3, // Example option
      showFriendlyErrorStack: process.env.NODE_ENV !== 'production',
    });
    redisClient.on('error', (err) => fastify.log.error('Rate Limit Redis Error:', err));

    await fastify.register(rateLimit, {
      max: 100, // Max 100 requests per window per IP address (adjust as needed)
      timeWindow: '1 minute', // Time window duration
      redis: redisClient, // Use Redis for distributed rate limiting
      // keyGenerator: function (request) { /* custom key logic if needed */ },
      // allowList: ['127.0.0.1'], // IPs to bypass the rate limit
    });
    fastify.log.info('Rate limiting enabled with Redis backend.');
  } catch (redisError) {
    fastify.log.error('Failed to initialize Redis for rate limiting:', redisError);
    // Decide whether to fall back to in-memory rate limiting or exit.
    // Fallback (less effective when scaled):
    // await fastify.register(rateLimit, { max: 100, timeWindow: '1 minute' });
    // fastify.log.warn('Rate limiting falling back to in-memory store.');
    // Or exit if Redis is critical:
    // process.exit(1);
  }
} else {
  // Fall back to in-memory rate limiting if REDIS_URL is not set
  await fastify.register(rateLimit, { max: 100, timeWindow: '1 minute' });
  fastify.log.warn('REDIS_URL not set. Rate limiting falling back to in-memory store (less effective when scaled).');
}
// --- Register Routes ---
await fastify.register(smsRoutes, { prefix: '/api/sms' });

// --- Graceful Shutdown ---
const signals = ['SIGINT', 'SIGTERM'];
signals.forEach((signal) => {
  process.on(signal, async () => {
    fastify.log.info(`Received ${signal}, shutting down gracefully...`);
    await fastify.close();
    // Close the Prisma client connection
    await prisma.$disconnect();
    // Optional: close BullMQ queue connections if needed (often handled internally)
    // await smsQueue.close(); // Example if the queue instance is imported here
    fastify.log.info('Server shut down.');
    process.exit(0);
  });
});
// --- Start Server ---
const start = async () => {
  try {
    const port = parseInt(process.env.PORT || '3000', 10);
    const host = process.env.HOST || '0.0.0.0';
    await fastify.listen({ port, host });
    fastify.log.info(`Server listening on port ${port}`);

    // Log routes only in development for clarity
    if (process.env.NODE_ENV !== 'production') {
      console.log(fastify.printRoutes());
    }

    // Verify the Prisma client can reach the database (optional check)
    try {
      await prisma.$connect();
      fastify.log.info('Prisma client connected successfully.');
    } catch (err) {
      fastify.log.error('Prisma client failed to connect:', err);
      // Optionally exit if the DB connection is critical:
      // process.exit(1);
    }
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};

start();
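To bring up the whole system locally (assuming the Prisma models are in place and src/worker.js is implemented as described in the later worker section):

docker-compose up -d      # start Postgres and Redis
npm run prisma:migrate    # apply the schema to the database
npm run prisma:generate   # generate the Prisma Client
npm run dev               # terminal 1: the API server
npm run dev:worker        # terminal 2: the background worker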