Build Scalable Bulk SMS Broadcasting with Fastify, Sinch API, and BullMQ
Sending bulk SMS messages – whether for marketing campaigns, notifications, or alerts – requires more than just a simple loop calling an API. A production-ready system needs to handle potential failures, manage rate limits, provide status tracking, and scale reliably.
This comprehensive tutorial shows you how to send bulk SMS using Node.js with the high-performance Fastify framework, leveraging the Sinch SMS API for message delivery and Redis with BullMQ for robust background job queuing. Learn how to build a scalable SMS broadcasting system that handles thousands of messages with automatic retries, error handling, and real-time status tracking.
What you'll build:

- A Fastify API endpoint (`/send-bulk`) that accepts a list of recipients and a message, validates the input, and responds immediately with a batch ID.
- A Redis-backed BullMQ queue holding one job per message, with automatic retries and exponential backoff.
- A dedicated Node.js worker process that consumes jobs and sends each SMS through the Sinch API.
- PostgreSQL tracking (via Prisma) of every batch and message, exposed through a `/status/:batchId` endpoint.
Core Technologies:
- Node.js with Fastify: the high-performance API layer.
- Sinch SMS API: message delivery, with regional endpoints in the US (us.sms.api.sinch.com) and EU, chosen based on data protection requirements.
- BullMQ with Redis: background job queuing with retries.
- PostgreSQL with Prisma: persistent batch and message status tracking.
System Architecture:
A client sends a bulk request to the Fastify API, which validates it, creates a batch record in PostgreSQL, and enqueues one job per recipient in the Redis-backed BullMQ queue. A separate worker process consumes those jobs, calls the Sinch API for each message, and writes status updates back to PostgreSQL, where the `/status/:batchId` endpoint reads them.
Prerequisites:
- Node.js v20 or later (required by Fastify v5) and npm.
- Docker and Docker Compose for local PostgreSQL and Redis, or access to standalone instances.
- A Sinch account with a Service Plan ID, an API Token, and a configured virtual number.
- curl or a tool like Postman for testing the API.
Final Outcome:
By the end of this guide, you'll have a scalable and reliable Node.js application capable of accepting requests to send thousands of SMS messages, processing them reliably in the background, handling errors gracefully, and allowing clients to check the status of their bulk sends.
1. Setting up the project
Initialize your Node.js project and install the necessary dependencies.
1. Initialize the project:
Open your terminal, create a project directory, and navigate into it.
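For example (the project name is illustrative):

```bash
mkdir bulk-sms-broadcast
cd bulk-sms-broadcast
npm init -y
```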
This creates a basic `package.json` file.

2. Install dependencies:
Install Fastify for the web server, Axios for HTTP requests, BullMQ for the queue, ioredis as the Redis client for BullMQ, dotenv for environment variables, Prisma for database interaction, and pino-pretty for human-readable logs during development.
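From the project root:

```bash
npm install fastify axios bullmq ioredis dotenv @prisma/client pino-pretty @fastify/rate-limit @fastify/helmet
npm install --save-dev prisma nodemon
```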
- `fastify`: The web framework.
- `axios`: To make requests to the Sinch API.
- `bullmq`: The job queue library.
- `ioredis`: Redis client required by BullMQ and potentially rate limiting.
- `dotenv`: To load environment variables from a `.env` file.
- `@prisma/client`: Prisma's database client.
- `pino-pretty`: Makes Fastify's logs readable during development.
- `@fastify/rate-limit`: Plugin for API rate limiting.
- `@fastify/helmet`: Plugin for setting security headers.
- `prisma` (dev): The Prisma CLI for migrations and generation.
- `nodemon` (dev): Automatically restarts the server during development when files change.
3. Configure Development Scripts:
Open your `package.json` and modify the `scripts` section (a sketch follows this list):

- `start`: Runs the main API server for production.
- `dev`: Runs the API server in development mode using `nodemon` and `pino-pretty`.
- `worker`: Runs the background job worker for production.
- `dev:worker`: Runs the worker in development mode using `nodemon`.
- `prisma:migrate`: Applies database schema changes.
- `prisma:generate`: Generates the Prisma Client based on your schema.
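A plausible `scripts` block matching those descriptions (the entry-point paths assume the project structure created in the next step):

```json
{
  "scripts": {
    "start": "node src/server.js",
    "dev": "nodemon src/server.js | pino-pretty",
    "worker": "node src/worker.js",
    "dev:worker": "nodemon src/worker.js",
    "prisma:migrate": "prisma migrate dev",
    "prisma:generate": "prisma generate"
  }
}
```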
4. Set up Project Structure:
Create the following directory structure within your project root:
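One possible layout (each entry is described below):

```
.
├── prisma/
├── src/
│   ├── config/
│   ├── lib/
│   ├── routes/
│   ├── workers/
│   ├── server.js
│   └── worker.js
├── .env
├── .gitignore
└── docker-compose.yml
```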
- `prisma/`: Will contain your database schema and migrations.
- `src/`: Contains all the application source code.
- `src/config/`: Configuration files (e.g., queue setup).
- `src/lib/`: Shared libraries/utilities (e.g., Prisma client, Sinch client).
- `src/routes/`: Fastify route handlers.
- `src/workers/`: Background worker logic.
- `src/server.js`: Entry point for the Fastify API server.
- `src/worker.js`: Entry point for the BullMQ worker process.
- `.env`: Stores environment variables (API keys, database URLs, etc.).
- `.gitignore`: Specifies files/directories to ignore in Git.
- `docker-compose.yml`: Defines local development services (Postgres, Redis).

5. Create `.gitignore`:

Create a `.gitignore` file in the project root to avoid committing sensitive information and unnecessary files:
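A minimal example:

```
node_modules/
.env
*.log
```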
6. Set up Environment Variables (`.env`):

Create a `.env` file in the project root. This file will hold your secrets and configuration. Never commit this file to version control.
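A sketch of the `.env` file used throughout this guide. `SINCH_FROM_NUMBER` and `SMS_QUEUE_NAME` are illustrative names, since the guide doesn't fix them:

```dotenv
# Sinch credentials (see below for where to find these)
SINCH_SERVICE_PLAN_ID=your_service_plan_id
SINCH_API_TOKEN=your_api_token
SINCH_FROM_NUMBER=+12xxxxxxxxxx
SINCH_API_BASE_URL=https://us.sms.api.sinch.com

# Infrastructure
DATABASE_URL=postgresql://postgres:change_me@localhost:5432/bulk_sms
REDIS_URL=redis://localhost:6379
SMS_QUEUE_NAME=sms-queue
PORT=3000
```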
How to obtain Sinch Credentials:
Log in to the Sinch Customer Dashboard and navigate to SMS -> APIs. Copy your Service Plan ID, click "Show" to reveal and copy your API Token, and note your Sinch virtual number in E.164 format (e.g., +12xxxxxxxxxx). If your account lives in the EU region, use the EU base URL instead and make sure `SINCH_API_BASE_URL` reflects this.
7. Set up Local Database and Redis (Using Docker):
For local development, Docker Compose is an excellent way to manage dependencies like Redis and PostgreSQL. Create a `docker-compose.yml` file in the project root:
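A minimal sketch; the image versions and Postgres credentials are placeholders you should change:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: change_me
      POSTGRES_DB: bulk_sms
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```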
Run `docker-compose up -d` in your terminal to start the database and Redis containers in the background. Ensure your `.env` file's `REDIS_URL` and `DATABASE_URL` match the credentials and ports defined here (and change the default password!).

2. Implementing core functionality (Queuing)
Directly sending SMS messages within the API request handler is inefficient and unreliable for bulk operations. A loop sending messages one by one would block the server, timeout easily, and offer no mechanism for retries or status tracking if the server crashes.
The solution is a background job queue. The API endpoint will quickly validate the request and add individual SMS sending tasks (jobs) to a Redis-backed queue (BullMQ). A separate worker process will then consume these jobs independently.
1. Configure BullMQ Queue:
Create a file to manage the queue instance.
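A sketch of `src/config/queue.js` consistent with the notes below; the queue-name variable is an assumption:

```javascript
// src/config/queue.js
require('dotenv').config();
const { Queue } = require('bullmq');

// Parse REDIS_URL with the built-in URL constructor
// (handles auth, non-default ports, etc.).
const redisUrl = new URL(process.env.REDIS_URL || 'redis://localhost:6379');

const connection = {
  host: redisUrl.hostname,
  port: Number(redisUrl.port) || 6379,
  password: redisUrl.password || undefined,
};

const smsQueue = new Queue(process.env.SMS_QUEUE_NAME || 'sms-queue', {
  connection,
  defaultJobOptions: {
    attempts: 3,                                   // retry failed jobs up to 3 times
    backoff: { type: 'exponential', delay: 1000 }, // 1s, 2s, 4s between retries
    removeOnComplete: true,                        // drop successful jobs from Redis
    removeOnFail: 500,                             // keep the last 500 failures for debugging
  },
});

module.exports = { smsQueue };
```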
The queue module:

- Loads environment variables via `dotenv`.
- Parses `REDIS_URL` using the built-in `URL` constructor, which is more robust than basic string splitting and handles complex URLs (with auth, different ports, etc.).
- Creates the `Queue` instance, naming it based on the environment variable.
- Sets `defaultJobOptions`:
  - `attempts: 3`: If a job fails (e.g., the Sinch API is down), BullMQ will retry it up to 3 times.
  - `backoff`: Specifies how long to wait between retries. `exponential` increases the delay after each failure (1s, 2s, 4s).
  - `removeOnComplete`: Cleans up successful jobs from Redis to save space.
  - `removeOnFail`: Keeps a history of the last 500 failed jobs for debugging.
3. Building the API layer (Fastify)
Now, let's create the Fastify server and the API endpoints.
1. Initialize Prisma:
Run the Prisma init command:
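Using the Prisma CLI from your dev dependencies:

```bash
npx prisma init
```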
This creates the `prisma` directory and a basic `schema.prisma` file. Make sure the `url` in `prisma/schema.prisma` points to your `DATABASE_URL` environment variable, and add the models the rest of this guide relies on:
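A sketch of `prisma/schema.prisma` assembled from the fields listed in the FAQ at the end of this guide; plain string statuses (rather than Prisma enums) are a simplifying assumption:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model SmsBatch {
  id           String          @id @default(uuid())
  totalJobs    Int
  status       String          @default("PENDING") // PENDING/PROCESSING/COMPLETED/COMPLETED_WITH_ERRORS/FAILED
  messageBody  String
  failedReason String?
  createdAt    DateTime        @default(now())
  updatedAt    DateTime        @updatedAt
  jobs         SmsMessageJob[]
}

model SmsMessageJob {
  id           String    @id @default(uuid())
  batchId      String
  batch        SmsBatch  @relation(fields: [batchId], references: [id])
  recipient    String
  status       String    @default("PENDING") // PENDING/ACTIVE/COMPLETED/FAILED
  attemptsMade Int       @default(0)
  failedReason String?
  processedOn  DateTime?
  finishedOn   DateTime?
  createdAt    DateTime  @default(now())
  updatedAt    DateTime  @updatedAt

  @@index([batchId])
  @@index([status])
}
```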
2. Set up Prisma Client:
Create a singleton instance of the Prisma client to be reused across the application.
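A minimal `src/lib/prisma.js`:

```javascript
// src/lib/prisma.js — share one PrismaClient across the app
const { PrismaClient } = require('@prisma/client');

const prisma = new PrismaClient();

module.exports = { prisma };
```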
3. Create the Sinch Client:
Create a module to encapsulate interactions with the Sinch API.
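A sketch of `src/lib/sinchClient.js`. The `/xms/v1/{service_plan_id}/batches` path follows Sinch's SMS REST API; double-check it against the current Sinch documentation:

```javascript
// src/lib/sinchClient.js — minimal Sinch SMS client
require('dotenv').config();
const axios = require('axios');

const client = axios.create({
  baseURL: process.env.SINCH_API_BASE_URL, // e.g. https://us.sms.api.sinch.com
  timeout: 10000,
  headers: {
    Authorization: `Bearer ${process.env.SINCH_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
});

// Send a single SMS via the Sinch /batches endpoint.
// Even for one message, `to` must be an array.
async function sendSms(to, from, body) {
  const servicePlanId = process.env.SINCH_SERVICE_PLAN_ID;
  const response = await client.post(`/xms/v1/${servicePlanId}/batches`, {
    from,
    to: [to],
    body,
  });
  return response.data;
}

module.exports = { sendSms };
```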
The Sinch client module:

- Uses `axios.create` to configure a base URL and default headers (including the crucial `Authorization` bearer token). A timeout is added.
- Exposes a `sendSms` function that takes the recipient, sender number, and message body.
- Builds the request path from your `SINCH_SERVICE_PLAN_ID`.
- Sends a `POST` request with the payload required by the Sinch `/batches` endpoint. Note that even for a single message, the `to` field expects an array.
4. Define API Routes:
Create the route handler for submitting bulk SMS jobs.
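A sketch of `src/routes/sms.js` consistent with the walkthrough below; the exact schema shapes and the `SINCH_FROM_NUMBER` variable are assumptions:

```javascript
// src/routes/sms.js — bulk send + status routes
const { randomUUID } = require('crypto');
const { smsQueue } = require('../config/queue');
const { prisma } = require('../lib/prisma');

const sendBulkSchema = {
  body: {
    type: 'object',
    required: ['recipients', 'message'],
    properties: {
      recipients: {
        type: 'array',
        minItems: 1,
        items: { type: 'string', pattern: '^\\+[1-9]\\d{1,14}$' }, // E.164
      },
      message: { type: 'string', minLength: 1, maxLength: 1600 },
    },
  },
};

const getStatusSchema = {
  params: {
    type: 'object',
    required: ['batchId'],
    properties: { batchId: { type: 'string' } },
  },
};

async function smsRoutes(fastify) {
  fastify.post('/send-bulk', { schema: sendBulkSchema }, async (request, reply) => {
    const { recipients, message } = request.body;
    const batchId = randomUUID();

    // Create the batch record first so status queries work immediately.
    await prisma.smsBatch.create({
      data: { id: batchId, totalJobs: recipients.length, messageBody: message },
    });

    // Queue one job per recipient; our own jobId doubles as the BullMQ job ID.
    await Promise.all(
      recipients.map((to) => {
        const jobId = randomUUID();
        const jobData = { to, from: process.env.SINCH_FROM_NUMBER, body: message, batchId, jobId };
        return smsQueue.add('send-single-sms', jobData, { jobId });
      })
    );

    reply.code(202).send({ batchId, queued: recipients.length });
  });

  fastify.get('/status/:batchId', { schema: getStatusSchema }, async (request, reply) => {
    const { batchId } = request.params;
    const batch = await prisma.smsBatch.findUnique({
      where: { id: batchId },
      include: { jobs: true },
    });
    if (!batch) return reply.code(404).send({ error: 'Batch not found' });
    return batch;
  });
}

module.exports = smsRoutes;
```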
The route module:

- Imports the `smsQueue` and `prisma` client.
- `sendBulkSchema` and `getStatusSchema` use Fastify's built-in JSON Schema validation. The regex pattern in `sendBulkSchema` is corrected to include an end anchor (`$`) and remove the erroneous trailing comma.
- The `/send-bulk` endpoint:
  - Generates a unique `batchId`.
  - Creates the `SmsBatch` record in the database first.
  - Iterates over the `recipients` array.
  - Builds a `jobData` object containing all necessary info (`to`, `from`, `body`, `batchId`, `jobId`).
  - Adds each job to the `smsQueue` using `smsQueue.add()`. We give the job a name (`'send-single-sms'`) and pass the `jobData`. We also use our generated `jobId` as the BullMQ job ID.
  - `Promise.all` waits for all jobs to be added to the queue.
  - Responds with `202 Accepted`, indicating the task is queued, along with the `batchId`.
- The `/status/:batchId` endpoint:
  - Reads the `batchId` from the URL parameters.
  - Fetches the `SmsBatch` and its associated `SmsMessageJob` records.
  - Returns a 404 if the `batchId` is not found.
5. Create the Fastify Server:
Set up the main server entry point.
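A sketch of `src/server.js` that registers the security and rate-limiting plugins installed earlier:

```javascript
// src/server.js — Fastify API entry point
require('dotenv').config();
const fastify = require('fastify')({ logger: true });

async function start() {
  // Security headers and rate limiting (100 requests/minute per IP here).
  await fastify.register(require('@fastify/helmet'));
  await fastify.register(require('@fastify/rate-limit'), {
    max: 100,
    timeWindow: '1 minute',
  });

  await fastify.register(require('./routes/sms'));

  try {
    await fastify.listen({ port: Number(process.env.PORT) || 3000, host: '0.0.0.0' });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
}

start();
```

With the server and worker running, a quick smoke test might look like this (hypothetical number):

```bash
curl -X POST http://localhost:3000/send-bulk \
  -H 'Content-Type: application/json' \
  -d '{"recipients": ["+12025550100"], "message": "Hello from Fastify + Sinch!"}'
```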
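6. Create the Worker Process:

The worker is the other half of the system: it consumes jobs from the queue, sends each message through the Sinch client, and records per-message status in the database. A sketch of `src/worker.js`, under the same naming assumptions as above; the guard in the `failed` handler means a row is only marked FAILED once retries are exhausted:

```javascript
// src/worker.js — BullMQ worker entry point
require('dotenv').config();
const { Worker } = require('bullmq');
const { prisma } = require('./lib/prisma');
const { sendSms } = require('./lib/sinchClient');

const redisUrl = new URL(process.env.REDIS_URL || 'redis://localhost:6379');
const connection = {
  host: redisUrl.hostname,
  port: Number(redisUrl.port) || 6379,
  password: redisUrl.password || undefined,
};

const worker = new Worker(
  process.env.SMS_QUEUE_NAME || 'sms-queue',
  async (job) => {
    const { to, from, body, batchId, jobId } = job.data;

    // Mark the message as in progress (creates the row on first attempt).
    await prisma.smsMessageJob.upsert({
      where: { id: jobId },
      create: { id: jobId, batchId, recipient: to, status: 'ACTIVE' },
      update: { status: 'ACTIVE', attemptsMade: job.attemptsMade },
    });

    await sendSms(to, from, body); // throws on failure, so BullMQ retries

    await prisma.smsMessageJob.update({
      where: { id: jobId },
      data: { status: 'COMPLETED', finishedOn: new Date() },
    });
  },
  { connection, concurrency: 5 }
);

// Record permanent failures once retries are exhausted.
worker.on('failed', async (job, err) => {
  if (job && job.attemptsMade >= (job.opts.attempts || 1)) {
    await prisma.smsMessageJob.update({
      where: { id: job.data.jobId },
      data: { status: 'FAILED', failedReason: err.message, attemptsMade: job.attemptsMade },
    });
  }
});

// Graceful shutdown so in-flight jobs can finish.
process.on('SIGTERM', async () => {
  await worker.close();
  process.exit(0);
});
```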
Frequently Asked Questions (FAQ)
How do I send bulk SMS messages with Fastify and Sinch API?
Build a Fastify API endpoint that accepts a list of recipients and a message body, then use BullMQ to queue individual SMS jobs for background processing. Install dependencies with
`npm install fastify axios bullmq ioredis @prisma/client`, configure Sinch API credentials (Service Plan ID, API Token, and phone number in E.164 format), set up Redis and PostgreSQL, and create a worker process that consumes jobs from the queue and sends SMS via Sinch's REST API. The system handles retries automatically using BullMQ's exponential backoff (3 attempts with 1-second initial delay) and tracks message status in PostgreSQL using Prisma ORM. Learn more about Sinch SMS API integration with Node.js in the official documentation.
Which Node.js version should I use for a Fastify v5 bulk SMS system?
Use Node.js v20 LTS "Iron" (Maintenance LTS through April 2026) or Node.js v22 LTS "Jod" (Active LTS through October 2025, Maintenance LTS through April 2027). Fastify v5 requires Node.js v20+ minimum and offers 5 – 10% performance improvements over v4. Node.js v18 reached End of Life on April 30, 2025 and no longer receives security updates. Node.js v22 is recommended for new projects as it provides Active LTS support throughout 2025 with the latest features.
How do I configure BullMQ v5 for reliable SMS job processing?
Create a BullMQ Queue instance with explicit connection object (required in v5.x), set retry attempts to 3 with exponential backoff strategy (initial 1-second delay), enable automatic job cleanup (
`removeOnComplete: true`, `removeOnFail: 500`), and configure the Redis connection using the ioredis client. BullMQ v5.60.0 no longer supports integer job IDs – use string UUIDs instead. Set up event listeners for monitoring queue health (`error`, `failed`, `completed` events) and implement graceful shutdown by closing queue connections on SIGINT/SIGTERM signals.

What Sinch API credentials do I need for bulk SMS?
Obtain three credentials from your Sinch Customer Dashboard: (1) Service Plan ID found under SMS → APIs, (2) API Token (click "Show" to reveal), and (3) Sinch Virtual Number in E.164 format (e.g., +12025550100) from your service plan details. Configure the API base URL based on your region:
`https://us.sms.api.sinch.com` for US or the EU endpoint for Europe. Sinch provides direct connections to 600+ mobile operators worldwide with 95% US population coverage. Store credentials securely in a `.env` file and never commit it to version control.

How do I implement error handling and retries for bulk SMS?
Use BullMQ's built-in retry mechanism with
`attempts: 3` and exponential backoff (`type: 'exponential'`, `delay: 1000`) to automatically retry failed jobs with increasing delays (1s, 2s, 4s). Wrap Sinch API calls in try-catch blocks, log errors with recipient context, store failure reasons in PostgreSQL via Prisma (`failedReason` field), and track attempt counts (`attemptsMade`). Implement job-level error handling in your worker processor to distinguish transient failures (network issues, rate limits) from permanent failures (invalid phone numbers). Use BullMQ's `failed` event listener to log jobs that exceed max retry attempts.
How do I set up rate limiting for a Fastify bulk SMS API?
Install
`@fastify/rate-limit` and configure it with a Redis backend for distributed rate limiting across multiple server instances. Set max requests per time window (e.g., `max: 100`, `timeWindow: '1 minute'`) based on your API capacity and Sinch API rate limits. Use a Redis connection for stateful tracking: `redis: new Redis(process.env.REDIS_URL)`. Implement IP-based limiting by default or custom key generation for user-based limits. Add an allowlist for trusted IPs, configure appropriate HTTP 429 responses, and log rate limit violations for monitoring.

What database schema should I use for tracking bulk SMS batches?
Implement two main tables: (1) SmsBatch with
`id` (UUID), `totalJobs`, `status` (PENDING/PROCESSING/COMPLETED/COMPLETED_WITH_ERRORS/FAILED), `messageBody`, `createdAt`, `updatedAt`, and `failedReason`, and (2) SmsMessageJob with `id` (UUID), `batchId` (foreign key), `recipient`, `status` (PENDING/ACTIVE/COMPLETED/FAILED), `attemptsMade`, `failedReason`, `processedOn`, `finishedOn`, and timestamps. Add indexes on `batchId` and `status` for faster queries. Use Prisma ORM v6.x with PostgreSQL v12+ for type-safe database access, automatic migrations, and relationship management between batches and individual message jobs.
How do I deploy a Fastify bulk SMS system to production?
Use environment-specific configurations: set
`NODE_ENV=production`, configure robust logging (replace pino-pretty with structured JSON logs), enable Fastify's trust proxy option if behind a load balancer, use managed Redis (AWS ElastiCache, Redis Cloud) and PostgreSQL (AWS RDS, Supabase) services, implement health check endpoints (`/health`) for load balancer monitoring, configure SSL/TLS termination, set up monitoring with Prometheus/Datadog for queue metrics, implement graceful shutdown handlers for SIGTERM signals, use PM2 or Docker for process management, and run separate worker instances scaled independently from API servers.
What are the cost considerations for sending bulk SMS with Sinch?
Sinch SMS pricing varies by destination country (US SMS typically $0.01 – $0.02 per message) with volume discounts available. Calculate infrastructure costs: Redis hosting ($10 – $50/month for managed instances), PostgreSQL hosting ($20 – $100/month depending on scale), Node.js hosting (serverless or container-based $20 – $200/month), and network egress fees. Optimize costs by batching messages efficiently, implementing intelligent retry logic to avoid wasted attempts on permanently failed numbers, using connection pooling for database and Redis, and monitoring queue metrics to right-size worker instances. Volume commitments with Sinch can reduce per-message costs significantly for high-volume senders.
How do I scale a Fastify bulk SMS system for millions of messages?
Scale horizontally by running multiple worker instances consuming from the same BullMQ queue, use Redis Cluster for distributed queue backend handling high throughput, implement database connection pooling with Prisma (set
`connection_limit` in `DATABASE_URL`), partition large batches into smaller chunks (1,000 – 10,000 messages per batch) for better progress tracking, use database read replicas for status queries to reduce load on the primary database, implement caching for frequently accessed batch status with Redis, monitor queue metrics (waiting jobs, processing rate, failed jobs) and auto-scale workers based on queue depth, and use CDN/edge caching for API documentation and status endpoints.
What security measures should I implement for a bulk SMS API?
Implement multiple security layers: (1) Use
`@fastify/helmet` for security headers (CSP, HSTS, X-Frame-Options), (2) Configure `@fastify/rate-limit` with a Redis backend to prevent abuse (100 requests per minute per IP), (3) Implement API key authentication middleware requiring valid tokens in the Authorization header, (4) Validate all inputs using Fastify JSON Schema validation with an E.164 phone number pattern (`^\\+[1-9]\\d{1,14}$`), (5) Store sensitive credentials in environment variables, never in code, (6) Use HTTPS/TLS for all API communication, (7) Implement CORS policies restricting allowed origins, (8) Log security events (failed auth, rate limit violations) for monitoring, (9) Use prepared statements via Prisma to prevent SQL injection, and (10) Implement IP allowlisting for admin endpoints.
How do I monitor and debug BullMQ job failures?
Enable detailed logging in your worker processor with recipient context, use BullMQ's event listeners (
`failed`, `error`, `stalled`) to capture and log job failures, query PostgreSQL for failed jobs with `status: 'FAILED'` and review the `failedReason` field, implement structured logging with correlation IDs linking batch → job → API call, set up error alerting using monitoring tools (Sentry, Datadog) triggered on high failure rates, use BullMQ's `removeOnFail: 500` to retain recent failures for analysis, create admin endpoints to manually retry failed jobs by re-queuing them with their original data, monitor Redis memory usage to prevent queue overflow, and implement a dead letter queue pattern for jobs that exceed max retry attempts and require manual intervention.