This guide details how to build a robust system within a RedwoodJS application to send bulk SMS messages using the Sinch SMS API. We'll cover everything from project setup and database modeling to secure Sinch integration, background job considerations, error handling, and deployment.
The final application will enable users (likely administrators) to manage a list of contacts and send custom broadcast messages to all of them efficiently and reliably via Sinch. This solves the common business need for targeted mass communication via SMS for alerts, notifications, or marketing updates.
This guide is intended for developers familiar with JavaScript and full-stack frameworks, looking to integrate reliable bulk SMS capabilities into their RedwoodJS application. Prerequisites include Node.js (v18+ recommended), Yarn, a Sinch account with API credentials, and a provisioned Sinch phone number.
Core Technologies:
- RedwoodJS: A full-stack JavaScript/TypeScript framework for startups. It provides structure via its API (GraphQL, Services), Web (React components), Prisma ORM, and tooling.
- Sinch SMS API: A powerful API for sending and receiving SMS messages globally. We'll use the official Sinch Node.js SDK.
- Node.js: The runtime environment for RedwoodJS's API side.
- Prisma: The default ORM in RedwoodJS, used for database modeling and access.
- PostgreSQL (or SQLite): The database to store contacts and broadcast information.
System Architecture:
[ User (Browser) ] <--> [ Redwood Web (React UI) ]
|
| (GraphQL Request: Send Broadcast)
V
[ Redwood API (GraphQL Server) ]
|
| (Calls Service Function)
V
[ Redwood Service (broadcasts.ts) ]
| |
| | (Reads Contacts/Writes Status)
| V
| [ Prisma ORM ] <--> [ Database (Contacts, Broadcasts) ]
|
| (Calls Sinch SDK)
V
[ Sinch Node.js SDK (@sinch/sdk-core) ]
|
| (Sends API Request)
V
[ Sinch SMS API ] --> [ SMS Delivered to Recipients ]
By the end of this guide, you will have a functional RedwoodJS application capable of:
- Managing a list of contacts (storing phone numbers).
- Creating broadcast messages.
- Initiating a bulk SMS send to all contacts via the Sinch API.
- Basic status tracking for broadcasts.
Note: This guide focuses primarily on the backend API implementation. While Section 11 outlines a potential frontend structure, detailed UI code is not provided.
1. Project Setup and Configuration
Let's initialize our RedwoodJS project and configure the necessary environment variables for Sinch.
1.1. Create RedwoodJS Project
Open your terminal and run:
yarn create redwood-app ./redwood-sinch-broadcast
cd redwood-sinch-broadcast
Follow the prompts, choosing TypeScript or JavaScript as you prefer (the code in this guide uses TypeScript file names and type imports, but the logic is the same in either). Select your preferred database (PostgreSQL recommended for production, SQLite for simplicity during development).
1.2. Install Sinch SDK
Navigate to the API workspace and add the Sinch Node.js SDK:
yarn workspace api add @sinch/sdk-core
1.3. Configure Environment Variables
RedwoodJS uses a `.env` file in the project root for environment variables. Create or open it and add your Sinch credentials:
# .env
# Database URL (Redwood adds this automatically based on your choice)
# Example for PostgreSQL:
# DATABASE_URL="postgresql://postgres:password@localhost:5432/sinch_broadcast_dev?schema=public"
# Example for SQLite:
# DATABASE_URL="file:./dev.db"
# Sinch API Credentials
# Obtain these from your Sinch Dashboard under API Credentials or Access Keys
# Project ID: Found on your Sinch Dashboard homepage or project settings.
SINCH_PROJECT_ID="YOUR_SINCH_PROJECT_ID" # Replace with your actual Project ID
# Key ID (Access Key ID): Generated under Access Keys in your Sinch Dashboard.
SINCH_KEY_ID="YOUR_SINCH_KEY_ID" # Replace with your actual Key ID
# Key Secret (Access Key Secret): **Only shown once** upon generation. Store it securely.
SINCH_KEY_SECRET="YOUR_SINCH_KEY_SECRET" # Replace with your actual Key Secret
# Sinch Phone Number: The virtual number provisioned in your Sinch account for sending SMS.
SINCH_FROM_NUMBER="YOUR_SINCH_VIRTUAL_NUMBER" # Replace with your provisioned number (e.g., +12345678900)
How to Obtain Sinch Credentials:
- Log in to your Sinch Customer Dashboard.
- Note your Project ID displayed prominently.
- Navigate to Access Keys in the left-hand menu.
- Generate a new Access Key. You will be given a Key ID and a Key Secret. Crucially, the Key Secret is shown only once. Copy and save it immediately and securely.
- Navigate to Numbers > Your Virtual Numbers to find the Sinch phone number you wish to use as the sender ID (`SINCH_FROM_NUMBER`). Ensure it's SMS-enabled.
Security: Never commit your `.env` file (or any file containing secrets) to version control. Ensure `.env` is listed in your `.gitignore` file (Redwood adds this by default). Use your deployment provider's secure environment variable management for production.
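A missing or misnamed variable otherwise only surfaces later as a confusing runtime error from the Sinch API, so it can help to fail fast when the API side boots. A minimal sketch (the helper name and its placement are assumptions, not a Redwood convention):

```javascript
// Hypothetical fail-fast check for the Sinch variables defined above.
// Call it once at API startup (e.g., from a lib file such as api/src/lib/sinch.js).
const REQUIRED_SINCH_VARS = [
  'SINCH_PROJECT_ID',
  'SINCH_KEY_ID',
  'SINCH_KEY_SECRET',
  'SINCH_FROM_NUMBER',
]

function assertSinchEnv(env = process.env) {
  const missing = REQUIRED_SINCH_VARS.filter((name) => !env[name])
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`)
  }
}
```

Throwing at boot with the full list of missing names is much easier to debug than a `401` from Sinch mid-request.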
1.4. Initialize Database
If you haven't already, set up your database connection string in `.env` and run the initial migration command:
yarn rw prisma migrate dev
# Follow prompts to name the initial migration (e.g., "initial setup")
This ensures your database is ready for the schemas we'll define next.
2. Database Schema and Data Layer (Prisma)
We need to model our contacts and broadcasts in the database.
2.1. Define Prisma Schema
Open api/db/schema.prisma
and define the models:
// api/db/schema.prisma
datasource db {
  provider = "postgresql" // Or "sqlite"
  url      = env("DATABASE_URL")
}

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native"]
}
// Contact model to store recipient information
model Contact {
  id          Int      @id @default(autoincrement())
  phoneNumber String   @unique // E.164 format recommended (e.g., +14155552671)
  name        String?  // Optional name
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  // Relation to BroadcastRecipient (optional, for detailed tracking)
  broadcastRecipients BroadcastRecipient[]
}

// Broadcast model to store message details and status
model Broadcast {
  id        Int       @id @default(autoincrement())
  message   String
  status    String    @default("PENDING") // PENDING, SENDING, SENT, FAILED
  sentAt    DateTime? // Timestamp when sending was initiated/completed
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt

  // Relation to BroadcastRecipient (optional, for detailed tracking)
  recipients BroadcastRecipient[]
}

// Join table for detailed per-recipient status (optional but recommended for large scale)
model BroadcastRecipient {
  id           Int       @id @default(autoincrement())
  broadcast    Broadcast @relation(fields: [broadcastId], references: [id])
  broadcastId  Int
  contact      Contact   @relation(fields: [contactId], references: [id])
  contactId    Int
  status       String    @default("PENDING") // PENDING, SENT, FAILED
  sinchBatchId String?   // Store the Sinch Batch ID if available
  deliveredAt  DateTime? // Timestamp when delivery confirmed (requires webhooks)
  failedReason String?   // Reason for failure

  @@unique([broadcastId, contactId]) // Ensure a contact is only linked once per broadcast
}
Why these models?
- `Contact`: Stores the essential information for each recipient. Using E.164 format for `phoneNumber` is crucial for international compatibility.
- `Broadcast`: Tracks the message content and the overall status of the broadcast job.
- `BroadcastRecipient` (optional but recommended): This join table tracks the status of each individual recipient within a broadcast. That is vital for retries, reporting, and understanding failures, especially with large lists. If you only need overall status, you could remove this model; we include it for robustness.
2.2. Apply Database Migrations
Run the migration command again to apply these schema changes to your database:
yarn rw prisma migrate dev
# Name the migration (e.g., "add contacts and broadcasts")
This updates your database schema and generates the corresponding Prisma Client types.
3. API Layer: GraphQL SDL and Services
Now, let's define the GraphQL interface and implement the backend logic in Redwood Services.
3.1. Define GraphQL Schema (SDL)
Create SDL files to define the types, queries, and mutations for managing contacts and broadcasts.
`api/src/graphql/contacts.sdl.ts`:
// api/src/graphql/contacts.sdl.ts
export const schema = gql`
type Contact {
id: Int!
phoneNumber: String!
name: String
createdAt: DateTime!
updatedAt: DateTime!
}
type Query {
contacts: [Contact!]! @requireAuth
contact(id: Int!): Contact @requireAuth
}
input CreateContactInput {
phoneNumber: String!
name: String
}
input UpdateContactInput {
phoneNumber: String
name: String
}
type Mutation {
createContact(input: CreateContactInput!): Contact! @requireAuth
updateContact(id: Int!, input: UpdateContactInput!): Contact! @requireAuth
deleteContact(id: Int!): Contact! @requireAuth
}
`
`api/src/graphql/broadcasts.sdl.ts`:
// api/src/graphql/broadcasts.sdl.ts
export const schema = gql`
type Broadcast {
id: Int!
message: String!
status: String!
sentAt: DateTime
createdAt: DateTime!
updatedAt: DateTime!
recipientCount: Int # Add a field to easily get the count
}
type Query {
broadcasts: [Broadcast!]! @requireAuth
broadcast(id: Int!): Broadcast @requireAuth
}
input CreateBroadcastInput {
message: String!
}
type Mutation {
# Mutation to create a broadcast record (doesn't send yet)
createBroadcast(input: CreateBroadcastInput!): Broadcast! @requireAuth
# Mutation to trigger the sending of a specific broadcast
sendBroadcast(id: Int!): Broadcast! @requireAuth
}
`
- `@requireAuth`: This Redwood directive ensures only authenticated users can access these operations. We'll set up basic auth later. If your app doesn't need user login (e.g., an internal tool), you can remove it, but securing API endpoints is generally recommended.
- Separation of concerns: `createBroadcast` just saves the message content; `sendBroadcast` actually triggers the interaction with Sinch.
3.2. Implement Services
Generate the corresponding service files:
yarn rw g service contact
yarn rw g service broadcast
Now, implement the logic within these services.
`api/src/services/contacts/contacts.ts`:
// api/src/services/contacts/contacts.ts
import type { QueryResolvers, MutationResolvers } from 'types/graphql'
import { validate } from '@redwoodjs/api'
import { db } from 'src/lib/db'
export const contacts: QueryResolvers['contacts'] = () => {
return db.contact.findMany()
}
export const contact: QueryResolvers['contact'] = ({ id }) => {
return db.contact.findUnique({
where: { id },
})
}
export const createContact: MutationResolvers['createContact'] = ({
input,
}) => {
// Basic validation example
validate(input.phoneNumber, 'Phone Number', {
presence: true,
// TODO: Add robust E.164 format validation here (e.g., using a dedicated library)
// Ensure the number starts with '+' and contains only digits after that.
})
return db.contact.create({
data: input,
})
}
export const updateContact: MutationResolvers['updateContact'] = ({
id,
input,
}) => {
// Add validation if necessary (e.g., for phoneNumber format if changed)
return db.contact.update({
data: input,
where: { id },
})
}
export const deleteContact: MutationResolvers['deleteContact'] = ({ id }) => {
// Consider implications: Should deleting a contact remove them from past broadcasts?
// Add logic here if needed (e.g., anonymize, check dependencies).
// For simplicity, we just delete the contact record.
return db.contact.delete({
where: { id },
})
}
`api/src/services/broadcasts/broadcasts.ts`:
// api/src/services/broadcasts/broadcasts.ts
import type { QueryResolvers, MutationResolvers, BroadcastResolvers } from 'types/graphql'
import { SinchClient } from '@sinch/sdk-core'
import { logger } from 'src/lib/logger'
import { db } from 'src/lib/db'
// Initialize Sinch Client
// TODO: Best practice - Move SinchClient initialization to a dedicated lib file (e.g., api/src/lib/sinch.ts) for better organization, reusability, and testability.
// Ensure SINCH_PROJECT_ID, SINCH_KEY_ID, SINCH_KEY_SECRET are set in .env
const sinchClient = new SinchClient({
projectId: process.env.SINCH_PROJECT_ID,
keyId: process.env.SINCH_KEY_ID,
keySecret: process.env.SINCH_KEY_SECRET,
})
// --- Queries ---
export const broadcasts: QueryResolvers['broadcasts'] = () => {
return db.broadcast.findMany({ orderBy: { createdAt: 'desc' } })
}
export const broadcast: QueryResolvers['broadcast'] = ({ id }) => {
return db.broadcast.findUnique({
where: { id },
})
}
// --- Mutations ---
export const createBroadcast: MutationResolvers['createBroadcast'] = ({
input,
}) => {
// Validate message length, content, etc.
if (!input.message || input.message.trim().length === 0) {
throw new Error('Broadcast message cannot be empty.')
}
// Add check for message length if needed (SMS limits)
return db.broadcast.create({
data: {
message: input.message,
status: 'PENDING', // Initial status
},
})
}
export const sendBroadcast: MutationResolvers['sendBroadcast'] = async ({
id,
}) => {
logger.info(`Attempting to send broadcast ID: ${id}`)
const broadcastToSend = await db.broadcast.findUnique({ where: { id } })
if (!broadcastToSend) {
throw new Error(`Broadcast with ID ${id} not found.`)
}
if (broadcastToSend.status !== 'PENDING' && broadcastToSend.status !== 'FAILED') {
logger.warn(`Broadcast ${id} is not in PENDING or FAILED state (current: ${broadcastToSend.status}). Skipping send.`)
throw new Error(`Broadcast already processed or in progress (Status: ${broadcastToSend.status}).`)
}
// Fetch all contacts.
// **WARNING:** This fetches *all* contacts at once and will cause performance issues or timeouts with large lists (> few hundred/thousand).
// Implement database-level batching (e.g., using Prisma's skip/take) or move to background jobs (See Section 7) for production scale.
const contacts = await db.contact.findMany({
select: { id: true, phoneNumber: true },
// TODO: Add filtering for opt-outs if implemented (e.g., where: { subscribed: true })
})
if (contacts.length === 0) {
logger.warn(`No contacts found to send broadcast ${id}.`)
await db.broadcast.update({
where: { id },
data: { status: 'FAILED', sentAt: new Date(), updatedAt: new Date() }, // Mark as failed if no recipients
})
throw new Error('No contacts available to send the broadcast.')
}
const recipientPhoneNumbers = contacts.map((c) => c.phoneNumber)
// Update broadcast status to SENDING
await db.broadcast.update({
where: { id },
data: { status: 'SENDING', sentAt: new Date() },
})
// --- Sinch API Call ---
try {
logger.info(`Sending broadcast ${id} to ${recipientPhoneNumbers.length} recipients via Sinch.`)
const sendRequest = {
sendSMSRequestBody: {
to: recipientPhoneNumbers,
from: process.env.SINCH_FROM_NUMBER, // Ensure this is set in .env and is E.164
body: broadcastToSend.message,
// Optional parameters (e.g., delivery_report, expire_at) can be added here
// delivery_report: 'summary', // Example: Request basic delivery report status via webhooks
},
}
// Validate FROM number format (basic check)
if (!process.env.SINCH_FROM_NUMBER || !/^\+\d+$/.test(process.env.SINCH_FROM_NUMBER)) {
throw new Error('Invalid or missing SINCH_FROM_NUMBER in environment variables. Must be E.164 format (e.g., +1234567890).');
}
const response = await sinchClient.sms.batches.send(sendRequest)
logger.info(`Sinch batch send initiated for broadcast ${id}. Batch ID: ${response.id}`)
// Update broadcast status to SENT
// IMPORTANT: 'SENT' status here means the batch was successfully *accepted* by the Sinch API.
// It does *not* guarantee delivery to the recipient's handset.
// Actual delivery status requires configuring Sinch Delivery Report Webhooks (See Section 9).
const updatedBroadcast = await db.broadcast.update({
where: { id },
data: { status: 'SENT' },
})
// OPTIONAL: Create BroadcastRecipient records for detailed tracking
const recipientData = contacts.map(contact => ({
broadcastId: id,
contactId: contact.id,
status: 'SENT', // Initial status after sending to Sinch API
sinchBatchId: response.id, // Store batch ID for potential future correlation
}));
await db.broadcastRecipient.createMany({
data: recipientData,
skipDuplicates: true, // In case of retries on the same broadcast ID
});
logger.info(`Broadcast ${id} marked as SENT.`)
return updatedBroadcast; // Return the updated broadcast record
} catch (error) {
logger.error(`Error sending broadcast ${id} via Sinch: ${error.message}`)
// Check for specific Sinch error details if available
if (error.response?.data) {
logger.error(`Sinch API Error Details: ${JSON.stringify(error.response.data)}`)
} else {
logger.error(error.stack) // Log full stack trace for non-API errors
}
// Update broadcast status to FAILED
await db.broadcast.update({
where: { id },
data: { status: 'FAILED' },
})
// OPTIONAL: Update BroadcastRecipient records to FAILED
// Note: This marks all recipients as failed if the batch *request* failed.
// Individual failures after successful submission require webhooks.
await db.broadcastRecipient.updateMany({
where: { broadcastId: id },
data: { status: 'FAILED', failedReason: `Batch send failed: ${error.message}` },
});
// Re-throw the error to be caught by GraphQL error handling
throw new Error(`Failed to send broadcast via Sinch: ${error.message}`)
}
}
// --- Field Resolver for recipientCount ---
// This calculates the count dynamically when querying a Broadcast
export const Broadcast: BroadcastResolvers = {
recipientCount: (_obj, { root }) => {
// Counts associated recipients using the join table
return db.broadcastRecipient.count({ where: { broadcastId: root.id } })
},
}
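Sinch's batch endpoint accepts many recipients per call, but there is a cap on the size of the `to` list (on the order of 1,000 destinations per batch; treat that figure as an assumption and verify it against current Sinch documentation). If your contact list can exceed the cap, the recipient list can be split before calling `sms.batches.send`. A hypothetical helper:

```javascript
// Split a recipient list into chunks no larger than the per-batch limit,
// so each chunk can be submitted as its own sinchClient.sms.batches.send call.
// The default of 1000 is an assumption; verify against current Sinch docs.
function chunkRecipients(phoneNumbers, maxPerBatch = 1000) {
  const chunks = []
  for (let i = 0; i < phoneNumbers.length; i += maxPerBatch) {
    chunks.push(phoneNumbers.slice(i, i + maxPerBatch))
  }
  return chunks
}
```

Each chunk would then be sent in its own request (ideally from a background worker, as discussed in Section 7), with per-chunk status recorded on the matching `BroadcastRecipient` rows.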
- Sinch client initialization: The `SinchClient` is instantiated using credentials from environment variables. (Recommendation: move it to a dedicated `lib` file for larger projects.)
- Error handling: Includes checks for existing broadcast status, fetching contacts, validating the `SINCH_FROM_NUMBER`, and a `try...catch` block around the `sinchClient.sms.batches.send` call. Failures update the broadcast status to `FAILED` and log detailed errors.
- Logging: Uses Redwood's built-in `logger` for informative messages, and logs Sinch API errors when available.
- Batch sending: Leverages `sinchClient.sms.batches.send`, which is designed for sending the same message to multiple recipients efficiently.
- Status updates: Updates the `Broadcast` status (`PENDING` -> `SENDING` -> `SENT`/`FAILED`). The `SENT` status indicates acceptance by Sinch, not final delivery.
- `BroadcastRecipient` creation (optional): Creates records linking contacts to the broadcast, storing the Sinch batch ID.
- Field resolver (`recipientCount`): Dynamically calculates the recipient count for a broadcast query using the join table.
- Scalability warning: `findMany()` without batching will not scale to large contact lists; see the warning in the code and Section 7.
4. Error Handling, Logging, and Retry Mechanisms
- Error handling strategy:
  - Service-level validation (e.g., checking broadcast status, message content, phone number format).
  - `try...catch` around the critical Sinch API call.
  - Update database status (`FAILED`) on errors.
  - Throw errors from services to let Redwood's GraphQL layer handle formatting the response to the client.
  - Use specific error messages for clarity, and log detailed error info (including Sinch API responses if available).
- Logging:
  - Use `logger.info`, `logger.warn`, and `logger.error` within services. Redwood configures the Pino logger by default.
  - Log key events: start of broadcast send, number of recipients, Sinch API call initiation, Sinch response (batch ID), success/failure status updates, and detailed error messages with stack traces or API error details.
  - In production, configure log shipping to a centralized logging service (e.g., Datadog, Logtail, Papertrail) for easier analysis.
- Retry mechanisms:
  - Simple retry: The current implementation allows manually retrying a `FAILED` broadcast by calling the `sendBroadcast` mutation again.
  - Automated retries (advanced): For more robustness, especially against transient network issues or brief Sinch API hiccups, implement exponential backoff using background jobs:
    - Modify the `sendBroadcast` mutation to queue a background job instead of sending directly (see Section 7).
    - Use a background job processor (such as BullMQ with Redis, or Redwood's `exec` command with Faktory/Temporal) that supports automatic retries with backoff.
    - Configure the job queue to retry the Sinch API call a few times (e.g., 3 retries with delays of 10s, 60s, 300s) when it fails with specific error types (network errors, 5xx errors from Sinch, 429 rate-limit errors).
    - If all retries fail, mark the broadcast as `FAILED`.
  - Note: Implementing full automated retries adds complexity; the manual retry capability covers basic needs.
5. Adding Security Features
- Authentication & authorization:
  - Setup: Use Redwood's auth generators. `dbAuth` is a simple starting point:

    yarn rw setup auth dbAuth  # Follow prompts (generate User model, etc.)
    yarn rw prisma migrate dev # Apply auth-related schema changes

  - Enforcement: The `@requireAuth` directive added to the SDLs in Section 3 now enforces that only logged-in users can perform contact/broadcast operations.
  - Role-based access (optional): If needed, implement roles (e.g., `ADMIN`, `USER`) using Redwood's RBAC features (`@requireAuth(roles: 'ADMIN')`) to restrict who can send broadcasts. See the Redwood RBAC docs.
- Input validation & sanitization:
  - Services: Perform validation within service functions before database operations or API calls (as shown in `createContact` and `createBroadcast`).
  - Phone numbers: Use a dedicated library for robust E.164 phone number validation and parsing to ensure correctness before saving or sending (as noted in the `createContact` TODO).
  - Message content: Sanitize message content if it includes user-generated input to prevent potential injection issues (though less common for SMS bodies than for HTML). Limit message length based on SMS standards (typically 160 GSM-7 characters or 70 UCS-2 characters per segment). Add checks in `createBroadcast`.
- Rate limiting:
  - API gateway: Implement rate limiting at the infrastructure level (e.g., Vercel, Netlify edge functions, AWS API Gateway) if deploying there.
  - Application level (advanced): For self-hosted or more granular control, use middleware with libraries like `rate-limiter-flexible` and Redis to limit requests per user or IP to your GraphQL endpoint, especially the `sendBroadcast` mutation.
- Sinch security: Store Sinch API keys securely in environment variables and never expose them client-side. Validate the `SINCH_FROM_NUMBER` format.
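The idea behind application-level rate limiting can be illustrated with a tiny in-memory fixed-window limiter (a sketch only: it resets on restart and doesn't share state across instances, which is exactly why Redis-backed libraries like `rate-limiter-flexible` are preferred in production):

```javascript
// Minimal in-memory fixed-window rate limiter (sketch).
// Allows up to `limit` calls per `windowMs` for each key (e.g., user ID or IP).
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map() // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key)
    if (!entry || now - entry.windowStart >= windowMs) {
      // Start a fresh window for this key.
      hits.set(key, { count: 1, windowStart: now })
      return true
    }
    entry.count++
    return entry.count <= limit
  }
}
```

A guard like `if (!isAllowed(context.currentUser.id)) throw new Error('Rate limit exceeded')` at the top of `sendBroadcast` would throttle repeated triggers per user.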
6. Handling Special Cases
- Phone number formatting: Strictly enforce E.164 format (`+` followed by country code and number, no spaces or dashes) for all phone numbers stored and sent to Sinch. Validate on input (see Section 5).
- Character encoding & message length: Be mindful of SMS character limits. Standard GSM-7 allows 160 characters per segment. Using non-GSM characters (emojis, some accented letters) switches the encoding to UCS-2, reducing the limit to 70 characters per segment. Long messages are split into multiple segments by carriers, potentially increasing costs. Inform users or truncate messages if necessary (validate in `createBroadcast`). The Sinch API handles segmentation, but costs are per segment.
- Opt-outs/consent management: Regulations like TCPA (US) and GDPR (EU) require managing user consent and honoring opt-out requests.
  - Implementation suggestion: Add a `subscribed` boolean field (defaulting to `true`) to the `Contact` model. Modify the `findMany` query in `sendBroadcast` to filter contacts (`where: { subscribed: true }`). Implement a mechanism to set `subscribed` to `false`, e.g., handling "STOP" replies via Sinch Inbound SMS webhooks (outside the scope of this sending guide). (Note: the schema modification and query updates are not shown in the provided code but would be necessary steps.)
- Duplicate phone numbers: The `Contact` model has `@unique` on `phoneNumber` to prevent duplicates. Handle the error Prisma throws during `createContact` when a number already exists by catching it and returning a user-friendly message.
- Internationalization: E.164 format handles country codes. Be aware of varying regulations and costs for sending SMS to different countries, and ensure your Sinch account is enabled for the destination countries.
7. Implementing Performance Optimizations
- Database queries:
  - Use `select` in Prisma queries (`findMany`, `findUnique`) to fetch only the necessary fields (as done in `sendBroadcast` for contacts).
  - Add database indexes (`@@index([field])` in `schema.prisma`) to frequently queried fields (e.g., `status` on `Broadcast`). The `@unique` on `phoneNumber` already creates an index.
- Batching: The current implementation uses Sinch's batch send (`sms.batches.send`), which is efficient for the API call itself. The bottleneck for large lists is often the database query (`db.contact.findMany()`).
- Background jobs (crucial for large scale):
  - Problem: Sending to thousands of contacts within a single serverless function invocation (common on Vercel/Netlify) can exceed timeout limits (e.g., 10-60 seconds) due to database fetch time and processing overhead, even if the Sinch API call itself is quick. The current `sendBroadcast` implementation, which fetches all contacts at once, is not scalable.
  - Solution: Decouple the broadcast trigger from the actual sending process.
    - Modify the `sendBroadcast` mutation so that, instead of calling Sinch directly, it marks the broadcast as `QUEUED` or `PENDING` and enqueues a background job, passing the `broadcastId`.
    - Set up a background job processor:
      - Redwood `exec`: Use `yarn rw exec <script-name>` with a task queue like Faktory or Temporal. See the RedwoodJS documentation for background jobs/workers.
      - Standalone queue: Use libraries like BullMQ (requires Redis) running in a separate Node.js process (e.g., on Render, Fly.io, or a small VM).
    - Create a worker script/function that:
      - Receives the `broadcastId` and fetches the broadcast details.
      - Fetches contacts in batches from the database (e.g., using Prisma's `skip` and `take`).
      - For each batch of contacts, calls the Sinch API (or aggregates batches if Sinch API limits allow).
      - Updates the overall broadcast status (`SENDING`, `SENT`, `FAILED`) and potentially individual `BroadcastRecipient` statuses.
      - Handles retries (as discussed in Section 4).
  - (Note: This guide implements the simpler direct-send approach, which is not suitable for large lists. Implementing background jobs with database batching is the recommended next step for production systems expecting to send to more than a few hundred contacts. Refer to RedwoodJS documentation and community resources for setting up tools like BullMQ or Faktory.)
- Caching: Not typically a major factor for this workflow unless contact lists are extremely static and queried very frequently on the frontend.
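The trigger/worker decoupling described above can be illustrated with an in-memory stand-in for a real queue (conceptual only; an in-process array loses jobs on restart, which is why BullMQ, Faktory, or Temporal back their queues with Redis or a server):

```javascript
// Conceptual sketch: the mutation enqueues only the broadcastId;
// a worker later drains the queue and does the heavy lifting in batches.
const queue = []

function enqueueBroadcast(broadcastId) {
  // This is all the mutation would do before returning quickly to the client.
  queue.push({ broadcastId, enqueuedAt: Date.now() })
}

async function drainQueue(processJob) {
  // In a real worker, processJob would fetch contacts in skip/take batches
  // and call the Sinch API per batch; here it is injected for illustration.
  const results = []
  while (queue.length > 0) {
    const job = queue.shift()
    results.push(await processJob(job))
  }
  return results
}
```

The key property is that `enqueueBroadcast` returns immediately, so the GraphQL request never waits on the contact fetch or the Sinch calls.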
8. Adding Monitoring, Observability, and Analytics
- Health checks: Add a simple health check endpoint to your Redwood API (e.g., a custom function or a simple GraphQL query) that verifies database connectivity. Monitor it with an uptime monitoring service (e.g., UptimeRobot, Better Uptime).
- Performance metrics:
  - Logging: Log the duration of the `sendBroadcast` service function, especially the database query and Sinch API call times.
  - APM: Integrate an Application Performance Monitoring tool (e.g., Datadog APM, Sentry Performance, New Relic) to automatically track transaction times (GraphQL resolvers, database queries, external API calls like Sinch).
- Error tracking:
  - Use Sentry (via the Redwood Sentry integration), Bugsnag, or a similar service. They automatically capture unhandled exceptions and provide stack traces, context, and alerting.
  - Ensure errors logged via `logger.error` (including Sinch API error details) are captured or shipped to your logging platform.
- Sinch dashboard & analytics: Monitor SMS delivery rates, costs, and errors directly within the Sinch Customer Dashboard. It provides analytics on sent messages, delivery status (if delivery reports are configured), and usage.
- Custom dashboards (optional): Use tools like Grafana (with Prometheus for metrics or Loki for logs) or your logging/APM provider's dashboarding features to visualize:
  - Number of broadcasts sent over time.
  - Broadcast success/failure rates (`SENT` vs. `FAILED` status in your DB).
  - Average broadcast processing time.
  - Error counts and types.
  - Number of contacts over time.
9. Troubleshooting and Caveats
- Common errors:
  - `401 Unauthorized` from Sinch: Incorrect `SINCH_PROJECT_ID`, `SINCH_KEY_ID`, or `SINCH_KEY_SECRET`. Double-check the values in `.env` or your production environment variables, and ensure the key is active in the Sinch dashboard.
  - `403 Forbidden` from Sinch: The API key might lack permissions for the SMS API, or the `SINCH_FROM_NUMBER` might not be provisioned correctly, enabled for the destination country, or properly formatted (needs E.164). Check Sinch dashboard settings (API keys, number configuration, allowed countries).
  - `400 Bad Request` from Sinch: Invalid phone number format in the `to` list (ensure all are E.164), message too long, missing required parameters (`from`, `to`, `body`), or invalid `SINCH_FROM_NUMBER` format. Check the error details logged from the Sinch SDK/API response.
  - Database connection errors: Verify `DATABASE_URL` is correct and that the database server is running and accessible. Check firewall rules.
  - Timeout errors (function/server): If `sendBroadcast` takes too long (usually due to fetching/processing large contact lists), implement background jobs (Section 7). This is the most common scaling issue.
- Sinch platform limitations:
  - Rate limits: Sinch applies rate limits (messages per second per number/account). Check your account limits; high-volume sending might require contacting Sinch support for increases. The SDK/API might return `429 Too Many Requests`. Implement delays, or use background job queues with rate-limiting features, if you hit limits.