Fix: SST Not Working — Deploy Failing, Bindings Not Linking, or Lambda Functions Timing Out
Quick Answer
How to fix SST (Serverless Stack) issues — resource configuration with sst.config.ts, linking resources to functions, local dev with sst dev, database and storage setup, and deployment troubleshooting.
The Problem
sst dev starts but the linked resource is undefined:
import { Resource } from 'sst';

export async function handler() {
  const bucketName = Resource.MyBucket.name;
  // Error: Cannot read properties of undefined (reading 'name')
}

Or deployment fails with an AWS error:
npx sst deploy --stage production
# Error: User: arn:aws:iam::123456:user/dev is not authorized to perform: cloudformation:CreateStack

Or the function deploys but times out:
Task timed out after 10.00 seconds

Or sst dev can’t connect to the live environment:
Error: Could not connect to the IoT endpoint

Why This Happens
SST (Ion) is an Infrastructure-as-Code framework that deploys to AWS using Pulumi under the hood. It provides a streamlined developer experience for building full-stack serverless apps:
- Resources must be linked to functions — SST’s link property connects infrastructure resources (buckets, databases, queues) to Lambda functions. Without linking, the function has no IAM permissions and no environment variables for the resource. The Resource import reads linked values from environment variables set at deploy time.
- AWS credentials must have sufficient permissions — SST creates CloudFormation stacks, S3 buckets, Lambda functions, API Gateway endpoints, and more. The deploying user needs broad IAM permissions. Restricted IAM users get authorization errors.
- sst dev uses IoT for live Lambda — during local development, SST routes Lambda invocations to your local machine through AWS IoT Core. This requires IoT permissions and stable connectivity. Firewalls or VPNs can block the WebSocket connection.
- Lambda defaults to a 10-second timeout — SST functions default to 10 seconds and 1024 MB memory. Long-running operations (database migrations, file processing) need higher limits explicitly configured.
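The first point can be illustrated with a small sketch. SST injects each linked resource's values into the function's environment at deploy time (in current versions as `SST_RESOURCE_*` variables, though the exact naming is an internal detail that may change); the `Resource` proxy is, roughly, doing something like this. `readLinkedValue` is a hypothetical helper for illustration, not SST's actual implementation:

```typescript
// Hypothetical helper showing how linked values reach a function at runtime.
// The SST_RESOURCE_* naming is an internal SST detail and may change between versions.
function readLinkedValue(resourceName: string): Record<string, string> | undefined {
  const raw = process.env[`SST_RESOURCE_${resourceName}`];
  return raw ? JSON.parse(raw) : undefined;
}

// Simulate what SST would set at deploy time for a linked bucket:
process.env['SST_RESOURCE_MyBucket'] = JSON.stringify({ name: 'my-app-mybucket-abc123' });
console.log(readLinkedValue('MyBucket')?.name); // 'my-app-mybucket-abc123'
```

This is why an unlinked resource shows up as undefined at runtime: the environment variable was simply never set for that function.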
Fix 1: Configure sst.config.ts
npx sst@latest init

// sst.config.ts — SST Ion configuration
export default $config({
  app(input) {
    return {
      name: 'my-app',
      removal: input?.stage === 'production' ? 'retain' : 'remove',
      home: 'aws',
      providers: {
        aws: {
          region: 'us-east-1',
        },
      },
    };
  },
  async run() {
    // S3 Bucket
    const bucket = new sst.aws.Bucket('MyBucket', {
      access: 'public', // Public read access
    });

    // DynamoDB Table
    const table = new sst.aws.Dynamo('MyTable', {
      fields: {
        pk: 'string',
        sk: 'string',
        gsi1pk: 'string',
        gsi1sk: 'string',
      },
      primaryIndex: { hashKey: 'pk', rangeKey: 'sk' },
      globalIndexes: {
        gsi1: { hashKey: 'gsi1pk', rangeKey: 'gsi1sk' },
      },
    });

    // Secret values
    const dbUrl = new sst.Secret('DatabaseUrl');
    const apiKey = new sst.Secret('ApiKey');

    // API with linked resources
    const api = new sst.aws.ApiGatewayV2('MyApi');
    api.route('GET /users', {
      handler: 'src/functions/users.list',
      link: [table, dbUrl], // Link resources to this function
    });
    api.route('POST /upload', {
      handler: 'src/functions/upload.handler',
      link: [bucket, apiKey],
      timeout: '30 seconds',
      memory: '512 MB',
    });

    // Next.js frontend
    const site = new sst.aws.Nextjs('MySite', {
      link: [api, bucket, table],
      environment: {
        NEXT_PUBLIC_API_URL: api.url,
      },
    });

    return {
      api: api.url,
      site: site.url,
      bucket: bucket.name,
    };
  },
});

Fix 2: Link and Access Resources
// src/functions/users.ts — access linked resources
import { Resource } from 'sst';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function list() {
  // Resource.MyTable.name is available because of the `link` property
  const result = await client.send(new QueryCommand({
    TableName: Resource.MyTable.name,
    KeyConditionExpression: 'pk = :pk',
    ExpressionAttributeValues: { ':pk': 'USER' },
  }));
  return {
    statusCode: 200,
    body: JSON.stringify(result.Items),
  };
}

export async function create(event: any) {
  const body = JSON.parse(event.body);
  await client.send(new PutCommand({
    TableName: Resource.MyTable.name,
    Item: {
      pk: 'USER',
      sk: `USER#${body.id}`,
      name: body.name,
      email: body.email,
      createdAt: new Date().toISOString(),
    },
  }));
  return { statusCode: 201, body: JSON.stringify({ created: true }) };
}

// src/functions/upload.ts — S3 upload
import { Resource } from 'sst';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

export async function handler(event: any) {
  const { filename, contentType } = JSON.parse(event.body);
  // Generate a pre-signed upload URL
  const command = new PutObjectCommand({
    Bucket: Resource.MyBucket.name,
    Key: `uploads/${filename}`,
    ContentType: contentType,
  });
  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 3600 });
  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl }),
  };
}

// Access secrets
import { Resource } from 'sst';

const dbUrl = Resource.DatabaseUrl.value; // Secret value
const apiKey = Resource.ApiKey.value;

# Set secret values
npx sst secret set DatabaseUrl "postgres://user:pass@host/db"
npx sst secret set ApiKey "sk_live_abc123"
# Set per stage
npx sst secret set DatabaseUrl "postgres://..." --stage production

Fix 3: Local Development with sst dev
# Start local development
npx sst dev
# This:
# 1. Deploys infrastructure to AWS (real DynamoDB, S3, etc.)
# 2. Routes Lambda invocations to your local machine
# 3. Watches for code changes and hot-reloads
# Start with a specific stage
npx sst dev --stage dev
# With a specific AWS profile
npx sst dev --profile my-aws-profile

// sst.config.ts — dev-specific configuration
export default $config({
  async run() {
    const isProd = $app.stage === 'production';

    const table = new sst.aws.Dynamo('MyTable', {
      fields: { pk: 'string', sk: 'string' },
      primaryIndex: { hashKey: 'pk', rangeKey: 'sk' },
      // Enable deletion protection only in production
      transform: {
        table: {
          deletionProtection: isProd,
        },
      },
    });

    // Different config per stage
    const api = new sst.aws.ApiGatewayV2('MyApi');
    api.route('GET /health', 'src/functions/health.handler');

    // Production-only route
    if (isProd) {
      api.route('GET /users', {
        handler: 'src/functions/users.list',
        link: [table],
      });
    }
  },
});

Fix 4: Database Integration
// sst.config.ts — RDS (Postgres or MySQL)
export default $config({
  async run() {
    const vpc = new sst.aws.Vpc('MyVpc');

    const database = new sst.aws.Postgres('MyDatabase', {
      vpc,
      scaling: {
        min: '0.5 ACU', // Scale down when idle
        max: '4 ACU',
      },
    });

    const api = new sst.aws.ApiGatewayV2('MyApi');
    api.route('GET /users', {
      handler: 'src/functions/users.list',
      link: [database],
      vpc, // Function must be in the same VPC
    });
  },
});

// src/functions/users.ts — access RDS
import { Resource } from 'sst';
import { drizzle } from 'drizzle-orm/aws-data-api/pg';
import { RDSDataClient } from '@aws-sdk/client-rds-data';
import * as schema from '../db/schema';

const client = new RDSDataClient({});
const db = drizzle(client, {
  database: Resource.MyDatabase.database,
  secretArn: Resource.MyDatabase.secretArn,
  resourceArn: Resource.MyDatabase.clusterArn,
  schema,
});

export async function list() {
  const users = await db.select().from(schema.users);
  return {
    statusCode: 200,
    body: JSON.stringify(users),
  };
}

Fix 5: Queues and Cron Jobs
// sst.config.ts
export default $config({
  async run() {
    // SQS Queue with a subscriber (`table` is the Dynamo table from Fix 1)
    const queue = new sst.aws.Queue('MyQueue');
    queue.subscribe({
      handler: 'src/functions/worker.handler',
      link: [table],
      timeout: '5 minutes',
    });

    // API route that publishes to the queue
    const api = new sst.aws.ApiGatewayV2('MyApi');
    api.route('POST /jobs', {
      handler: 'src/functions/enqueue.handler',
      link: [queue],
    });

    // Cron job
    new sst.aws.Cron('DailyReport', {
      schedule: 'rate(1 day)', // Or: cron(0 9 * * ? *)
      job: {
        handler: 'src/functions/report.handler',
        link: [table],
        timeout: '5 minutes',
      },
    });
  },
});

// src/functions/enqueue.ts — publish to queue
import { Resource } from 'sst';
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

export async function handler(event: any) {
  const body = JSON.parse(event.body);
  await sqs.send(new SendMessageCommand({
    QueueUrl: Resource.MyQueue.url,
    MessageBody: JSON.stringify({
      type: 'process-image',
      imageKey: body.imageKey,
    }),
  }));
  return { statusCode: 202, body: JSON.stringify({ queued: true }) };
}

// src/functions/worker.ts — process queue messages
export async function handler(event: any) {
  for (const record of event.Records) {
    const message = JSON.parse(record.body);
    console.log('Processing:', message.type, message.imageKey);
    await processImage(message.imageKey); // processImage is your application logic
  }
}

Fix 6: Deploy to Production
# Deploy to production
npx sst deploy --stage production
# Deploy with specific profile
npx sst deploy --stage production --profile prod-aws
# Remove a stage (deletes all resources)
npx sst remove --stage dev
# View outputs (URLs, resource names)
npx sst output --stage production
# Open the SST console (web dashboard)
npx sst console

CI/CD deployment (GitHub Actions):
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx sst deploy --stage production
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Still Not Working?
Resource.X is undefined — the resource isn’t linked to the function. Add it to the link array: link: [bucket, table]. Every resource accessed via Resource.* must be explicitly linked. Linking sets environment variables and IAM permissions automatically.
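If you want this failure mode to be explicit rather than a vague TypeError, a small guard at the top of the handler helps. Note that requireLinked is a hypothetical helper sketched here, not part of SST's API:

```typescript
// requireLinked is a hypothetical helper, not part of SST's API.
// It turns "Cannot read properties of undefined" into an actionable error.
function requireLinked<T>(value: T | undefined, name: string): T {
  if (value === undefined) {
    throw new Error(
      `Resource "${name}" is not linked to this function; add it to the route's link array`
    );
  }
  return value;
}

// Usage in a handler (assumes MyBucket is declared and linked in sst.config.ts):
// const bucketName = requireLinked(Resource.MyBucket?.name, 'MyBucket');
```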
“User is not authorized” on deploy — SST needs broad IAM permissions to create CloudFormation stacks, Lambda functions, API Gateway, S3, DynamoDB, etc. For initial setup, use an IAM user with AdministratorAccess. For production, create a scoped policy based on the resources SST creates.
sst dev hangs or can’t connect — SST uses AWS IoT Core for live Lambda. Ensure your AWS credentials are valid and have IoT permissions. VPNs or corporate firewalls often block WebSocket connections to IoT endpoints. Try disconnecting from VPN.
Lambda timeout at 10 seconds — increase the timeout in the route or function config: timeout: '60 seconds'. For long-running tasks, use a queue pattern instead — publish to SQS and process asynchronously with a higher timeout subscriber.
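For work that may not fit in the configured window, the standard Lambda context exposes getRemainingTimeInMillis(); a guard like the following lets a handler hand remaining work off to a queue before being killed mid-task. hasTimeFor is a hypothetical helper built on that API:

```typescript
// Minimal shape of the standard Lambda context we rely on.
interface LambdaContextLike {
  getRemainingTimeInMillis(): number;
}

// hasTimeFor is a hypothetical helper: true if the invocation still has
// taskMs of budget left, plus a safety buffer for cleanup and response.
function hasTimeFor(context: LambdaContextLike, taskMs: number, bufferMs = 2000): boolean {
  return context.getRemainingTimeInMillis() > taskMs + bufferMs;
}

// In a handler:
// if (!hasTimeFor(context, 5000)) {
//   // enqueue the remaining work to SQS instead of risking a timeout
// }
```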
For related serverless issues, see Fix: Wrangler Not Working and Fix: Inngest Not Working.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.