Why AWS Lambda Doesn't Support ValueFrom for Environment Variables (And How to Deal With It)
If you’ve ever looked at ECS task definitions with envy from the Lambda side, you’re not alone.
If you use AWS Lambda for background workers, listeners, or scheduled tasks, you’ve likely dealt with secrets management. Secrets are typically stored in AWS Parameter Store, but CI/CD pipelines often fetch them at build time and bake them into .env files that ship with each Lambda package. When you decide to move to runtime Parameter Store loading—like ECS does natively with valueFrom—you discover a fundamental difference between how ECS and Lambda handle environment variables. One that can cause weeks of debugging throttling errors.
This is the story of that migration, the rate limiting nightmares, and how to ultimately find a balance.
The Starting Point: CI/CD-Generated .env Files
Before migration, secrets are already in AWS Parameter Store, but CI/CD pipelines fetch them at build time and generate .env files dynamically:
How It Works
The CI/CD pipeline:
- Authenticates to AWS
- Runs a script to fetch all parameters from Parameter Store
- Generates a .env file with all the values
- Builds the Lambda package with the .env file included
- Deploys the package
The Problems
- Build-time vs runtime - Variables are baked into Lambda packages at build time, not fetched at runtime
- Redeployment for rotation - Changing a secret in Parameter Store requires redeploying every Lambda
- Long CI/CD pipelines - The parameter fetching step adds time to every build
- Inconsistent state - If a deployment fails halfway, some Lambdas have new secrets, others have old ones
- No dynamic updates - Can’t update configuration without a full deployment cycle
The Dream: ECS-Style ValueFrom
When you look at how ECS handles this, you find the elegant solution you want:
# ECS Task Definition - Environment variables from Parameter Store
resource "aws_ecs_task_definition" "api" {
container_definitions = jsonencode([{
secrets = [
{
name = "DATABASE_PASSWORD"
valueFrom = "arn:aws:ssm:us-west-2:123456789012:parameter/myapp/prod/DATABASE_PASSWORD"
},
{
name = "API_KEY"
valueFrom = "arn:aws:ssm:us-west-2:123456789012:parameter/myapp/prod/API_KEY"
}
# ... 130+ more variables
]
}])
}
What happens with ECS:
- Task starts
- ECS agent fetches all parameters from Parameter Store
- Parameters are injected as environment variables
- Container starts with process.env.DATABASE_PASSWORD already set
- The application code doesn’t know or care about Parameter Store
This is beautiful. The container has zero awareness of where its environment variables come from. It’s pure environment variable injection, handled by the platform.
The Reality: Lambda Doesn’t Support ValueFrom
Here’s the painful truth: AWS Lambda does not support valueFrom or secrets in its environment variable configuration.
# Lambda Function - This is ALL you can do
resource "aws_lambda_function" "worker" {
function_name = "my-worker"
environment {
variables = {
# Only static values allowed
NODE_ENV = "production"
RELEASE_STAGE = "production"
# Cannot do: DATABASE_PASSWORD = valueFrom("arn:aws:ssm:...")
}
}
}
Lambda environment variables are:
- Static strings only - No references to Parameter Store, Secrets Manager, or any external source
- Set at deployment time - Not fetched at runtime
- Visible in the AWS Console - Anyone with Lambda access can see them
- 4KB total limit - All environment variables combined can’t exceed 4KB
Why This Matters
If you want Lambda to use Parameter Store secrets:
- Your application code must fetch them
- You need to handle authentication to SSM
- You need to handle caching (or not)
- You need to handle errors and retries
- Cold starts get slower - Every cold start potentially means API calls to Parameter Store
The Solution: Runtime Parameter Store Loading
Since Lambda can’t do valueFrom, you build it yourself in the application layer:
The Application Code
Create an env loader that fetches parameters during Lambda initialization:
// env.aws.loader.ts
import { SSMClient, GetParametersByPathCommand } from "@aws-sdk/client-ssm"
import { from, expand, reduce, tap, EMPTY, of } from "rxjs"
const AWS_ENV_MAP: Record<ReleaseStage, string> = {
sandbox: "sandbox",
development: "dev",
staging: "staging",
production: "prod",
}
export const getParameterPath = (releaseStage: ReleaseStage): string =>
`/myapp/${AWS_ENV_MAP[releaseStage]}/`
export const loadEnvVarsFromParameterStore: EnvVarsLoader = () => {
// Skip in test environments
if (isTestNodeEnv) return of({})
const ssmClient = createSsmClient()
const releaseStage = getReleaseStage()
const paramPath = getParameterPath(releaseStage)
console.log("loadEnvVarsFromParameterStore:", { releaseStage, paramPath })
console.time("loadedEnvVarsFromParameterStore")
return from(getParametersByPath(ssmClient, paramPath)).pipe(
// Handle pagination - GetParametersByPath returns max 10 at a time
expand(({ NextToken, Parameters }, index) => {
const count = Parameters?.length || 0
console.log("loadedEnvVarsFromParameterStore:", { index, count })
return NextToken ? getParametersByPath(ssmClient, paramPath, NextToken) : EMPTY
}),
// Merge all pages into a single object
reduce((acc, output) => {
return { ...acc, ...mapParametersToEnvVars(output) }
}, {} as EnvVars),
tap(() => {
console.timeEnd("loadedEnvVarsFromParameterStore")
}),
)
}
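The loader above calls a mapParametersToEnvVars helper that isn’t shown. A minimal sketch of what such a helper might look like (the naming convention — last path segment becomes the variable name — is an assumption, and the types below are a hand-rolled subset of the AWS SDK shapes so the snippet stays self-contained):

```typescript
// Minimal subset of the GetParametersByPath response shape.
interface SsmParameter {
  Name?: string
  Value?: string
}

interface GetParametersByPathOutput {
  Parameters?: SsmParameter[]
}

type EnvVars = Record<string, string>

// Hypothetical helper: turn one page of SSM parameters into an env var map,
// using the last path segment as the variable name,
// e.g. "/myapp/prod/DATABASE_PASSWORD" -> DATABASE_PASSWORD
const mapParametersToEnvVars = (output: GetParametersByPathOutput): EnvVars =>
  (output.Parameters ?? []).reduce<EnvVars>((acc, { Name, Value }) => {
    if (!Name || Value === undefined) return acc
    const key = Name.split("/").pop()
    return key ? { ...acc, [key]: Value } : acc
  }, {})
```

Because each page is reduced into a single accumulator, later pages win on duplicate names — acceptable here since parameter names under one path are unique.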
The Terraform Side
Every Lambda module automatically includes Parameter Store permissions:
# Lambda module - Required policies for all Lambdas
locals {
required_policies = [
# Parameter Store access policy
{
actions = [
"ssm:GetParametersByPath",
"ssm:GetParameters",
"ssm:GetParameter"
]
resources = [
"arn:aws:ssm:${local.aws_region}:${local.aws_account_id}:parameter/${local.app_parameters_path[var.release_stage]}/*"
]
},
# KMS decrypt policy for SecureString parameters
{
actions = ["kms:Decrypt"]
    resources = ["arn:aws:kms:${local.aws_region}:${local.aws_account_id}:alias/aws/ssm"]
}
]
}
The Disaster: Rate Exceeded
Everything works great in development. Then you deploy to production with 25+ Lambda functions, and chaos ensues:
ThrottlingException: Rate exceeded
Understanding Parameter Store Rate Limits
AWS Parameter Store has very low default throughput limits:
| Throughput setting | GetParameter / GetParameters | GetParametersByPath |
|---|---|---|
| Default | 40 TPS shared | 40 TPS shared |
| Higher throughput | 1,000 TPS shared | 1,000 TPS shared |
TPS = transactions per second, and the quota is shared across all Parameter Store API calls in the account and Region.
The Math Problem
Let’s do the math:
- You have 130+ parameters per environment
- GetParametersByPath returns max 10 parameters per call (AWS limit)
- So each Lambda cold start needs 13+ API calls to load all parameters
- You have 25+ Lambda functions
- During a deployment, all 25 functions cold start simultaneously
25 functions × 13 API calls = 325 API calls in seconds
With a 40 TPS limit, you’re 8x over the quota during deployments.
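The back-of-the-envelope math above can be captured as a small sanity check. The 10-per-call page size is the documented GetParametersByPath maximum; the parameter and function counts are this article’s example values:

```typescript
// GetParametersByPath returns at most 10 parameters per call.
const PAGE_SIZE = 10

// API calls one function needs during a single cold start.
const callsPerColdStart = (parameterCount: number): number =>
  Math.ceil(parameterCount / PAGE_SIZE)

// Worst case during a deployment stampede: every function
// cold starts at once and each pages through all parameters.
const stampedeCalls = (parameterCount: number, functionCount: number): number =>
  callsPerColdStart(parameterCount) * functionCount

// 130 params -> 13 calls per cold start; 25 functions -> 325 calls,
// roughly 8x a 40 TPS quota if they all land within a second.
```

Running the same calculator with 40 remaining secrets per function gives 100 calls, which is the "new math" the rest of this post works toward.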
Symptoms
- Random Lambda timeouts during deployment
- Intermittent failures across all functions
- Some functions starting fine, others failing
- No clear pattern—whichever function hits the rate limit fails
The Fix: Reducing Parameter Store API Calls
Several options exist:
Option 1: Enable High Throughput in Parameter Store Settings
AWS allows you to increase the throughput limit directly in the Parameter Store console:
- Go to AWS Systems Manager → Parameter Store → Settings
- Under Parameter Store throughput, select High throughput limit
- This increases your limit from 40 TPS to 1,000 TPS
This is a quick win and something you should enable immediately, but it doesn’t solve the fundamental issue of making too many API calls.
Option 2: Add Retry Logic with Adaptive Mode
Configure your SSM client with adaptive retry mode and increased max attempts:
import { SSMClient } from "@aws-sdk/client-ssm"
const ssmClient = new SSMClient({
region: process.env.AWS_REGION,
retryMode: "adaptive",
maxAttempts: 5,
})
Why adaptive mode?
- Automatically adjusts retry delays based on error responses
- Uses exponential backoff with jitter
- Handles throttling errors (429) gracefully
- Better than the default “standard” retry mode for high-concurrency scenarios
This helps significantly during deployment stampedes, but you may still see occasional throttling with many concurrent cold starts.
Option 3: Pass Non-Secrets as Lambda Environment Variables
This is the recommended approach. The key insight:
Not all 130+ parameters are secrets.
Many are just configuration:
- API endpoints (EXTERNAL_API_BASE_URL, WEBHOOK_URL)
- Feature flags (FEATURE_X_ENABLED, API_V2_THRESHOLD)
- Resource identifiers (SENTRY_DSN, ANALYTICS_APP_ID)
- Queue URLs (PROCESSING_QUEUE_URL, NOTIFICATION_QUEUE_URL)
These can safely be Lambda environment variables because:
- They’re not sensitive
- They’re already visible in Terraform code
- They don’t need rotation
Lambda environment variables have a 4KB total limit for all variables combined. Before moving parameters to environment variables, calculate your total size:
# Check the size of your env vars in bytes
printf 'KEY1=value1\nKEY2=value2\n...' | wc -c
If you’re close to the limit, you may need to be selective about which variables to pass directly.
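The same check can live in application code or a pre-deploy script. A rough sketch that approximates the map’s serialized size as KEY=value pairs (Lambda counts names and values together toward the 4KB quota; the exact accounting of separators is an assumption here, so leave yourself headroom):

```typescript
// Lambda's total environment variable quota, in bytes.
const LAMBDA_ENV_LIMIT_BYTES = 4096

// Approximate the serialized size of an env var map as KEY=value lines.
const envVarsByteSize = (vars: Record<string, string>): number =>
  Object.entries(vars).reduce(
    (total, [key, value]) => total + Buffer.byteLength(`${key}=${value}\n`, "utf8"),
    0,
  )

// True if the map plausibly fits within Lambda's 4KB quota.
const fitsInLambdaEnv = (vars: Record<string, string>): boolean =>
  envVarsByteSize(vars) <= LAMBDA_ENV_LIMIT_BYTES
```

Wiring a check like this into CI makes the 4KB ceiling an explicit build failure instead of a deploy-time surprise.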
A Note on Shared Code and Simplification
In our case, we pass all non-secret environment variables to every Lambda function. Why? Because we share a common environment validation method across all functions—the same code that validates required variables runs in every Lambda.
This is a simplification that trades some efficiency for consistency:
- ✅ Pros: Single source of truth, easier to maintain, no “missing variable” surprises
- ❌ Cons: Every Lambda gets variables it may not need, uses more of the 4KB budget
Future improvement: Split the validation logic and pass only the variables each Lambda actually needs. This requires more Terraform configuration but is more efficient for Lambda functions with specific, limited requirements.
The New Math
- Before: 130 params ÷ 10 per call = 13 API calls per cold start
- After: 40 secrets ÷ 10 per call = 4 API calls per cold start
25 functions × 4 API calls = 100 API calls. Spread over a few seconds of cold starts, that stays comfortably under the 40 TPS limit.
Implementation Changes
Terraform Lambda Module:
resource "aws_lambda_function" "this" {
function_name = var.lambda_name
environment {
variables = merge(
{
NODE_PATH = "/opt/nodejs/node_modules"
NODE_ENV = var.node_env
RELEASE_STAGE = var.release_stage
},
var.non_secret_env_vars # New: Pass non-secrets directly
)
}
}
Application Code:
export const loadEnvVarsFromParameterStore: EnvVarsLoader = () => {
if (isTestNodeEnv) return of({})
const ssmClient = createSsmClient()
const releaseStage = getReleaseStage()
// Only fetch the secrets path now
const secretsPath = `/myapp/${AWS_ENV_MAP[releaseStage]}/secrets/`
// ... rest of the loading logic
}
Bonus: Deployment Script with Retry Logic
For local development and debugging, create a script that handles Parameter Store throttling gracefully:
#!/bin/bash
# get-env-vars.sh - Fetches all env vars with exponential backoff
function GetParameterStoreValues() {
  local max_retries=5
  local backoff=2
  while true; do
    local attempt=1
    while [[ $attempt -le $max_retries ]]; do
      if params=$(aws ssm get-parameters-by-path \
        --path "$paramNamePrefix" \
        --region "$REGION" \
        --recursive \
        --with-decryption \
        $tokenParam); then
        break
      fi
      # Add jitter to prevent thundering herd
      local backoff_with_jitter
      backoff_with_jitter=$(add_jitter "$backoff")
      echo "🔄 Throttling detected, retrying in $backoff_with_jitter seconds..." >&2
      sleep "$backoff_with_jitter"
      backoff=$((backoff * 2))
      attempt=$((attempt + 1))
    done
    # Handle pagination...
  done
}
The jitter is crucial—without it, multiple parallel processes retry at exactly the same time and hit the rate limit again.
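In application code this idea is usually implemented as "full jitter": draw a uniformly random delay between zero and the current backoff ceiling, and double the ceiling (up to a cap) on each attempt. A sketch, with illustrative defaults and an injectable random source for testability:

```typescript
// Full-jitter backoff: delay is uniform in [0, min(cap, base * 2^attempt)).
const backoffWithJitter = (
  attempt: number,
  baseMs = 200,
  capMs = 10_000,
  random: () => number = Math.random,
): number => {
  // Exponential ceiling, clamped so retries never wait longer than capMs.
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt)
  // Uniform draw spreads concurrent retriers across the whole window.
  return Math.floor(random() * ceiling)
}
```

Because every caller draws independently from the full window, 25 Lambdas throttled in the same second retry at 25 different moments instead of stampeding again in lockstep.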
What We Wish AWS Would Add
If you could ask AWS for one Lambda feature, it would be:
# DREAM: Lambda with valueFrom support
resource "aws_lambda_function" "worker" {
function_name = "my-worker"
environment {
variables = {
NODE_ENV = "production"
}
# Please, AWS, add this:
secrets = [
{
name = "DATABASE_PASSWORD"
valueFrom = "arn:aws:ssm:us-west-2:123456789012:parameter/myapp/prod/DATABASE_PASSWORD"
}
]
}
}
This would:
- Eliminate application-level SSM code for Lambda
- Move the API calls to Lambda’s init phase (AWS’s concern, not ours)
- Allow AWS to optimize and cache across function instances
- Provide parity with ECS, EKS, and other compute services
Lessons Learned
1. Build-Time vs Runtime: A Fundamental Shift
Moving from CI/CD-generated .env files to runtime Parameter Store loading isn’t just a technical change—it’s a different operational model. Runtime loading means faster secret rotation but adds cold start latency.
2. ECS and Lambda Are Not Equals
Despite both being “serverless” (in that you don’t manage servers), they have fundamentally different capabilities. ECS gets valueFrom for free; Lambda makes you build it yourself.
3. Rate Limits Compound with Scale
40 TPS sounds reasonable until you have 25 functions doing 13 API calls each. Always calculate your worst-case scenario (deployment stampede).
4. Not Everything Needs to Be a Secret
Separating secrets from configuration reduces API calls and simplifies debugging (you can see non-secret config in the Lambda console).
5. Build Resilience for AWS API Limits
Exponential backoff with jitter isn’t optional—it’s required for any production system using AWS APIs at scale.
The Comparison: ECS vs Lambda Environment Variables
| Capability | ECS | Lambda |
|---|---|---|
| Static environment variables | ✅ | ✅ |
| valueFrom Parameter Store | ✅ | ❌ |
| valueFrom Secrets Manager | ✅ | ❌ |
| Automatic secret injection | ✅ | ❌ |
| Application code for secrets | Not needed | Required |
| Cold start impact | None | +200-500ms |
| API call rate limits | AWS handles | You handle |
Wrapping Up
Moving from build-time .env generation to runtime Parameter Store loading is the right decision for operational flexibility—secrets can now be rotated without redeploying Lambdas. But Lambda’s lack of valueFrom support makes it more complex than expected.
If you’re planning a similar migration:
- Audit your parameters - Separate secrets from configuration
- Calculate your API call math - Parameters ÷ 10 × function count
- Implement retry with backoff - You will hit rate limits
- Consider passing non-secrets as Lambda env vars - Reduces API calls dramatically
- Watch your cold start times - SSM calls add latency
The Lambda team, if you’re reading this: please add valueFrom support. ECS has had it for years. We’d love to stop writing SSM loading code in every Lambda-based project.
Have you dealt with similar challenges? I’d love to hear your solutions. Find me on LinkedIn.