AWS Lambda Cheat Sheet - Complete Serverless Guide 2025
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers.
Key Features
- Runs on demand - functions execute only when invoked
- Pay only for what you use - charged per invocation and compute time (measured in milliseconds)
- Automatic scaling - Lambda scales automatically based on incoming requests. No manual configuration required
Free Tier Benefits
- 1 million requests per month
- 400,000 GB-seconds per month
Supported Programming Languages
- Node.js
- Python
- Java
- C#
- Ruby
- Go
- Rust (via Custom Runtime API)
- Container images (up to 10 GB) via Amazon ECR
Memory and Performance
- Memory allocation: Up to 10 GB RAM per function
- More memory also means more CPU and network throughput, since these scale in proportion to the memory you allocate
Components of a Lambda Application
Function
The core component: your code packaged with its runtime (e.g., Python, Node.js). This is where the actual code that performs the task lives.
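A minimal sketch of what such a function looks like in Python (the handler name `lambda_handler` mirrors the console default; your module and handler names may differ):

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes for each event.

    `event` carries the trigger's payload; `context` exposes runtime
    metadata such as the request ID and remaining execution time.
    """
    print(f"Request ID: {context.aws_request_id}")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda"}),
    }
```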
Resource Configuration
Specifies how the function runs (a configuration sketch follows this list):
- Memory: 128 MB to 10 GB
- Timeout: up to 15 minutes
- Environment variables
- Ephemeral /tmp storage: 512 MB default, up to 10 GB
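These settings can be changed without redeploying code. A minimal boto3 sketch, assuming a hypothetical function named `my-function`:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",               # hypothetical name
    MemorySize=512,                           # 128 MB to 10,240 MB
    Timeout=60,                               # seconds, up to 900 (15 minutes)
    EphemeralStorage={"Size": 1024},          # /tmp size in MB, 512 to 10,240
    Environment={"Variables": {"STAGE": "dev"}},
)
```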
Event Source
The source of the events that invoke the function; a function can be triggered by multiple AWS services or by third-party services.
Trigger
An AWS service or resource that automatically invokes your Lambda function when an event occurs. For example, uploading a file to S3 can trigger a Lambda to process it. Triggers enable event-driven and automated architectures without manual intervention.
Layers
Let you include external libraries, dependencies, or custom runtimes without packaging them directly into your function code. They help keep your function clean, modular, and easier to manage or update.
Execution Role
An IAM role that gives your Lambda function permission to access AWS services (e.g., S3, DynamoDB). It ensures your function has only the minimum required access, keeping your environment secure and well-controlled.
Runtime
The execution environment where your Lambda function runs. AWS Lambda supports multiple languages like Python, Node.js, Java, Go, Ruby, and .NET, allowing you to choose the best one for your project. You can also create custom runtimes for other languages.
Log Stream
AWS Lambda automatically integrates with Amazon CloudWatch Logs, creating a log stream for each function execution. It helps you monitor performance, debug issues, and track events with detailed logs - all in a secure and centralized way.
Environment Variables
Let you pass configuration settings to your Lambda function without changing the code. They provide flexibility, allowing you to adapt behavior based on different environments (e.g., dev, test, prod).
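Inside the function, environment variables are read like any other process environment; here `TABLE_NAME` is a hypothetical variable set in the function's configuration:

```python
import os

# Falls back to a default if the variable is not set in this environment.
TABLE_NAME = os.environ.get("TABLE_NAME", "orders-dev")

def lambda_handler(event, context):
    return {"table": TABLE_NAME}
```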
Concurrency Controls
Let you manage how many instances of your Lambda function can run at the same time. They help you prevent system overload, optimize performance, and control costs effectively.
Lambda Functions Deep Dive
An AWS Lambda function is your code that runs in response to events in a fully managed, serverless environment. You can write functions in languages like Python, Node.js, or Java.
Each Function Includes
Function Code
The logic you write to process events. For example, reading an S3 file, transforming data, or calling an API.
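As a sketch, a handler that reads an object uploaded to S3 (assuming the function is wired to S3 event notifications and its execution role can read the bucket):

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 event notifications can deliver several records per invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()
        print(f"Read {len(body)} bytes from s3://{bucket}/{key}")
```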
Configuration
- Memory: 128 MB to 10 GB
- Timeout: up to 15 minutes
- Ephemeral storage: 512 MB to 10 GB
- Environment variables: encrypted at rest and optionally in transit
Permissions
IAM execution role to access other AWS services.
Auto Scaling
Runs only when triggered, scaling automatically per request.
Deployment Package
Includes your code and dependencies.
Versions
Immutable snapshots of your function with unique version numbers. Example ARN: arn:aws:lambda:us-east-1:123456789012:function:my-function:1
Aliases
Friendly pointers to specific function versions (e.g., :PROD, :STAGING). Example ARN: arn:aws:lambda:us-east-1:123456789012:function:my-function:PROD
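A minimal sketch of publishing a version and pointing a PROD alias at it with boto3 (the function name is hypothetical; use `update_alias` to repoint an alias that already exists):

```python
import boto3

lambda_client = boto3.client("lambda")

# Freeze the current $LATEST code and configuration as an immutable version.
version = lambda_client.publish_version(
    FunctionName="my-function",
    Description="release candidate",
)["Version"]

# Create a PROD alias pointing at that version.
lambda_client.create_alias(
    FunctionName="my-function",
    Name="PROD",
    FunctionVersion=version,
)
```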
Layers
ZIP archives containing shared code, libraries, or custom runtimes. You can attach up to 5 layers to a function. This keeps your main deployment package smaller and promotes reuse across multiple functions.
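A sketch of publishing a layer from a ZIP already uploaded to S3 and attaching it to a function (bucket, key, and names are hypothetical; note that the `Layers` parameter replaces the function's entire layer list):

```python
import boto3

lambda_client = boto3.client("lambda")

layer = lambda_client.publish_layer_version(
    LayerName="shared-utils",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "layers/shared-utils.zip"},
    CompatibleRuntimes=["python3.12"],
)

lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],   # replaces any previously attached layers
)
```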
Amazon EFS Integration
You can mount an Amazon Elastic File System (EFS) to your Lambda function, enabling persistent, shared storage. This is ideal for large files, machine learning models, or any workload needing high concurrency with stateful data access.
Invoking Lambda Functions
You can invoke a function synchronously (and wait for the response), or asynchronously.
Synchronous Invocation
Client receives response and error details in body and headers. Logs and traces go to CloudWatch. Example: API Gateway → Lambda → return HTTP response
Asynchronous Invocation
Lambda adds events to a queue. If an error occurs, Lambda retries up to 2 more times.
- May result in duplicate events, even without errors
- To avoid data loss, use a dead-letter queue (DLQ) to retain failed events. Example: upload a file to S3 → triggers Lambda → processes the file in the background (see the invocation sketch below)
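Both invocation types can be exercised directly with boto3; a sketch with a hypothetical function name:

```python
import json
import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"orderId": 123}).encode()

# Synchronous: block until the function returns (RequestResponse is the default).
resp = lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=payload,
)
print(resp["StatusCode"], resp["Payload"].read())

# Asynchronous: Lambda queues the event and responds immediately (HTTP 202).
lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)
```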
Event Source Mapping
Event Source Mapping (ESM) is a resource that connects a Lambda function to a stream or queue-based event source, enabling the function to process records from that source. It acts as an intermediary, polling the event source for new data and then invoking the Lambda function with batches of records.
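A sketch of creating an event source mapping for an SQS queue with boto3 (the queue ARN and function name are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the queue and invokes the function with batches of records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-function",
    BatchSize=10,
    Enabled=True,
)
```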
Supported Services
- Amazon DynamoDB Streams
- Amazon Kinesis
- Amazon MQ
- Amazon MSK (Kafka)
- Self-managed Kafka
- Amazon DocumentDB (MongoDB-compatible)
- Amazon Simple Queue Service (Amazon SQS)
Deploying Code with External Dependencies
AWS Lambda supports multiple ways to deploy your function code beyond the console (a CDK sketch follows this list):
- AWS CloudFormation / CDK / SAM: Define and deploy Lambda functions as Infrastructure as Code (IaC)
- Terraform: A popular IaC tool to manage Lambda deployments declaratively
- CI/CD Pipelines: Use GitHub Actions, GitLab CI/CD, or AWS CodePipeline for automated testing and deployment
- Container Images: Package your code as a Docker image and deploy from Amazon ECR, great for custom runtimes or complex dependencies
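As one possibility, a minimal CDK v2 stack in Python (the `src` directory, handler name, and stack name are hypothetical; `Runtime.PYTHON_3_12` needs a reasonably recent CDK release, and you deploy with `cdk deploy`):

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class MyLambdaStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "HelloFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.lambda_handler",        # app.py defines lambda_handler
            code=_lambda.Code.from_asset("src"), # local directory with the code
            memory_size=256,
            timeout=Duration.seconds(30),
        )

app = App()
MyLambdaStack(app, "MyLambdaStack")
app.synth()
```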
Concurrency Management
Concurrent Executions
- AWS Lambda can automatically scale to handle multiple function executions at the same time
- Default limit: 1,000 concurrent executions per AWS Region
- You can request a higher concurrency limit if needed
Reserved Concurrency
- You can set a reserved concurrency value for a specific function (e.g., 50); see the sketch after this list
- This guarantees that the function can scale up to that number of concurrent executions
- Prevents a single function from consuming all available concurrency in your AWS account
- Without reserved concurrency, a high-traffic function could throttle other functions
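A one-call sketch of reserving concurrency with boto3 (function name hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 50 concurrent executions for this function; this also caps it at 50.
lambda_client.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=50,
)
```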
Throttling Behavior
If the number of concurrent executions exceeds the limit:
- For synchronous invocations: Lambda returns HTTP 429 "Too Many Requests" error
- For asynchronous invocations: Lambda automatically retries execution for up to 6 hours using exponential backoff (starting from 1 second up to 5 minutes between retries)
Cold Starts
- A cold start occurs when Lambda creates a new instance of the function
- This may cause high initial latency due to code initialization, dependency loading, or database connection setup
- Cold starts are more noticeable in low-traffic or infrequently used functions
Provisioned Concurrency
- You can pre-initialize a number of Lambda instances using provisioned concurrency (see the sketch after this list)
- This eliminates cold starts and ensures low-latency responses
- You can manage provisioned concurrency using Application Auto Scaling
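A sketch of configuring provisioned concurrency with boto3; it must target a published version or an alias (here a hypothetical PROD alias), never $LATEST:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="PROD",                      # alias or version number
    ProvisionedConcurrentExecutions=20,    # environments kept initialized
)
```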
Lambda Function URL
A Lambda Function URL is a dedicated HTTPS endpoint for a Lambda function. You can configure it via the AWS Management Console, AWS CLI, or Lambda API.
Lambda provides two options to invoke functions over HTTP:
- Function URLs (simple and direct)
- Amazon API Gateway (advanced features)
Function URL Format
https://<url-id>.lambda-url.<region>.on.aws
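A sketch of creating a public Function URL with boto3 (function name hypothetical; with AuthType NONE a resource-based policy statement must also allow public invocation):

```python
import boto3

lambda_client = boto3.client("lambda")

url_config = lambda_client.create_function_url_config(
    FunctionName="my-function",
    AuthType="NONE",          # use "AWS_IAM" to require SigV4-signed requests
)
print(url_config["FunctionUrl"])

lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)
```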
Key Notes
- The generated URL is permanent and unique to each function
- You can only attach Function URLs to $LATEST version or aliases (not specific published versions)
- Function URLs are publicly accessible via the Internet only and do not support AWS PrivateLink
- They support IPv4 and IPv6 (dual-stack enabled)
- Access is controlled using resource-based policies
- CORS (Cross-Origin Resource Sharing) is supported
Unsupported AWS Regions
- ap-south-2 (Hyderabad)
- ap-southeast-4 (Melbourne)
- ap-southeast-5 (Malaysia)
- ap-east-2 (Taipei)
- ca-west-1 (Calgary)
- eu-south-2 (Spain)
- eu-central-2 (Zurich)
- il-central-1 (Tel Aviv)
- me-central-1 (UAE)
Configuring Lambda Function to Access VPC Resources
Required IAM Permissions
To allow a Lambda function to connect to resources inside a VPC, it needs permissions to create and manage Hyperplane ENIs (Elastic Network Interfaces).
Attach this managed policy: AWSLambdaVPCAccessExecutionRole
Or define these actions manually:
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DescribeSubnets
ec2:DeleteNetworkInterface
ec2:AssignPrivateIpAddresses
ec2:UnassignPrivateIpAddresses
Lambda uses these permissions when it creates or deletes the ENIs (at function creation or configuration update), not on every invocation.
Additional permissions for IAM user to configure the function in console:
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeVpcs
ec2:GetSecurityGroupsForVpc
Attaching Lambda Functions to an Amazon VPC
Specify:
- Subnet IDs (must be in the same VPC)
- Security Group IDs
Lambda creates ENIs in the subnets you specify. The function uses these ENIs for network access during execution.
Note: Choose private subnets if accessing internal resources, or public subnets with a NAT Gateway if external internet access is needed.
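A minimal sketch of attaching an existing function to a VPC with boto3 (subnet and security group IDs are hypothetical; all subnets must belong to the same VPC):

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```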
IPv6 Support
AWS Lambda supports IPv6 for functions that are configured in dual-stack subnets (IPv4 + IPv6).
Make sure:
- Your VPC and subnets support IPv6
- Security groups and route tables allow IPv6 traffic as needed
Lambda automatically assigns IPv6 addresses to the ENIs when the subnet supports IPv6.
Internet Access When Attached to a VPC
By default, Lambda functions lose internet access when attached to a VPC unless explicitly configured.
To enable internet access:
Option 1: For IPv4
- Place the function in private subnets whose route table sends outbound traffic to a NAT Gateway
- The NAT Gateway itself sits in a public subnet with a route to an Internet Gateway (IGW)
- Note: Lambda ENIs never receive public IPv4 addresses, so putting the function in a public subnet does not by itself provide internet access
Option 2: For IPv6
- Use dual-stack subnets
- Route IPv6 traffic to the Internet Gateway
- Ensure IPv6 security group rules allow outbound traffic
Lambda@Edge
Lambda@Edge lets you run functions at AWS edge locations triggered by CloudFront events, improving performance without managing global infrastructure.
Lambda@Edge Features
Supported Languages
- Node.js
- Python
Runs at
All four CloudFront lifecycle stages (a viewer-request sketch follows this list):
- Viewer Request
- Origin Request
- Origin Response
- Viewer Response
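A sketch of a viewer-request handler in Python that routes a slice of traffic to an alternate page (paths are hypothetical; returning the request lets CloudFront continue processing it):

```python
import hashlib

def lambda_handler(event, context):
    # CloudFront passes the request object inside the event record.
    request = event["Records"][0]["cf"]["request"]

    # Deterministically send roughly 10% of viewers to an experiment page.
    bucket = hashlib.md5(request["clientIp"].encode()).digest()[0] % 10
    if request["uri"] == "/index.html" and bucket == 0:
        request["uri"] = "/index-experiment.html"

    return request
```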
Deployment
- Deployed in us-east-1, replicated globally
- Latency: Up to 5–10 seconds, supports complex logic
Common Use Cases
- A/B testing
- Authentication & authorization
- Bot mitigation
- Personalized content
- Integrating AWS services (S3, DynamoDB, etc.)
- Real-time image processing
AWS Lambda SnapStart
SnapStart is a Lambda optimization feature that can reduce cold start latency by up to 10x. It is included at no additional cost for Java; Python and .NET functions incur charges for snapshot caching and restoration.
Supported Runtimes
- Java
- Python
- .NET
How Lambda Normally Works
- Initialize phase: Lambda sets up runtime, loads code and dependencies (often slow for Java)
- Invoke phase: Executes your function handler logic
- Shutdown phase: Releases resources
How SnapStart Works
When a new Lambda version is published:
- Lambda initializes the function once
- Takes a snapshot of memory and disk state
- Stores the pre-initialized snapshot
On future cold starts:
- Lambda restores from the snapshot
- Skips the initialization phase
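SnapStart is enabled per function and takes effect on the next published version; a boto3 sketch with a hypothetical function name (in practice, wait for the configuration update to finish before publishing):

```python
import boto3

lambda_client = boto3.client("lambda")

# Opt the function into SnapStart for published versions.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publishing a version triggers the one-time init and stores the snapshot.
lambda_client.publish_version(FunctionName="my-function")
```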
Result
- Faster cold starts
- Lower latency for end users
- Ideal for latency-sensitive applications like APIs and real-time data processing
AWS Lambda Pricing
| Category | Details |
| --- | --- |
| Free Tier | First 1M requests/month and first 400,000 GB-seconds of compute/month at no charge |
| Requests (after free tier) | $0.20 per 1M requests |
| Compute (after free tier) | $0.0000166667 per GB-second (roughly $1 per 60,000 GB-seconds, x86 in most regions) |
| Billing factors | Memory allocated × execution duration (billed per millisecond), plus request count |
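A rough sketch of how these factors combine, using hypothetical monthly figures (3M requests, 1 s average duration, 512 MB memory) and the x86 rate shown above:

```python
invocations = 3_000_000        # requests per month
avg_duration_s = 1.0           # average execution time in seconds
memory_gb = 0.5                # 512 MB allocated

gb_seconds = invocations * avg_duration_s * memory_gb          # 1,500,000 GB-s

request_cost = max(invocations - 1_000_000, 0) / 1_000_000 * 0.20
compute_cost = max(gb_seconds - 400_000, 0) * 0.0000166667

print(f"Requests: ${request_cost:.2f}, Compute: ${compute_cost:.2f}")
# Requests: $0.40, Compute: $18.33 -> roughly $18.73 per month
```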