Metadata-Version: 2.4
Name: deployless
Version: 0.1.3
Summary: Compile Flask/FastAPI apps to AWS SAM serverless
Author-email: Antonio Rodriguez <contact@antoniorodriguez.dev>
License-Expression: Apache-2.0
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: click>=8.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: flask; extra == "dev"
Requires-Dist: boto3; extra == "dev"
Requires-Dist: build>=1.4.0; extra == "dev"
Requires-Dist: twine>=6.2.0; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Dynamic: license-file

# deployless

**deployless** is a compiler that converts Flask applications (and in the future FastAPI) into AWS SAM templates ready to deploy as serverless Lambda functions. It does not require rewriting your app: simply add configuration annotations to your `routes.py` files and run `deployless build`.

---

## Table of Contents

1. [What is deployless](#what-is-deployless)
2. [Installation](#installation)
3. [deployless.yaml reference](#deploylessyaml-reference)
4. [dpl.configure() in routes.py](#dplconfigure-in-routespy)
5. [AWS Resources](#aws-resources)
   - [DynamoDB](#dynamodb)
   - [S3](#s3)
   - [SQS](#sqs)
   - [KMS](#kms)
   - [SSM Parameter Store](#ssm-parameter-store)
6. [@dpl.cron() — Scheduled Lambdas](#dplcron--scheduled-lambdas)
7. [@dpl.route() — Split Lambdas per route](#dplroute--split-lambdas-per-route)
8. [@dpl.lambda_function() — Standalone Lambdas](#dpllambda_function--standalone-lambdas)
9. [Auto-detection of resources](#auto-detection-of-resources)
10. [.env file and secrets](#env-file-and-secrets)
11. [Flask app initialization (init_app)](#flask-app-initialization-init_app)
12. [CLI commands](#cli-commands) (`init`, `build`, `check`, `validate`, `deploy`, `clean`, `info`, `secrets`)
13. [Project structure](#project-structure)
14. [Full example](#full-example)

---

## What is deployless

deployless takes a Flask project organized by features and generates:

- A `template.yaml` for AWS SAM with one Lambda function per feature (and optionally one per specific route).
- A `.dist/` folder with the packaged code for each Lambda, including an auto-generated `bootstrap.py` and a merged `requirements.txt`.
- CloudWatch Log Groups with configurable retention for each function.

### Mental model

```
app/features/users/routes.py   →   UsersFunction (Lambda)
app/features/auth/routes.py    →   AuthFunction  (Lambda)
app/features/tenant/routes.py  →   TenantFunction (Lambda)
```

Each feature lives in its own Lambda. If a specific endpoint needs a different configuration (more memory, longer timeout), you can "split" it into its own Lambda with `@dpl.route()`.

### Compilation flow

```
deployless build
  │
  ├── 1. Reads deployless.yaml
  ├── 2. Discovers app/features/*/routes.py
  ├── 3. Imports each routes.py (extracts Blueprints and routes)
  ├── 4. Reads metadata from dpl.configure(), @dpl.cron(), @dpl.route()
  ├── 5. Validates (memory, timeout, duplicate routes, schedules, etc.)
  ├── 6. Generates .dist/{Feature}Function/ for each Lambda
  └── 7. Writes template.yaml
```

---

## Installation

```bash
pip install deployless

# Or with uv
uv add deployless
```

### Runtime dependency in each Lambda

Each generated Lambda needs `aws-wsgi` to adapt Flask to the API Gateway event format. deployless adds it automatically to the `requirements.txt` of each `.dist/` package — you do not need to install it manually.

---

## Quick Start

```bash
# 1. Initialize a new project (interactive wizard + app scaffolding)
deployless init --app

# 2. Create your first feature
mkdir -p app/features/hello
cat > app/features/hello/routes.py << 'EOF'
from flask import Blueprint

hello_bp = Blueprint("hello", __name__)

@hello_bp.route("/hello")
def hello():
    return {"message": "Hello from deployless!"}
EOF

# 3. Deploy
deployless deploy
```

`deployless init` asks for project name, stage, and runtime (with sensible defaults — just press Enter). The `--app` flag also creates the app structure (`app/__init__.py`, `app/features/`, `app/shared/`), `run.py` for local development, and a `.gitignore`.

### Expected project structure

This is what your project should look like for deployless to work. `deployless init --app` generates the base structure — you only need to add your features.

```
my-project/
├── deployless.yaml              # Project config (created by deployless init)
├── requirements.txt             # Global dependencies
├── run.py                       # Local development: python run.py
├── .gitignore
├── app/
│   ├── __init__.py              # App factory: create_app() → Flask
│   ├── features/                # One folder per feature = one Lambda per feature
│   │   ├── hello/
│   │   │   └── routes.py        # REQUIRED — Flask Blueprint with routes
│   │   ├── users/
│   │   │   ├── routes.py        # Blueprint + dpl.configure() + resources
│   │   │   ├── use_cases/       # Business logic (optional subdirectories)
│   │   │   ├── repositories/
│   │   │   ├── schemas/
│   │   │   └── requirements.txt # Optional — extra deps for this feature only
│   │   └── orders/
│   │       └── routes.py
│   └── shared/                  # Shared code — copied into ALL Lambdas
│       ├── decorators/
│       ├── errors/
│       └── config.py
```

**Key rules:**
- Each feature **must** have a `routes.py` with at least one Flask Blueprint
- **Features cannot import from each other.** Each feature is packaged into its own Lambda and has no access to other features' code. If two features need the same function, model, or utility, put it in `app/shared/`
- `app/shared/` is copied into every Lambda — use it for code shared across features
- `app/__init__.py` with `create_app()` is recommended — deployless uses it automatically for CORS, error handlers, middleware, etc.
- Directories starting with `_` (e.g. `__pycache__`) are ignored during discovery

---

## deployless.yaml reference

Create this file at the project root (at the same level as `requirements.txt`). All fields are optional; default values are indicated.

```yaml
# Project name
name: my-app

# Cloud provider — only "aws" is supported for now
provider: aws

# Deployment stage. Can be overridden with --stage in the CLI.
# Used for: API Gateway StageName, APP_STAGE env var in all Lambdas,
# and samconfig.toml stack prefix.
# Example: deployless deploy --stage prod  (overrides this value without editing the file)
stage: dev

# Tags applied to all CloudFormation resources
tags:
  Project: my-app
  Environment: production

# Paths to the key directories of the project
paths:
  features: app/features    # Directory where features live
  shared: app/shared         # Shared code (copied into each Lambda)

# Global config for all Lambda functions
globals:
  runtime: python3.13        # Lambda runtime
  memory: 256                # MB (128–10240)
  timeout: 30                # Seconds (1–900)
  log_retention: 14          # Retention in CloudWatch (days)
                             # Valid values: 1,3,5,7,14,30,60,90,120,
                             # 150,180,365,400,545,731,1096,1827,3653

# API Gateway configuration
api:
  endpoint_type: REGIONAL    # REGIONAL | EDGE | PRIVATE

  # CORS — only needed if you don't use create_app() in app/__init__.py.
  # When create_app() is present, CORS is inherited automatically from it.
  # This config is used as fallback when the app factory is unavailable.
  cors:
    allow_origin: "*"          # Or a list: ["https://my-app.com"]
    allow_methods: [GET, POST, PUT, DELETE, OPTIONS]
    allow_headers: [Content-Type, Authorization, X-API-Key]
    max_age: 3600              # Seconds the browser caches the preflight
    # allow_credentials: true  # Not compatible with allow_origin: "*"

  # Global API Gateway authentication
  # (see "API Gateway Authentication" section for details)
  auth:
    type: cognito              # cognito | lambda | iam
    user_pool_arn: "arn:aws:cognito-idp:us-east-1:123456789:userpool/us-east-1_ABC"
    name: CognitoAuthorizer    # Optional
    scopes: []                 # Optional

  # API Keys
  api_keys: true               # true = generate a new key | "key-id" = use existing

  # Rate limiting (requires api_keys)
  usage_plan:
    rate: 10000                # Requests/second
    burst: 2000                # Maximum peak
    quota: 1000000             # Optional — total requests
    period: DAY                # DAY | WEEK | MONTH (required if quota is set)

  # Custom domain (see "Custom Domain" section below for details)
  domain:
    domain_name: api.my-app.com
    base_path: /v1             # Optional

  # MIME types that API Gateway treats as binary (non-UTF-8)
  binary_media_types:
    - image/png
    - image/jpeg
    - application/octet-stream

  # Compress responses larger than N bytes
  minimum_compression_size: 1024

# Global environment variables injected into ALL functions
env:
  APP_ENV: production
  LOG_LEVEL: INFO

# .env file — environment variables and secrets
# Normal variables are injected as env vars in all Lambdas.
# Variables with the SECRET_ prefix are pushed to SSM Parameter Store as String
# and injected as dynamic references {{resolve:ssm:...}}.
env_file: .env.production

# KMS key to encrypt secrets in SSM (optional).
# If not specified, SSM uses the AWS-managed key (aws/ssm).
# Accepts alias ("my-app/secrets") or key ID / ARN.
secrets_kms: my-app/secrets

# Flask app initialization hooks (fallback only).
# Used when create_app() is not found in app/__init__.py.
# If create_app() exists, it is called directly and init_app is ignored.
# Each entry is a dotted "module.function" path.
init_app:
  - app.shared.errors.register_error_handlers
  - app.shared.middleware.register_middleware
```

---

## Environment variable interpolation

Use `${VAR_NAME}` syntax inside **string values** of `deployless.yaml` to inject values from environment variables at build time. This keeps sensitive or environment-specific values (ARNs, domain names, keys) out of the YAML, so you can safely commit it to public repositories.

### Syntax

| Syntax | Behavior |
|---|---|
| `${VAR}` | Replaced with the value of `VAR`. **Error if not defined.** |
| `${VAR:-default}` | Replaced with `VAR` if defined, otherwise uses `default`. |

Variables are resolved in **string values only** — integers, booleans, lists, and dict keys are never interpolated. A single string can contain multiple references: `"https://${HOST}:${PORT}/v1"`.
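As a sketch of these semantics (illustrative only, not deployless's actual resolver):

```python
import os
import re

# ${VAR} or ${VAR:-default}
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

def interpolate(value: str) -> str:
    """Resolve ${VAR} / ${VAR:-default} references in a string value.

    Sketch of the rules in the table above, not deployless's code.
    """
    def repl(m: re.Match) -> str:
        name, default = m.group(1), m.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default
        # No value and no default → build-time error (E32)
        raise ValueError(f"E32: unresolved environment variable ${{{name}}}")
    return _VAR.sub(repl, value)
```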

### Source

Variables are resolved **exclusively from `os.environ`** — the shell environment where `deployless` runs. The `.env` file configured via `env_file:` is **not** used for interpolation; that file is reserved for Lambda runtime environment variables and SSM secrets.

### Local development

Set variables in your shell before running deployless:

```bash
# Option 1: export (persists in current shell session)
export API_DOMAIN=api.myapp.com
export COGNITO_POOL_ARN=arn:aws:cognito-idp:us-east-1:123456789:userpool/us-east-1_ABC
deployless deploy

# Option 2: inline (one-off, does not persist)
API_DOMAIN=api.myapp.com COGNITO_POOL_ARN=arn:aws:... deployless deploy

# Option 3: direnv (.envrc file, auto-loaded per directory)
# echo 'export API_DOMAIN=api.myapp.com' >> .envrc && direnv allow
```

### GitHub Actions

Store sensitive values as repository secrets, then pass them via the `env:` block:

```yaml
# deployless.yaml (committed to the repo — no secrets)
name: my-app
stage: ${DEPLOY_STAGE:-dev}
api:
  domain:
    domain_name: ${API_DOMAIN}
  auth:
    type: cognito
    user_pool_arn: ${COGNITO_POOL_ARN}
```

```yaml
# .github/workflows/deploy.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install deployless
      - run: deployless deploy
        env:
          API_DOMAIN: ${{ secrets.API_DOMAIN }}
          COGNITO_POOL_ARN: ${{ secrets.COGNITO_POOL_ARN }}
          DEPLOY_STAGE: production
```

If any `${VAR}` reference cannot be resolved (not in the environment and no default), the build fails with error **E32** before any resources are created.

---

## API Gateway Authentication - Soon

### Cognito User Pool - Soon

```yaml
api:
  auth:
    type: cognito
    user_pool_arn: "arn:aws:cognito-idp:us-east-1:123456789:userpool/us-east-1_ABC"
    name: CognitoAuthorizer    # Optional, default: "CognitoAuthorizer"
    scopes:                    # Optional — required OAuth2 scopes
      - email
      - profile
```

### Lambda Authorizer (custom function) - Soon

```yaml
api:
  auth:
    type: lambda
    function_arn: "arn:aws:lambda:us-east-1:123456789:function:my-authorizer"
    name: LambdaAuthorizer     # Optional, default: "LambdaAuthorizer"
    ttl: 300                   # Seconds before re-authorizing (0 = no cache)
    identity:
      header: Authorization    # Header where the token is located
```

### IAM - Soon

```yaml
api:
  auth:
    type: iam
```

### Override auth per feature - Soon

From `routes.py`, you can override the global auth for an entire feature:

```python
import deployless as dpl

# All endpoints in this feature are public (no auth)
dpl.configure(auth=None)

# All endpoints in this feature require an API key
dpl.configure(auth="api_key")
```

### Override auth per individual route (split Lambda) - Soon

```python
@dpl.route(memory=512, auth=None)      # This endpoint is public
@bp.route('/health', methods=['GET'])
def health_check():
    return {"status": "ok"}

@dpl.route(memory=1024, auth="api_key")  # This endpoint requires an API key
@bp.route('/export', methods=['POST'])
def export_data():
    ...
```

### Auth hierarchy (highest priority first) - Soon

```
@dpl.route(auth=...)        ← Individual route (split lambdas only)
dpl.configure(auth=...)     ← Entire feature
api.auth in deployless.yaml   ← Global
```

---

## API Keys and Rate Limiting - Soon

```yaml
api:
  api_keys: true        # Generates a new API key
  usage_plan:
    rate: 10000         # 10k requests/second
    burst: 2000         # Peak of 2k simultaneous
    quota: 1000000      # Maximum 1M requests per day
    period: DAY
```

The generated API Key ID appears in the stack Outputs:

```bash
# View the key value (not shown in Outputs for security)
aws apigateway get-api-key --api-key <ApiKeyId> --include-value
```

To use an existing key instead of creating a new one:

```yaml
api:
  api_keys: "abc123existingkeyid"
```

---

## Custom Domain

deployless can automatically provision an ACM certificate and configure a custom domain for your API Gateway. There are three ways to set it up:

### Option 1: Route 53 (fully automatic)

If your DNS is managed by Route 53, deployless creates the ACM certificate and validates it automatically — zero manual steps.

```yaml
# Auto-detect hosted zone
api:
  domain:
    domain_name: api.myapp.com
    route53: true

# Or specify the hosted zone ID explicitly
api:
  domain:
    domain_name: api.myapp.com
    route53:
      hosted_zone_id: Z1234567890ABC
```

With `route53: true`, deployless uses boto3 to find the Route 53 hosted zone that matches your domain. If you have multiple hosted zones for the same domain, provide the `hosted_zone_id` explicitly.

**Cost**: ACM certificate is free. Route 53 hosted zone costs ~$0.50/month (if you already have it, there is no additional cost).

### Option 2: External DNS (Cloudflare, GoDaddy, Namecheap, etc.)

If your DNS is managed outside AWS, deployless guides you through a step-by-step flow:

```yaml
api:
  domain:
    domain_name: api.myapp.com
```

**Deploy flow:**

1. Run `deployless deploy` — deployless requests an ACM certificate and shows the DNS validation records:
   ```
   [deployless] ACM certificate requested. Add these DNS records at your provider:

     CNAME  _acme.api.myapp.com  →  xxx.acm-validations.aws

   Then run 'deployless deploy' again.
   ```

2. Add the CNAME record at your DNS provider (e.g. Cloudflare).

3. Run `deployless deploy` again — deployless verifies the certificate is validated, saves the `certificate_arn` to `deployless.yaml`, and completes the deploy.

4. After deploy, add the final CNAME to point your domain to API Gateway:
   ```
   CNAME  api.myapp.com  →  d-abc123.execute-api.us-east-1.amazonaws.com
   ```

**Cost**: $0 — ACM public certificates are free, and there is no additional charge for custom domains in API Gateway.

### Option 3: Existing certificate (manual)

If you already have an ACM certificate, provide the ARN directly:

```yaml
api:
  domain:
    domain_name: api.myapp.com
    certificate_arn: "arn:aws:acm:us-east-1:123456789:certificate/abc-123"
    base_path: /v1                    # Optional
    route53:                          # Optional — auto-configure DNS
      hosted_zone_id: Z1234567890ABC
```

> **Note**: EDGE endpoints require the ACM certificate to be in `us-east-1`. REGIONAL endpoints require the certificate to be in the same region as the API Gateway.

### Important: URL changes with custom domains

When using a custom domain, the **stage prefix is removed** from the URL. API Gateway's base path mapping handles the stage routing automatically:

```
# Without custom domain (stage prefix required)
https://xxx.execute-api.us-east-1.amazonaws.com/dev/todos

# With custom domain (no stage prefix)
https://api.myapp.com/todos
```

Update your frontend API base URL accordingly. If you use the wrong URL (e.g. `/dev/todos` on a custom domain), API Gateway returns `"Missing Authentication Token"` without CORS headers, causing preflight failures.

---

## Validation rules

| Code | Rule |
|--------|-------|
| E00 | Resource validations: DynamoDB (key types, GSI, projection INCLUDE), S3 (bucket name DNS-compliant, 3–63 chars, no underscores), SQS (queue name, visibility_timeout, message_retention, max_receive_count), KMS (alias format, valid key_usage/key_spec, ECC/SIGN_VERIFY incompatibilities), SSMParameter (name starts with /, valid chars, valid type, non-empty value) |
| E01 | `stage` can only contain alphanumeric characters |
| E02 | `api.endpoint_type` must be REGIONAL, EDGE, or PRIVATE |
| E03 | `globals.log_retention` must be a valid CloudWatch value |
| E04 | `allow_credentials: true` is not compatible with `allow_origin: "*"` |
| E05 | Feature memory out of range (128–10240 MB) |
| E06 | Feature timeout out of range (1–900 s) |
| E07 | No routes found in feature's `routes.py` |
| E08 | Invalid cron schedule format |
| E09 | Cron memory out of range (128–10240 MB) |
| E10 | Cron timeout out of range (1–900 s) |
| E11 | `api.auth.type` must be cognito, lambda, or iam |
| E12 | `api.auth` (cognito): `user_pool_arn` is required |
| E13 | `api.auth` (lambda): `function_arn` is required |
| E14 | `api.usage_plan`: `rate` and `burst` are required |
| E15 | `api.usage_plan`: `period` is required if `quota` is set |
| E16 | `api.usage_plan.period` must be DAY, WEEK, or MONTH |
| E17 | `api.domain`: `domain_name` is required |
| E18 | `api.minimum_compression_size` must be an integer >= 0 |
| E19 | `ephemeral_storage` out of range (512–10240 MB) |
| E20 | `reserved_concurrency` must be >= 0 |
| E21 | `provisioned_concurrency` must be >= 1 |
| E22 | `log_retention` per feature must be a valid CloudWatch value |
| E23 | `alarms.sns_topic_arn` must be a valid ARN (starts with `arn:`) |
| E24 | `alarms.duration.threshold_pct` must be between 1 and 100 |
| E25 | `lambda_function` memory out of range (128–10240 MB) |
| E26 | `lambda_function` timeout out of range (1–900 s) |
| E27 | Specified `env_file` does not exist |
| E28 | `SECRET_` variable with empty value |
| E29 | Invalid `secrets_kms` format |
| E30 | Invalid resource permission level (must be one of the valid levels for that resource type) |
| E31 | `init_app` entry is not a string or is not in `module.function` format |
| E32 | Unresolved environment variable in `deployless.yaml` — referenced `${VAR}` is not defined and has no default |
| E33 | Duplicate route: same HTTP method + path defined in two different features |
| W01 | Replacement-causing change detected — a resource property that is immutable after creation was modified. Applies to: DynamoDB (key schema, table name), S3 (bucket name), SQS (queue name, FIFO flag), KMS (key usage, key spec), SSM (parameter name, type). CloudFormation cannot replace custom-named resources in-place; the build is halted before deploy. |

---

## dpl.configure() in routes.py

`dpl.configure()` is called at module level in `routes.py` to configure the **AWS Lambda function** that deployless generates for that feature. Every parameter you pass here maps directly to a property on the `AWS::Serverless::Function` resource in the generated `template.yaml` (memory → `MemorySize`, timeout → `Timeout`, architectures → `Architectures`, etc.).

It is a **no-op at runtime**: when your Flask app starts normally, this call does nothing visible. Only the deployless compiler reads it.

deployless automatically detects which feature is being compiled using a context variable set by the compiler.
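As a rough illustration of that mapping, a feature configured with `memory=512`, `timeout=30`, and `architectures=["arm64"]` produces a function resource along these lines (the property names are real SAM properties, but the handler name and exact output shape here are assumptions, not deployless's guaranteed output):

```yaml
UsersFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .dist/UsersFunction/
    Handler: bootstrap.handler     # assumed name — generated bootstrap.py
    MemorySize: 512                # ← dpl.configure(memory=512)
    Timeout: 30                    # ← dpl.configure(timeout=30)
    Architectures: [arm64]         # ← dpl.configure(architectures=["arm64"])
```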

### Full parameter reference

```python
import deployless as dpl

dpl.configure(
    # ── Basic ───────────────────────────────────────────────────────────────
    memory=512,                  # int — MB. Overrides globals.memory (128–10240)
    timeout=30,                  # int — Seconds. Overrides globals.timeout (1–900)
    description="My feature",    # str — Description visible in CloudFormation

    # ── Environment ──────────────────────────────────────────────────────────
    env={"FLAG": "true"},        # dict — Additional env vars for this Lambda
    layers=["arn:aws:lambda:..."],  # list — Lambda Layer ARNs

    # ── IAM ──────────────────────────────────────────────────────────────────
    policies=[                   # list — Inline IAM policies (SAM format)
        "AmazonDynamoDBReadOnlyAccess",          # Managed policy by name
        {"DynamoDBCrudPolicy": {"TableName": dpl.Ref(my_table)}},  # SAM policy
        {"Version": "2012-10-17", "Statement": [...]},            # Inline policy
    ],

    # ── AWS Resources (permission overrides only) ──────────────────────────────
    # Resources are auto-detected (see "Auto-detection of resources" section).
    # Use resources= ONLY to override the default "crud" permission level:
    resources={
        "uploads_bucket": (uploads_bucket, "read"),   # restrict to read-only
        "jobs_queue":     (jobs_queue, "send"),        # restrict to send-only
    },
    # deployless auto-generates IAM policies from each detected resource.
    # Use policies= only for additional or non-standard permissions.
    # policies=[...]

    # ── Architecture ──────────────────────────────────────────────────────────
    architectures=["arm64"],     # list — ["x86_64"] or ["arm64"] (Graviton, ~20% cheaper)
    tracing=True,                # bool — Enables AWS X-Ray distributed tracing

    # ── Concurrency ───────────────────────────────────────────────────────────
    reserved_concurrency=10,     # int >= 0 — Maximum simultaneous execution limit.
                                 #   0 = all invocations throttled (temporarily disables the function)
    provisioned_concurrency=3,   # int >= 1 — Pre-warmed instances (eliminates cold starts).
                                 #   Implies AutoPublishAlias: live in the template.

    # ── Temporary storage ─────────────────────────────────────────────────────
    ephemeral_storage=1024,      # int — Size of /tmp in MB (512–10240, default 512)

    # ── Reliability ───────────────────────────────────────────────────────────
    dlq=True,                    # bool — Creates an SQS Dead Letter Queue for
                                 #   failed asynchronous invocations

    # ── Observability ─────────────────────────────────────────────────────────
    log_retention=30,            # int — Retention days in CloudWatch (overrides global)

    alarms=True,                 # Enables CloudWatch Alarms with default thresholds
    # alarms=False,              # Disables alarms for this feature
    # alarms={...},              # Custom config (see Alarms section)

    # ── Auth (API Gateway) ────────────────────────────────────────────────────
    auth=None,                   # None = public routes | "api_key" = requires API key
                                 # (not specified = inherits global auth from deployless.yaml)
)
```

### Full example

```python
# app/features/user/routes.py
from flask import Blueprint
import deployless as dpl

# Resource auto-detected → DynamoDBCrudPolicy auto-generated, USERS_TABLE env var injected
users_table = dpl.DynamoDB(
    "users-table",
    pk="tenant_id",
    sk="user_id",
    gsi=[{"name": "EmailIndex", "pk": "email"}],
    ttl_attribute="expires_at",
    deletion_policy="Retain",
)

dpl.configure(
    memory=512,
    timeout=30,
    description="User Management API",
    # No need to declare resources= here — users_table is auto-detected with "crud"
    architectures=["arm64"],
    dlq=True,
    alarms=True,
    log_retention=30,
)

user_bp = Blueprint("user_bp", __name__, url_prefix="/users")

@user_bp.route("", methods=["GET"])
def list_users():
    ...
```

---

## AWS Resources

Resources are **auto-detected** by the compiler. Any `Resource` instance (DynamoDB, S3, SQS, KMS, SSMParameter) is automatically registered with `"crud"` permissions — deployless adds them to `template.yaml`, auto-generates IAM policies, and injects environment variables.

**Detection rules:**

| Resource location | Detection | What you need to do |
|---|---|---|
| Inside `features/auth/` (any `.py` file) | Automatic | Nothing — detected by scanning the feature directory |
| In `app/shared/` | **Only if imported** | You must `import` the resource in your `routes.py` for the Lambda to get permissions |

```python
# app/shared/resources.py
import deployless as dpl
shared_table = dpl.DynamoDB("shared-table", pk="id")

# app/features/auth/routes.py
from app.shared.resources import shared_table  # ← this import triggers auto-detection
# Without this import, the auth Lambda will NOT have permissions for shared_table
```

> See the [Auto-detection of resources](#auto-detection-of-resources) section for the full details.

### DynamoDB

```python
dpl.DynamoDB(
    table_name: str,                      # Table name in AWS
    pk: str = "id",                       # Partition key
    pk_type: str = "S",                   # "S" (String) | "N" (Number) | "B" (Binary)
    sk: str = None,                       # Optional sort key
    sk_type: str = "S",                   # "S" | "N" | "B"
    gsi: list = None,                     # Global Secondary Indexes (see format below)
    billing_mode: str = "PAY_PER_REQUEST",# "PAY_PER_REQUEST" | "PROVISIONED"
    read_capacity: int = None,            # Only for billing_mode="PROVISIONED" (default: 5)
    write_capacity: int = None,           # Only for billing_mode="PROVISIONED" (default: 5)
    ttl_attribute: str = None,            # Time-To-Live attribute (DynamoDB expires it automatically)
    stream: str = None,                   # "NEW_IMAGE" | "OLD_IMAGE" | "NEW_AND_OLD_IMAGES" | "KEYS_ONLY"
    point_in_time_recovery: bool = False, # Enables PITR (point-in-time recovery)
    sse_enabled: bool = True,             # Encryption at rest with AWS-managed KMS
    deletion_policy: str = "Delete",      # "Delete" | "Retain" | "Snapshot"
    existing: bool = False,               # True = table already exists, do not create (only injects env var)
)
```

#### CloudFormation type

deployless always generates `AWS::DynamoDB::Table` regardless of whether a sort key or GSI is defined. This avoids CloudFormation replacement (and data loss) when you later add a sort key or GSI to a table that started with only a partition key.

#### Auto-generated environment variable

The `-table` / `_table` suffix is removed to avoid redundancy:

| `table_name` | Environment variable |
|---|---|
| `users-table` | `USERS_TABLE` |
| `orders_table` | `ORDERS_TABLE` |
| `sessions` | `SESSIONS_TABLE` |
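The rule amounts to: strip a trailing `-table`/`_table`, replace hyphens with underscores, uppercase, and append `_TABLE`. A sketch (illustrative, not deployless's actual code):

```python
def table_env_var(table_name: str) -> str:
    """Derive the injected env var name from a DynamoDB table name.

    Sketch of the naming rule shown in the table above.
    """
    base = table_name
    # Strip a trailing "-table" / "_table" to avoid USERS_TABLE_TABLE
    for suffix in ("-table", "_table"):
        if base.endswith(suffix):
            base = base[: -len(suffix)]
    return base.replace("-", "_").upper() + "_TABLE"
```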

#### GSI format

Each element of the `gsi` list accepts:

```python
{
    "name": "StatusIndex",           # Required — index name
    "pk": "status",                  # Required — index partition key
    "pk_type": "S",                  # Optional, default "S"
    "sk": "created_at",              # Optional — index sort key
    "sk_type": "S",                  # Optional, default "S"
    "projection": "ALL",             # "ALL" | "KEYS_ONLY" | "INCLUDE" (default "ALL")
    "non_key_attributes": ["email"], # Required only if projection="INCLUDE"
}
```

#### Examples

**Simple table (PK only):**
```python
dpl.DynamoDB("sessions-table", pk="session_id", ttl_attribute="expires_at")
# → AWS::DynamoDB::Table
# → Variable: SESSIONS_TABLE
```

**Table with SK and multiple GSIs:**
```python
dpl.DynamoDB(
    "orders-table",
    pk="tenant_id",
    sk="order_id",
    gsi=[
        {
            "name": "StatusIndex",
            "pk": "status",
            "sk": "created_at",
        },
        {
            "name": "CustomerIndex",
            "pk": "customer_id",
            "projection": "INCLUDE",
            "non_key_attributes": ["total", "status"],
        },
    ],
    ttl_attribute="expires_at",
    point_in_time_recovery=True,
    deletion_policy="Retain",
)
# → AWS::DynamoDB::Table with SSEEnabled=True
# → Variable: ORDERS_TABLE
```

**Table with provisioned capacity:**
```python
dpl.DynamoDB(
    "high-traffic-table",
    pk="pk",
    sk="sk",
    billing_mode="PROVISIONED",
    read_capacity=100,
    write_capacity=50,
)
```

**Table with DynamoDB Streams:**
```python
dpl.DynamoDB(
    "events-table",
    pk="event_id",
    stream="NEW_AND_OLD_IMAGES",  # Enables a DynamoDB Stream (can trigger a Lambda on every change)
)
```

**Existing table (do not create, only inject env var):**
```python
dpl.DynamoDB("prod-users-table", existing=True)
# Does not generate a CloudFormation resource
# Injects: PROD_USERS_TABLE = "prod-users-table" (literal string)
```

#### Permission levels

| Level | Auto-generated SAM policy |
|-------|--------------------------|
| `"crud"` (default) | `DynamoDBCrudPolicy` |
| `"read"` | `DynamoDBReadPolicy` |
| `"write"` | `DynamoDBWritePolicy` |

```python
# Auto-detected resources get "crud" by default.
# Use resources={} only to restrict:
dpl.configure(
    resources={
        "catalog": (dpl.DynamoDB("catalog-table"), "read"),  # restrict to read-only
    },
)
```

---

### S3 - Soon

```python
dpl.S3(
    bucket_name: str,
    versioning: bool = False,
    encryption: bool = True,        # SSE-S3 (AES256) enabled by default
    cors: list = None,              # List of CORS rules (CloudFormation CorsRule format)
    lifecycle_rules: list = None,   # List of lifecycle rules (CloudFormation format)
    public_access_block: bool = True,  # Blocks public access by default
    deletion_policy: str = "Delete",
    existing: bool = False,
)
```

**Auto-generated environment variable:**
- `uploads-bucket` → `UPLOADS_BUCKET`
- `my_files_bucket` → `MY_FILES_BUCKET` (the `-bucket` / `_bucket` suffix is removed)

**Compile-time validations (E00):**
- `bucket_name` cannot be empty
- Length between 3 and 63 characters
- Cannot contain underscores (S3 is DNS-compliant)
- Lowercase only, digits, hyphens, and dots — starts and ends with alphanumeric
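These rules compress into a single pattern. A sketch (illustrative only — AWS imposes a few additional rules, such as no consecutive dots, that the real validator may also check):

```python
import re

# 3–63 chars, lowercase letters/digits/hyphens/dots,
# starts and ends with an alphanumeric, no underscores
_BUCKET = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Check the E00 bucket-name rules listed above (sketch)."""
    return bool(_BUCKET.match(name))
```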

**Basic example:**

```python
dpl.S3("user-uploads")
# → SSE-S3 AES256 enabled, public access blocked
# → Variable: USER_UPLOADS_BUCKET
```

**Permission levels:**

| Level | Auto-generated SAM policy |
|-------|--------------------------|
| `"crud"` (default) | `S3CrudPolicy` |
| `"read"` | `S3ReadPolicy` |
| `"write"` | `S3WritePolicy` |

```python
# Auto-detected resources get "crud" by default.
# Use resources={} only to restrict:
dpl.configure(
    resources={
        "assets": (dpl.S3("static-assets"), "read"),  # restrict to read-only
    },
)
```

**Example with all options:**

```python
dpl.S3(
    "user-uploads",
    versioning=True,
    encryption=True,           # AES256 by default — pass False only if using external KMS
    public_access_block=True,
    deletion_policy="Retain",
    cors=[
        {
            "AllowedOrigins": ["https://mi-app.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAge": 3600,
        }
    ],
    lifecycle_rules=[
        {
            "Id": "expire-tmp",
            "Status": "Enabled",
            "ExpirationInDays": 7,
            "Prefix": "tmp/",
        }
    ],
)
```

---

### SQS - Soon

```python
dpl.SQS(
    queue_name: str,
    fifo: bool = False,               # True = FIFO queue. Adds .fifo to the name automatically.
    dlq: bool = False,                # True = also creates a Dead Letter Queue
    visibility_timeout: int = 30,     # seconds (0–43200)
    message_retention: int = 345600,  # seconds (60–1209600, default 4 days)
    max_receive_count: int = 3,       # Attempts before sending to DLQ (1–1000)
    encryption: bool = True,          # SqsManagedSseEnabled — SSE-SQS enabled by default
    deletion_policy: str = "Delete",
    existing: bool = False,
)
```

**Note:** SQS and KMS return **multiple** CloudFormation resources (the main queue + DLQ, or the key + alias). deployless inserts them all correctly into the template.

**Auto-generated environment variable:**
- `notifications-queue` → `NOTIFICATIONS_QUEUE_URL`

**Compile-time validations (E00):**
- `queue_name` cannot be empty or exceed 80 characters
- Alphanumeric only, `-` and `_` (the `.fifo` suffix is excluded from validation)
- `visibility_timeout` must be in range `[0, 43200]`
- `message_retention` must be in range `[60, 1209600]`
- `max_receive_count` must be in range `[1, 1000]`
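These range checks are straightforward to mirror; an illustrative validator (not the library's actual code):

```python
import re

def validate_sqs(queue_name: str, visibility_timeout: int = 30,
                 message_retention: int = 345600, max_receive_count: int = 3) -> list[str]:
    """Illustrative mirror of the E00 rules above."""
    errors = []
    name = queue_name.removesuffix(".fifo")  # the .fifo suffix is excluded from validation
    if not name or len(name) > 80:
        errors.append("E00: queue_name must be 1-80 characters")
    elif not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        errors.append("E00: queue_name allows only alphanumerics, '-' and '_'")
    if not 0 <= visibility_timeout <= 43200:
        errors.append("E00: visibility_timeout out of range [0, 43200]")
    if not 60 <= message_retention <= 1209600:
        errors.append("E00: message_retention out of range [60, 1209600]")
    if not 1 <= max_receive_count <= 1000:
        errors.append("E00: max_receive_count out of range [1, 1000]")
    return errors
```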

**Basic example:**

```python
dpl.SQS("email-notifications")
# → SSE-SQS enabled, 4-day retention, 30s visibility
# → Variable: EMAIL_NOTIFICATIONS_QUEUE_URL
```

**Example with DLQ:**

```python
dpl.SQS(
    "email-notifications",
    dlq=True,
    visibility_timeout=60,
    message_retention=86400,   # 1 day
    max_receive_count=5,
)
# → Main queue + DLQ with 14-day retention
# → Both with SSE-SQS enabled
```

**FIFO example:**

```python
dpl.SQS(
    "orders",
    fifo=True,       # → queue_name becomes "orders.fifo" automatically
    dlq=True,        # → DLQ will also be FIFO: "orders-dlq.fifo"
)
```

**Permission levels:**

| Level | Auto-generated SAM policies |
|-------|----------------------------|
| `"crud"` (default) | `SQSSendMessagePolicy` + `SQSPollerPolicy` |
| `"send"` | `SQSSendMessagePolicy` |
| `"poll"` | `SQSPollerPolicy` |

```python
# Auto-detected resources get "crud" by default.
# Use resources={} only to restrict:
dpl.configure(
    resources={
        "tasks": (dpl.SQS("tasks-queue"), "send"),  # restrict to send-only
    },
)
```

---

### KMS

```python
dpl.KMS(
    alias: str = None,                      # e.g. "alias/mi-app" or simply "mi-app"
    description: str = None,
    key_usage: str = "ENCRYPT_DECRYPT",     # "ENCRYPT_DECRYPT" | "SIGN_VERIFY" | "GENERATE_VERIFY_MAC"
    key_spec: str = "SYMMETRIC_DEFAULT",    # "SYMMETRIC_DEFAULT" | "RSA_2048/3072/4096"
                                            # | "ECC_NIST_P256/P384/P521" | "ECC_SECG_P256K1"
                                            # | "HMAC_224/256/384/512"
    enable_rotation: bool = None,           # None → auto: True for SYMMETRIC_DEFAULT, False otherwise
    deletion_policy: str = "Retain",        # KMS uses Retain by default (security)
    existing_key_id: str = None,            # ID or ARN of an existing key (does not create resource)
    env_var: str = None,                    # Forces the name of the generated env var
)
```

**Auto-generated environment variable:**
- `env_var="MY_KEY"` → `MY_KEY` (takes priority over any automatic derivation)
- `alias="myapp/encryption"` → `MYAPP_ENCRYPTION_KEY_ID`
- No alias or env_var → `KMS_KEY_ID`

**Generated CloudFormation resources:**
- `AWS::KMS::Key` — with `Enabled: True`, `KeyUsage`, `KeySpec`, and a basic key policy (root account)
- `AWS::KMS::Alias` — optional alias to identify the key by name
- `EnableKeyRotation` is only added when `key_spec="SYMMETRIC_DEFAULT"` (asymmetric keys do not support automatic rotation)

**Compile-time validations (E00):**
- `alias` can only contain alphanumeric characters, `-`, `_`, `/`
- `key_usage` must be one of the valid values
- `key_spec` must be one of the valid values
- `enable_rotation=True` is only valid for `SYMMETRIC_DEFAULT` (RSA, ECC, and HMAC key specs do not support automatic rotation)
- ECC `key_spec` is not compatible with `key_usage="ENCRYPT_DECRYPT"`
- `key_spec="SYMMETRIC_DEFAULT"` is not compatible with `key_usage="SIGN_VERIFY"`

**Auto-detected permissions:** When a `dpl.KMS()` is created inside a feature directory, deployless auto-detects it and generates IAM policies with `"crud"` permissions (`kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey`, `kms:DescribeKey`). No need to declare it in `resources={}` or write manual `policies=[]`.

#### Basic example (auto-detected)

```python
# app/features/tenant/routes.py
import deployless as dpl

kms_key = dpl.KMS(
    alias="mi-app/datos",
    description="Encryption key for sensitive data",
    enable_rotation=True,
    deletion_policy="Retain",
)

# No need to declare resources= or policies= — kms_key is auto-detected
# with "crud" permissions (Encrypt, Decrypt, GenerateDataKey, DescribeKey)
dpl.configure(description="Tenant Service")
```

#### Restricting permissions

Use `resources={}` only when you need to restrict the default `"crud"` level:

```python
# This feature only needs to decrypt — restrict from the default "crud"
dpl.configure(
    resources={"datos_key": (kms_key, "decrypt")},
)
```

#### How to use the key in app code

The `KMS_KEY_ID` environment variable (or `{ALIAS}_KEY_ID` if using an alias) is automatically injected into the Lambda. Use it in your encryption services:

```python
# app/features/tenant/services/kms_service.py
import boto3
import base64
import os
from botocore.exceptions import ClientError

kms_client = boto3.client('kms')

def encrypt_with_kms(plaintext: str) -> str:
    """Encrypts a string and returns the ciphertext in base64."""
    response = kms_client.encrypt(
        KeyId=os.getenv('KMS_KEY_ID'),
        Plaintext=plaintext.encode('utf-8'),
    )
    return base64.b64encode(response['CiphertextBlob']).decode('utf-8')

def decrypt_with_kms(ciphertext_b64: str) -> str:
    """Decrypts a base64 ciphertext and returns the plaintext."""
    ciphertext_blob = base64.b64decode(ciphertext_b64)
    response = kms_client.decrypt(CiphertextBlob=ciphertext_blob)
    return response['Plaintext'].decode('utf-8')
```

> `kms:Decrypt` does not need a `KeyId` for symmetric keys because the ciphertext blob embeds the ID of the key that encrypted it (asymmetric ciphertexts do require an explicit `KeyId`).

#### Full example — RSA key encryption per tenant

A real pattern: the tenant feature encrypts the RSA private key when creating the tenant, and the auth feature decrypts it on each login.

```python
# app/shared/services/kms_service.py (defined in shared/ so both features can import it)
import deployless as dpl

tenant_key = dpl.KMS(
    alias="ums/tenant-keys",
    description="Encryption of RSA private keys per tenant",
    enable_rotation=True,
    deletion_policy="Retain",
)
```

```python
# app/features/tenant/routes.py
import deployless as dpl
from app.shared.services.kms_service import tenant_key

tenants_table = dpl.DynamoDB("ums-tenants", pk="tenant_id", deletion_policy="Retain")

# Both resources are auto-detected with "crud" permissions.
# We restrict tenant_key to "encrypt" only (this feature doesn't need decrypt).
dpl.configure(
    resources={"tenant_key": (tenant_key, "encrypt")},
)
```

```python
# app/features/auth/routes.py
import deployless as dpl

# Import the shared KMS key — auto-detected with "crud" by default.
# We restrict to "decrypt" only (this feature doesn't need encrypt).
from app.shared.services.kms_service import tenant_key

dpl.configure(
    resources={"tenant_key": (tenant_key, "decrypt")},
)
```

**Auto-injected environment variables:**

| Alias | Variable |
|---|---|
| `ums/tenant-keys` | `UMS_TENANT_KEYS_KEY_ID` |
| `mi-app` | `MI_APP_KEY_ID` |
| No alias | `KMS_KEY_ID` |

#### Asymmetric key for digital signing (RSA)

```python
signing_key = dpl.KMS(
    alias="mi-app/signing",
    description="RSA key for signing JWTs or documents",
    key_usage="SIGN_VERIFY",
    key_spec="RSA_2048",
    # enable_rotation does not apply — automatically ignored for asymmetric keys
)

# Auto-detected with "crud" permissions.
# For SIGN_VERIFY keys, you may need additional actions (kms:Sign, kms:Verify,
# kms:GetPublicKey) that are not covered by the default "crud" level.
# Add them via policies= when needed:
dpl.configure(
    policies=[
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["kms:Sign", "kms:Verify", "kms:GetPublicKey"],
                    "Resource": dpl.Ref(signing_key),
                }
            ],
        }
    ],
)
```

#### Existing key (do not create, only inject env var)

```python
dpl.KMS(existing_key_id="arn:aws:kms:us-east-1:123456789:key/abc-123")
# Does not generate a CloudFormation resource
# KMS_KEY_ID = "arn:aws:kms:us-east-1:123456789:key/abc-123"
```

#### Permission levels

| Level | IAM actions granted |
|-------|-------------------|
| `"crud"` (default) | `kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey`, `kms:DescribeKey` |
| `"encrypt"` | `kms:Encrypt` |
| `"decrypt"` | `kms:Decrypt` |

```python
# Auto-detected resources get "crud" by default.
# Use resources={} only to restrict:
dpl.configure(
    resources={
        "read_key": (dpl.KMS(alias="mi-app/read"), "decrypt"),  # restrict to decrypt only
    },
)
```

---

### SSM Parameter Store

deployless provides two tools for SSM: `dpl.SSMParameter` to **create** a parameter as a CloudFormation resource, and `dpl.SSMParam` to **reference** an existing parameter as a dynamic reference in env vars.

#### dpl.SSMParameter — create a parameter

```python
dpl.SSMParameter(
    name: str,                  # Parameter path, must start with "/"
    value: str,                 # Parameter value
    type: str = "String",       # "String" | "StringList" | "SecureString"
    description: str = None,
    existing: bool = False,     # True = do not create, only inject env var
)
```

**Auto-generated environment variable** — last segment of the path:
- `/myapp/db/host` → `HOST`
- `/myapp/api/secret-key` → `SECRET_KEY`

**Compile-time validations (E00):**
- `name` must start with `/`
- Alphanumeric only, `.`, `-`, `_`, `/`
- `type` must be `String`, `StringList`, or `SecureString`
- `value` cannot be empty (except for `SecureString`)

**Example:**

```python
db_host = dpl.SSMParameter(
    "/myapp/db/host",
    value="db.example.com",
    description="RDS endpoint",
)

# Auto-detected — SSMParameterReadPolicy is auto-generated
dpl.configure(description="My Service")
# → Variable: HOST = {"Ref": "MyappDbHostParameter"}
```

#### dpl.SSMParam — reference an existing parameter

Does not generate a CloudFormation resource. Produces a CloudFormation **dynamic reference** directly in the env var value.

```python
dpl.SSMParam(
    name: str,              # Path of the existing parameter
    secure: bool = False,   # True → "{{resolve:ssm-secure:/path}}" (SecureString)
    version: int = None,    # Optional — pin to a specific version
)
```

**Usage in env vars:**

```python
dpl.configure(
    env={
        "DB_HOST":   dpl.SSMParam("/prod/db/host"),
        "API_KEY":   dpl.SSMParam("/prod/api/key", secure=True),
        "DB_PASS":   dpl.SSMParam("/prod/db/password", secure=True, version=3),
    }
)
```

This generates in the template:

```yaml
Environment:
  Variables:
    DB_HOST:  "{{resolve:ssm:/prod/db/host}}"
    API_KEY:  "{{resolve:ssm-secure:/prod/api/key}}"
    DB_PASS:  "{{resolve:ssm-secure:/prod/db/password:3}}"
```

> `{{resolve:ssm-secure:...}}` only works with `SecureString` parameters, and resolution happens at deploy time: it is the deploying principal, not the Lambda, that needs `ssm:GetParameter` plus `kms:Decrypt` on the parameter's KMS key. Note also that CloudFormation restricts `ssm-secure` to a limited set of resource properties that excludes Lambda environment variables, which is why the mechanism in [.env file and secrets](#env-file-and-secrets) stores secrets as `String`.

#### Permission levels (SSMParameter only)

| Level | Auto-generated SAM policy / IAM action |
|-------|---------------------------------------|
| `"crud"` / `"read"` (default) | `SSMParameterReadPolicy` |
| `"write"` | Inline `ssm:PutParameter` |

```python
# Auto-detected resources get "crud"/"read" by default.
# Use resources={} only to change to "write":
dpl.configure(
    resources={
        "counter": (dpl.SSMParameter("/app/counter", value="0"), "write"),  # write access
    },
)
```

---


## CloudWatch Alarms - Soon

deployless can automatically generate 3 alarms per Lambda: errors, throttles, and duration.

### Activation

```python
# In routes.py — enables alarms with default thresholds
dpl.configure(alarms=True)

# With custom thresholds
dpl.configure(alarms={
    "errors": {
        "threshold": 1,       # Trigger when Errors >= 1 in the period
        "period": 300,        # Evaluation period in seconds
    },
    "throttles": {
        "threshold": 1,
        "period": 300,
    },
    "duration": {
        "threshold_pct": 80,  # Trigger when Duration > 80% of the configured timeout
        "period": 300,        # (if timeout=30s → alarm at 24000ms)
    },
    "sns_topic_arn": "arn:aws:sns:us-east-1:123456789:my-alerts",  # Optional
})

# Disable alarms for this feature even if globally active
dpl.configure(alarms=False)
```

### Global alarms (for all features)

In `deployless.yaml`, you can activate alarms for the entire project:

```yaml
alarms:
  errors:
    threshold: 1
    period: 300
  throttles:
    threshold: 1
    period: 300
  duration:
    threshold_pct: 80
    period: 300
  sns_topic_arn: "arn:aws:sns:us-east-1:123456789:my-alerts"
```

### Generated resources

For each feature with `alarms` active, deployless generates in the template:

```yaml
UserFunctionErrorsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    MetricName: Errors
    Namespace: AWS/Lambda
    Statistic: Sum
    Period: 300
    Threshold: 1
    ComparisonOperator: GreaterThanOrEqualToThreshold
    TreatMissingData: notBreaching

UserFunctionThrottlesAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    MetricName: Throttles
    # ...

UserFunctionDurationAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    MetricName: Duration
    Statistic: Maximum
    Threshold: 24000    # 80% of 30s = 24000ms
    # ...
```
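The `Threshold: 24000` for the duration alarm is derived from the Lambda timeout and `threshold_pct`:

```python
def duration_threshold_ms(timeout_seconds: int, threshold_pct: int = 80) -> int:
    """Alarm threshold in milliseconds: threshold_pct percent of the Lambda timeout."""
    return timeout_seconds * 1000 * threshold_pct // 100
```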

---

### dpl.Ref() and dpl.GetAtt() — Referencing resources

Use `dpl.Ref(resource)` to get the logical ID of a resource (generates `{"Ref": "LogicalId"}`), and `dpl.GetAtt(resource, attr)` to get a specific attribute (generates `{"Fn::GetAtt": ["LogicalId", "Attr"]}`).

They are typically used in `policies=` for non-standard permissions that deployless cannot auto-generate (e.g. `dynamodb:Query` on a specific index ARN):

```python
tabla = dpl.DynamoDB("users-table")
bucket = dpl.S3("uploads")

dpl.configure(
    resources={"users": tabla, "uploads": bucket},  # standard policies auto-generated
    policies=[
        # Additional non-standard policy: access a specific index ARN.
        # Fn::Sub with a substitution map embeds the table ARN obtained via GetAtt.
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["dynamodb:Query"],
                    "Resource": {
                        "Fn::Sub": [
                            "${TableArn}/index/EmailIndex",
                            {"TableArn": dpl.GetAtt(tabla, "Arn")},
                        ]
                    },
                },
            ]
        }
    ],
)
```

`dpl.Ref()` and `dpl.GetAtt()` accept both a resource object and a string with the CloudFormation logical ID.
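A sketch of the semantics (illustrative; the `logical_id` attribute is an assumption, not the library's real internals):

```python
def ref(resource) -> dict:
    """dpl.Ref semantics: produce {"Ref": "<LogicalId>"}."""
    logical_id = resource if isinstance(resource, str) else resource.logical_id  # hypothetical attribute
    return {"Ref": logical_id}

def get_att(resource, attr: str) -> dict:
    """dpl.GetAtt semantics: produce {"Fn::GetAtt": ["<LogicalId>", "<Attr>"]}."""
    logical_id = resource if isinstance(resource, str) else resource.logical_id
    return {"Fn::GetAtt": [logical_id, attr]}
```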

---

## @dpl.cron() — Scheduled Lambdas

Decorate any function with `@dpl.cron()` to have deployless deploy it as a separate Lambda triggered by EventBridge (CloudWatch Events) on the indicated schedule.

```python
@dpl.cron(
    schedule: str,          # Schedule expression (required)
    memory: int = None,     # MB. If None, uses globals.memory
    timeout: int = None,    # Seconds. If None, uses globals.timeout
    env: dict = None,       # Additional environment variables
    description: str = None,
)
```

**Schedule formats:**
- `"rate(5 minutes)"` — every 5 minutes
- `"rate(1 hour)"` — every hour
- `"rate(24 hours)"` — daily
- `"cron(0 9 * * ? *)"` — every day at 9:00 UTC

**The function must have the Lambda signature `(event, context)`.**

**Example:**

```python
# app/features/user/routes.py
import deployless as dpl

@dpl.cron(
    schedule="rate(24 hours)",
    memory=128,
    timeout=300,
    description="Daily cleanup of expired users",
)
def cleanup_expired_users(event, context):
    # Your logic here
    deleted = delete_expired_users()
    return {"status": "ok", "deleted": deleted}
```

This generates in `template.yaml`:

```yaml
CleanupExpiredUsersFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .dist/CleanupExpiredUsersFunction/
    Handler: bootstrap.handler
    MemorySize: 128
    Timeout: 300
    Description: Daily cleanup of expired users
    Events:
      Schedule:
        Type: Schedule
        Properties:
          Schedule: rate(24 hours)
```

---

## @dpl.route() — Split Lambdas per route - Soon

By default, all routes in a feature share a single Lambda. With `@dpl.route()` you can isolate a specific endpoint into its own Lambda (useful for endpoints that consume many resources or have different timeouts).

```python
@dpl.route(
    memory: int = None,
    timeout: int = None,
    description: str = None,
    auth = <not specified>,   # None = public | "api_key" = requires API key
                              # (not specified = inherits auth from feature or global)
)
```

**The `@dpl.route()` decorator must go above the Flask decorator.**

```python
# app/features/user/routes.py
import deployless as dpl
from flask import Blueprint

user_bp = Blueprint("user_bp", __name__, url_prefix="/users")

@dpl.route(memory=1024, timeout=120, description="Heavy data export")
@user_bp.route("/export", methods=["POST"])
def export_users():
    # This endpoint will have its own Lambda with 1 GB and 2-minute timeout
    ...

@user_bp.route("", methods=["GET"])
def list_users():
    # This endpoint goes in the feature's shared Lambda
    ...
```

This generates two separate Lambda functions:
- `UserFunction` — contains `GET /users` (and all other endpoints without `@dpl.route()`)
- `ExportUsersFunction` — contains only `POST /users/export`

---

## @dpl.lambda_function() — Standalone Lambdas - Soon

For Lambda functions that have no HTTP routes or schedules — for example, SQS consumers, S3 event handlers, or Step Functions steps — use `@dpl.lambda_function()`.

```python
@dpl.lambda_function(
    memory: int = None,       # MB. If None, uses globals.memory
    timeout: int = None,      # Seconds. If None, uses globals.timeout
    env: dict = None,         # Additional environment variables
    description: str = None,
)
```

**The function must have the Lambda signature `(event, context)`.**

**Example:**

```python
# app/features/orders/routes.py
import deployless as dpl

@dpl.lambda_function(memory=512, timeout=60, description="Processes messages from the orders queue")
def process_order_queue(event, context):
    for record in event.get("Records", []):
        body = record["body"]
        print(f"Processing order: {body}")
    return {"processed": len(event.get("Records", []))}
```

This generates in `template.yaml`:

```yaml
ProcessOrderQueueFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .dist/ProcessOrderQueueFunction/
    Handler: bootstrap.handler
    MemorySize: 512
    Timeout: 60
    Description: Processes messages from the orders queue
```

> **Note:** Unlike HTTP features, standalone lambdas have no API Gateway events. You can connect them to SQS, S3, DynamoDB Streams, etc. manually in the template or via event source mappings.

---

## Auto-detection of resources

deployless **automatically detects** `Resource` instances (DynamoDB, S3, SQS, KMS, SSMParameter) without requiring explicit declaration in `dpl.configure(resources={...})`. There are two detection mechanisms:

### 1. Feature-local resources

Any `Resource` created inside a feature's directory (in any `.py` file, not just `routes.py`) is auto-detected for that feature's Lambda with `"crud"` permissions.

```python
# app/features/auth/services.py
import deployless as dpl

auth_table = dpl.DynamoDB("auth-table", pk="PK", sk="SK")
# ↑ Auto-detected for the auth Lambda — no need to import in routes.py
```

### 2. Shared resources (imported from `shared/`)

Resources defined in `app/shared/` are only injected into features that **explicitly import** them in `routes.py` (least privilege).

```python
# app/shared/services/dynamo.py
import deployless as dpl

ums_table = dpl.DynamoDB("ums-table", pk="PK", sk="SK")
```

```python
# app/features/auth/routes.py
from app.shared.services.dynamo import ums_table  # ← import = auto-detect with "crud"

dpl.configure(description="Auth Service")
# ums_table is auto-detected because it was imported
```

```python
# app/features/tenant/routes.py
# Does NOT import ums_table → does NOT get permissions for it
dpl.configure(description="Tenant Service")
```

### Permission overrides

Use `dpl.configure(resources={...})` **only** when you need to restrict the default `"crud"` permission:

```python
# app/features/auth/routes.py
from app.shared.services.kms_service import kms_key

dpl.configure(
    resources={"kms_key": (kms_key, "decrypt")},  # restrict to decrypt only
)
```

### How it works

| Resource location | Detection | Default permission | `configure(resources=...)` needed? |
|---|---|---|---|
| Inside `features/X/` (any file) | Automatic via `sys.modules` scan | `crud` | Only to restrict permissions |
| In `shared/`, imported in `routes.py` | Automatic via namespace scan | `crud` | Only to restrict permissions |
| External (`existing=True`) | Automatic (same rules) | `crud` | Only to restrict permissions |

### Deduplication

If the same resource appears in multiple features, the CloudFormation definition is emitted **only once** in the template, but each feature gets its own IAM policies and environment variables.

---

## .env file and secrets

deployless can read a `.env` file to inject environment variables and manage secrets automatically.

### Configuration in deployless.yaml

```yaml
env_file: .env.production       # Path to the .env file

# Optional — KMS key to encrypt secrets in SSM
secrets_kms: mi-app/secrets     # Alias, key ID, or ARN
```

### .env file format

```env
# Normal variables — injected directly as env vars in all Lambdas
APP_ENV=production
LOG_FORMAT=json

# Secrets — the SECRET_ prefix indicates they are pushed to SSM Parameter Store
SECRET_DB_PASSWORD=mysecretpassword
SECRET_API_KEY=sk_live_xxxx
```

### Behavior

| Type | Example | Destination | Value in Lambda |
|------|---------|---------|-----------------|
| Normal | `APP_ENV=production` | Direct env var | `production` |
| Secret | `SECRET_DB_PASSWORD=xxx` | SSM Parameter Store | `{{resolve:ssm:/mi-app/SECRET_DB_PASSWORD}}` |

**For `SECRET_` variables:**

1. The name is kept in full with the prefix: `SECRET_DB_PASSWORD` → `/mi-app/SECRET_DB_PASSWORD`
2. The value is stored as `String` in SSM Parameter Store under the path `/{app_name}/{VAR_NAME}`
3. The Lambda receives a **dynamic reference** `{{resolve:ssm:...}}` that CloudFormation resolves when creating/updating the stack
4. The env var in the Lambda also keeps the full name: `SECRET_DB_PASSWORD`

> **Note:** `String` (not `SecureString`) is used because CloudFormation does not support `{{resolve:ssm-secure:...}}` in Lambda environment variables. The value is still protected by IAM — only roles with `ssm:GetParameter` permission can read it.
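The parsing rule can be sketched as follows (an illustrative reconstruction, not the library's actual parser):

```python
def split_env(text: str) -> tuple[dict, dict]:
    """Partition .env lines: plain vars vs SECRET_-prefixed secrets."""
    env, secrets = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        key, _, value = line.partition("=")
        (secrets if key.startswith("SECRET_") else env)[key] = value
    return env, secrets

def lambda_ref(app_name: str, var_name: str) -> str:
    """The value the Lambda receives: a plain-ssm dynamic reference to /{app_name}/{VAR_NAME}."""
    return f"{{{{resolve:ssm:/{app_name}/{var_name}}}}}"
```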

### Validations

| Code | Rule |
|--------|-------|
| E27 | The specified `env_file` does not exist |
| E28 | `SECRET_` variable with empty value |
| E29 | Invalid `secrets_kms` format (alias can only contain alphanumeric characters, `-`, `_`, `/`) |

---

## Flask app initialization

deployless automatically uses your `create_app()` factory to initialize each Lambda — no extra configuration needed. CORS, error handlers, middleware, and any other setup you do inside `create_app()` is inherited automatically.

### Zero-config: app factory auto-detection

If your project has a `create_app()` function in `app/__init__.py`, the generated bootstrap calls it directly:

```python
# app/__init__.py — your existing code, unchanged
def create_app() -> Flask:
    app = Flask(__name__)
    CORS(app, origins=settings.CORS_ORIGINS)
    register_error_handlers(app)
    app.register_blueprint(todo_bp)
    return app
```

deployless generates a bootstrap that calls `create_app()` transparently:

```python
# .dist/TodoFunction/bootstrap.py — auto-generated
try:
    from app import create_app as _deployless_factory
    flask_app = _deployless_factory()  # ← CORS, error handlers, etc. all applied
    del _deployless_factory
except (ImportError, AttributeError) as _e:
    # Fallback: manual bootstrap (see below)
    ...
```

For **multi-feature projects**, deployless also generates minimal stubs for the other features so `create_app()` can import all blueprints without errors — without including their actual code in the package:

```
.dist/TodoFunction/
  app/
    __init__.py              ← real (with create_app)
    features/
      todo/                  ← real code
      auth/routes.py         ← stub (empty Blueprint, ~5 lines)
      user/routes.py         ← stub (empty Blueprint, ~5 lines)
    shared/
```

### Fallback: manual bootstrap

If `create_app()` is not found or fails to import, deployless falls back to the manual bootstrap, registering blueprints directly. In this case, `cors:` and `init_app:` from `deployless.yaml` are used.

A warning is printed to stderr when the fallback is active:
```
[deployless] create_app() unavailable (...). Using manual bootstrap.
```

### init_app (legacy / override)

`init_app` in `deployless.yaml` is the explicit alternative for projects that do not use an `app/__init__.py` factory. It is only applied in the fallback path, so it is a no-op when `create_app()` succeeds.

```yaml
# deployless.yaml — only needed if you don't use create_app()
init_app:
  - app.shared.errors.register_error_handlers
  - app.shared.middleware.register_middleware
```

Each entry is a dotted `module.function` path. deployless imports and calls `function(flask_app)`:

```python
from app.shared.errors import register_error_handlers
register_error_handlers(flask_app)
```

### Validation

| Code | Rule |
|------|------|
| E31 | Each `init_app` entry must be a string in `module.function` format (e.g. `app.shared.errors.register_error_handlers`) |

---

## CLI commands

### `deployless init`

Initializes a new deployless project with an interactive wizard.

```bash
deployless init             # Creates only deployless.yaml
deployless init --app       # Also creates app/, run.py, and .gitignore
```

Prompts for project name, stage, and runtime (with sensible defaults). The `--app` flag scaffolds the full project structure ready to start coding.

### `deployless build`

Generates `template.yaml` and builds the `.dist/` packages.

```bash
deployless build

# Options:
deployless build --stage prod            # Overrides the stage
deployless build -o infra/template.yaml  # Template output path
deployless build --dry-run               # Validates without writing files
deployless build --verbose               # Detailed output
```

### `deployless validate`

Validates the project without generating any files. Equivalent to `build --dry-run` but with cleaner output.

```bash
deployless validate
deployless validate --stage prod
deployless validate --check-existing   # Verifies that resources with existing=True exist in AWS
deployless validate --verbose
```

### `deployless check`

Runs pre-flight checks before deploying: validates env vars, verifies the SAM CLI is installed, checks AWS credentials, and verifies that resources declared with `existing=True` actually exist in AWS.

```bash
deployless check
deployless check --stage prod
deployless check --verbose
```

### `deployless deploy`

Chains `deployless check` + `deployless build` + `sam build` + `sam deploy`. Requires the [AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html) to be installed.

If `samconfig.toml` does not exist (first deployment), `--guided` is added automatically so SAM prompts for the initial configuration.

```bash
deployless deploy
deployless deploy --stage prod
deployless deploy --guided          # Force wizard mode
deployless deploy --push-secrets    # Push SECRET_ vars to SSM before deploying
deployless deploy --verbose         # Detailed output
```

#### The `--stage` flag

The `--stage` flag overrides the `stage` value in `deployless.yaml` without editing the file. It affects:

- **API Gateway `StageName`** — the URL prefix (`/dev/`, `/prod/`)
- **`APP_STAGE` env var** — injected into all Lambda functions
- **`samconfig.toml`** — the stack prefix for SAM

This is especially useful in CI/CD pipelines where a single `deployless.yaml` serves multiple environments:

```yaml
# GitHub Actions — same deployless.yaml, different stage per branch
- run: deployless deploy --stage ${{ github.ref == 'refs/heads/main' && 'prod' || 'dev' }}
```

In local development you typically don't need it — `stage: dev` is the default.

### `deployless clean`

Removes the generated files (`.dist/` and `template.yaml`).

```bash
deployless clean
deployless clean -o infra/template.yaml  # If you used a different output path
```

### `deployless info`

Shows a summary of the detected project.

```bash
deployless info
```

Example output:

```
Project  : mi-ums-api
Provider : aws
Stage    : dev
Runtime  : python3.13

Features (3):
  - auth    (app/features/auth/routes.py)
  - tenant  (app/features/tenant/routes.py)
  - user    (app/features/user/routes.py)
```

### `deployless secrets push`

Pushes the `SECRET_*` variables from the `.env` file to AWS SSM Parameter Store.

```bash
deployless secrets push
deployless secrets push --stage prod
deployless secrets push --env-file .env.prod   # Overrides the env_file path from deployless.yaml
deployless secrets push --verbose
```

**Process:**
1. Reads the `.env` file (from `deployless.yaml` or `--env-file`)
2. Filters variables with the `SECRET_` prefix
3. Creates/updates SSM parameters: `/{app_name}/{VAR_NAME}` (type `String`)

> **Note:** `deployless build` does **not** push secrets automatically. Use `deployless secrets push` explicitly, or pass `--push-secrets` to `deployless build` / `deployless deploy`.

**Example:**

```env
# .env.prod
SECRET_DB_PASSWORD=mysecretpassword
SECRET_API_KEY=sk_live_xxx
```

```bash
deployless secrets push --env-file .env.prod
# Creates in SSM:
#   /mi-app/SECRET_DB_PASSWORD  (String)
#   /mi-app/SECRET_API_KEY      (String)
```

### `deployless secrets sync`

Same as `secrets push`, plus deletion of orphaned parameters in SSM. Useful for keeping SSM in sync when secrets are removed from the `.env` file.

```bash
deployless secrets sync
deployless secrets sync --stage prod
deployless secrets sync --env-file .env.prod
deployless secrets sync --yes              # Auto-confirms deletion of orphans
deployless secrets sync --verbose
```

**Behavior:**
1. Pushes all `SECRET_*` variables (same as `secrets push`)
2. Lists existing parameters under `/{app_name}/` in SSM
3. Detects parameters that are no longer in the `.env`
4. Asks for confirmation before deleting them (unless `--yes` is used)

---

## Project structure

deployless expects the following directory structure (configurable in `deployless.yaml`):

```
mi-proyecto/
├── deployless.yaml          # deployless configuration
├── requirements.txt         # Global project dependencies
├── app/
│   ├── features/            # One folder per feature
│   │   ├── auth/
│   │   │   ├── routes.py    # REQUIRED — Flask Blueprint + dpl.configure()
│   │   │   ├── use_cases/
│   │   │   ├── repositories/
│   │   │   └── schemas/
│   │   ├── user/
│   │   │   ├── routes.py
│   │   │   ├── requirements.txt  # OPTIONAL — extra dependencies for this feature
│   │   │   └── ...
│   │   └── tenant/
│   │       └── routes.py
│   └── shared/              # Shared code — copied into ALL Lambdas
│       ├── decorators/
│       ├── errors/
│       └── config.py
└── .dist/                   # Generated by deployless build (do not commit to git)
    ├── AuthFunction/
    │   ├── app/
    │   │   ├── __init__.py
    │   │   ├── features/
    │   │   │   ├── __init__.py
    │   │   │   └── auth/        # Only this feature's code
    │   │   │       ├── routes.py
    │   │   │       ├── use_cases/
    │   │   │       └── ...
    │   │   └── shared/          # Copy of app/shared/
    │   ├── bootstrap.py         # Auto-generated
    │   ├── deployless.py        # Runtime stub (no-ops)
    │   └── requirements.txt     # Merged: global + feature + aws-wsgi
    ├── UserFunction/
    └── TenantFunction/
```

### Discovery rules

- deployless scans `app/features/` looking for subdirectories with a valid entry point.
- Directories starting with `_` (e.g. `__pycache__`) are ignored.
- Features are processed in alphabetical order.
- The entry point must define at least one Flask Blueprint with at least one route.

**Entry point resolution** — deployless tries these conventions in order:

| Priority | Convention | Example |
|----------|-----------|---------|
| 1 | `routes.py` in the feature root | `features/auth/routes.py` |
| 2 | `routes/{feature}_routes.py` (screaming architecture) | `features/auth/routes/auth_routes.py` |
| 3 | Single `.py` file in `routes/` | `features/auth/routes/handler.py` |

This means you can organize your features in different ways without changing any configuration:

```
# Classic (default)
features/auth/routes.py

# Screaming architecture
features/auth/routes/auth_routes.py

# Custom name (only works if there's a single .py file in routes/)
features/auth/routes/endpoints.py
```

### The generated bootstrap

For each Lambda a `bootstrap.py` is generated that:

1. Creates a Flask app.
2. Optionally injects CORS headers via `@after_request` (if `api.cors` is configured and `flask-cors` is not in the dependencies).
3. Registers all Flask Blueprints found in `routes.py`.
4. Calls any `init_app` hooks declared in `deployless.yaml`.
5. Wraps the app with `awsgi.response()` (from the `aws-wsgi` package) to convert API Gateway events into WSGI requests.

```python
# .dist/UserFunction/bootstrap.py — auto-generated, do not edit
import sys, os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from flask import Flask
import app.features.user.routes as _routes_module
import inspect

flask_app = Flask(__name__)

# Injected when api.cors is configured and flask-cors is not in requirements
@flask_app.after_request
def _deployless_cors(response):
    if 'Access-Control-Allow-Origin' not in response.headers:
        response.headers['Access-Control-Allow-Origin'] = '*'
        response.headers['Access-Control-Allow-Methods'] = 'GET,POST,PUT,DELETE,OPTIONS'
        response.headers['Access-Control-Allow-Headers'] = 'Content-Type,Authorization'
    return response

for _name, _obj in inspect.getmembers(_routes_module):
    _klass = type(_obj)
    if _klass.__name__ == "Blueprint" and "flask" in _klass.__module__:
        flask_app.register_blueprint(_obj)

# Injected when init_app is configured in deployless.yaml
from app.shared.errors import register_error_handlers
register_error_handlers(flask_app)

try:
    import awsgi
    def handler(event, context):
        return awsgi.response(flask_app, event, context, base64_content_types={"image/png", "image/jpeg"})
except ImportError:
    raise ImportError("aws-wsgi is required.")
```

**CORS auto-injection:** if `api.cors` is configured, deployless always injects the `@after_request` hook. The hook has a guard — it only sets the headers if `Access-Control-Allow-Origin` is not already present. This means it coexists safely with `flask-cors`: if the user registers `CORS(flask_app, ...)` via `init_app`, flask-cors runs first (LIFO hook order) and the deployless hook becomes a no-op.

> **Important:** `flask-cors` in `requirements.txt` or `pyproject.toml` is not enough. The bootstrap creates its own Flask app and never calls `create_app()`, so any CORS setup in the app factory is not applied. To use `flask-cors` in Lambda, register it explicitly via `init_app`.
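
For example, a hypothetical `app/shared/cors.py` registered through the `init_app` mechanism:

```python
# app/shared/cors.py (hypothetical module name). flask-cors must also be
# listed in requirements.txt so it ships inside the Lambda package.
def init_cors(app):
    from flask_cors import CORS  # imported lazily, only at app startup
    CORS(app, origins=["https://example.com"],
         allow_headers=["Content-Type", "Authorization"])
```

Point `init_app` in `deployless.yaml` at `app.shared.cors.init_cors`; the bootstrap then calls it with the generated Flask app, and the built-in CORS fallback hook becomes a no-op.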

Each Lambda also includes a `deployless.py` with no-op implementations of all deployless functions (`configure`, `KMS`, `DynamoDB`, etc.), so that `import deployless as dpl` statements in `routes.py` do not fail at runtime without needing to install the full package.
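
A sketch of what such a stub might look like (illustrative; the real generated file covers every deployless function):

```python
# deployless.py (runtime stub): every dpl.* call is a harmless no-op, so
# `import deployless as dpl` works inside the Lambda without the real package.
def configure(*args, **kwargs):
    return None

class DynamoDB:
    def __init__(self, *args, **kwargs):
        pass

def cron(*args, **kwargs):
    # Decorator factory: returns the decorated function unchanged.
    def wrap(fn):
        return fn
    return wrap

def route(*args, **kwargs):
    def wrap(fn):
        return fn
    return wrap
```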

---

## Full example

This example uses the real app in this repository (`app/features/auth`, `user`, `tenant`).

### 1. deployless.yaml

```yaml
name: ums-api
provider: aws
stage: dev

paths:
  features: app/features
  shared: app/shared

globals:
  runtime: python3.13
  memory: 256
  timeout: 30
  log_retention: 14

api:
  endpoint_type: REGIONAL
  cors:
    allow_origin: "*"
    allow_methods: [GET, POST, PUT, DELETE, OPTIONS]
    allow_headers: [Content-Type, Authorization, X-Api-Key]

env:
  LOG_LEVEL: INFO
```

### 2. app/features/user/routes.py

```python
from flask import Blueprint, request, g, jsonify
import deployless as dpl

from app.features.user.schemas import CreateUserRequest, UpdateUserRequest
from app.features.user.use_cases import create_user, list_users, get_user, update_user, delete_user
from app.shared.decorators import require_auth, require_scopes

# ---- Lambda configuration for the "user" feature ----
dpl.configure(
    memory=512,
    timeout=30,
    description="User Management Service",
    resources={
        "users": dpl.DynamoDB(
            "ums-users",
            pk="tenant_id",
            pk_type="S",
            sk="user_id",
            sk_type="S",
            gsi=[
                {
                    "name": "EmailIndex",
                    "pk": "email",
                    "pk_type": "S",
                }
            ],
            ttl_attribute="expires_at",
            deletion_policy="Retain",
        ),
        "sessions": dpl.DynamoDB(
            "ums-sessions",
            pk="session_id",
            ttl_attribute="expires_at",
        ),
    },
    env={
        "TOKEN_EXPIRY": "3600",
    },
)

# ---- Cron: daily cleanup of expired sessions ----
@dpl.cron(
    schedule="rate(24 hours)",
    memory=128,
    timeout=60,
    description="Limpieza de sesiones expiradas",
)
def cleanup_sessions(event, context):
    # Cleanup logic
    return {"status": "ok"}

# ---- Flask Blueprint ----
user_bp = Blueprint("user_bp", __name__, url_prefix="/users")

@user_bp.route("", methods=["POST"])
@require_auth
@require_scopes(["ums:users:create"])
def create_user_route():
    data = request.get_json()
    req = CreateUserRequest(
        email=data.get("email"),
        password=data.get("password"),
        scopes=data.get("scopes", []),
    )
    response = create_user(req, g.user["tenant_id"])
    return jsonify(response.to_dict()), 201

@user_bp.route("", methods=["GET"])
@require_auth
@require_scopes(["ums:users:read"])
def list_users_route():
    response = list_users(g.user["tenant_id"])
    return jsonify(response.to_dict()), 200

@user_bp.route("/<user_id>", methods=["GET"])
@require_auth
@require_scopes(["ums:users:read"])
def get_user_route(user_id):
    response = get_user(g.user["tenant_id"], user_id)
    return jsonify(response.to_dict()), 200

@user_bp.route("/<user_id>", methods=["PUT"])
@require_auth
@require_scopes(["ums:users:update"])
def update_user_route(user_id):
    data = request.get_json()
    req = UpdateUserRequest(
        email=data.get("email"),
        password=data.get("password"),
        scopes=data.get("scopes"),
    )
    response = update_user(g.user["tenant_id"], user_id, req)
    return jsonify(response.to_dict()), 200

@user_bp.route("/<user_id>", methods=["DELETE"])
@require_auth
@require_scopes(["ums:users:delete"])
def delete_user_route(user_id):
    delete_user(g.user["tenant_id"], user_id)
    return "", 204

# ---- Split Lambda: heavy export ----
@dpl.route(memory=1024, timeout=120, description="Bulk user export")
@user_bp.route("/export", methods=["POST"])
@require_auth
@require_scopes(["ums:users:export"])
def export_users_route():
    # This endpoint will have its own Lambda
    ...
    return jsonify({"url": "https://..."}), 200
```

### 3. Run the build

```bash
deployless build --verbose
```

Expected output:

```
[deployless] Project: ums-api | Stage: dev | Provider: aws
[deployless] Features found: ['auth', 'tenant', 'user']
[deployless]   auth: 3 routes, 0 split
[deployless]   tenant: 2 routes, 0 split
[deployless]   user: 5 routes, 1 split
[deployless] Crons: ['cleanup_sessions']
[deployless] Validation passed.
[deployless]   Built: .dist/AuthFunction
[deployless]   Built: .dist/TenantFunction
[deployless]   Built: .dist/UserFunction
[deployless]   Built split route: .dist/ExportUsersRouteFunction
[deployless]   Built cron: .dist/CleanupSessionsFunction
[deployless] Template generated: /path/to/project/template.yaml
```

### 4. Deploy

```bash
# First time (SAM interactive wizard)
deployless deploy --guided --stage prod

# Subsequent deployments
deployless deploy --stage prod
```

### 5. Generated template.yaml (summary)

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: ums-api — Generated by deployless

Globals:
  Function:
    Runtime: python3.13
    MemorySize: 256
    Timeout: 30
    Environment:
      Variables:
        LOG_LEVEL: INFO
        APP_STAGE: dev

Resources:
  Api:
    Type: AWS::Serverless::Api
    Properties:
      StageName: dev
      EndpointConfiguration: REGIONAL
      Cors:
        AllowOrigin: "'*'"
        AllowMethods: "'GET,POST,PUT,DELETE,OPTIONS'"
        AllowHeaders: "'Content-Type,Authorization,X-Api-Key'"

  UserFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .dist/UserFunction/
      Handler: bootstrap.handler
      MemorySize: 512
      Timeout: 30
      Description: User Management Service
      Environment:
        Variables:
          UMS_USERS_TABLE:
            Ref: UmsUsersTable
          UMS_SESSIONS_TABLE:
            Ref: UmsSessionsTable
          TOKEN_EXPIRY: '3600'
      Events:
        UserGet:
          Type: Api
          Properties:
            RestApiId:
              Ref: Api
            Path: /users
            Method: get
        # ... more events

  UmsUsersTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: ums-users
      BillingMode: PAY_PER_REQUEST
      # ... attributes, GSI, TTL

  CleanupSessionsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .dist/CleanupSessionsFunction/
      Handler: bootstrap.handler
      MemorySize: 128
      Timeout: 60
      Events:
        Schedule:
          Type: Schedule
          Properties:
            Schedule: rate(24 hours)

Outputs:
  ApiUrl:
    Description: API Gateway endpoint URL
    Value:
      Fn::Sub: https://${Api}.execute-api.${AWS::Region}.amazonaws.com/dev
  UserFunctionArn:
    Value:
      Fn::GetAtt: [UserFunction, Arn]
  # ...
```

---

## Known notes and limitations

- **Only Flask is supported** for now. FastAPI support is planned.
- **The full feature directory tree is copied** into each Lambda (including subdirectories like `use_cases/`, `repositories/`, etc.).
- **`app/shared/` is copied in full** into each Lambda under `app/shared/`. Imports like `from app.shared.x import y` work without changes in Lambda.
- **Dependencies are not installed** during `deployless build`. `sam build` (run by `deployless deploy`) is what installs the `requirements.txt` of each package.
- **SQS and KMS resources** expand into multiple CloudFormation resources (queue + DLQ, key + alias); deployless inserts all of them into the template.
