feat(infra-aws): provision cms stage/prod foundations #135
Conversation
Implement Terraform CMS AWS foundations for stage/prod with VPC, ECS Fargate, ALB+WAF, RDS Postgres, and S3+CloudFront assets. Made-with: Cursor
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Convert infra/aws into a parameterized, multi-environment Terraform platform that provisions per-environment VPC, ECS (Fargate) CMS, ALB (optional HTTPS), RDS Postgres, S3+CloudFront (OAC), IAM, WAF, Route53, provider config, lockfile, and root module instantiation for target environments.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  actor Client
  participant WAF as "AWS WAF"
  participant ALB as "ALB (HTTP/HTTPS)"
  participant ECS as "ECS (Fargate) CMS"
  participant RDS as "RDS Postgres"
  participant CloudFront as "CloudFront"
  participant S3 as "S3 (Assets)"
  Client->>WAF: Request (HTTP/HTTPS)
  WAF->>ALB: Forward if allowed
  ALB->>ECS: Route to CMS target group
  ECS->>RDS: DB query
  RDS-->>ECS: Query response
  ECS-->>ALB: HTTP response
  ALB-->>Client: Serve page
  Client->>CloudFront: Asset request
  CloudFront->>S3: Get via OAC
  S3-->>CloudFront: Asset
  CloudFront-->>Client: Cached asset
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 4
🧹 Nitpick comments (5)
infra/aws/modules/forge-platform/main.tf (3)
391-428: CloudFront distribution lacks WAF protection.

Static analysis flags that the CloudFront distribution doesn't have a WAF (AWS-0011). While the ALB has WAF protection, adding WAF to CloudFront would provide defense-in-depth for the assets CDN. This is optional depending on threat model; assets are typically public anyway.

If you want to add WAF to CloudFront, you'd need a `CLOUDFRONT`-scoped WAF ACL (must be in `us-east-1`) and associate it via `web_acl_id` on the distribution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 391 - 428, The CloudFront distribution resource aws_cloudfront_distribution.assets is missing WAF protection; create or reference a CLOUDFRONT-scoped WAFv2 Web ACL (aws_wafv2_web_acl) in us-east-1 and associate it by adding its id to the distribution's web_acl_id attribute (ensure the WAF ACL scope = "CLOUDFRONT" and is created/managed in us-east-1 or referenced from there), then update aws_cloudfront_distribution.assets to include web_acl_id = aws_wafv2_web_acl.<your_acl>.id so the distribution is protected by the WAF.
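A minimal sketch of that association, assuming a `us-east-1` provider alias and reusing the module's existing `local.name_prefix`; the ACL name and the partial distribution block are illustrative, not the module's actual code:

```hcl
# CLOUDFRONT-scoped Web ACLs must be created in us-east-1, hence the alias.
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_wafv2_web_acl" "cloudfront_assets" {
  provider = aws.use1
  name     = "${local.name_prefix}-cf-waf" # illustrative name
  scope    = "CLOUDFRONT"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "${local.name_prefix}-cf-waf"
    sampled_requests_enabled   = true
  }
}

resource "aws_cloudfront_distribution" "assets" {
  # ...existing arguments unchanged...

  # For WAFv2 ACLs, web_acl_id takes the ACL ARN.
  web_acl_id = aws_wafv2_web_acl.cloudfront_assets.arn
}
```

Note that declaring a provider alias inside a shared module is discouraged; in practice the aliased provider would be passed in from the root module.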
454-491: Consider adding AWS Managed Rules for known vulnerabilities.

The WAF rate-limiting rule is well-configured. Checkov flags missing Log4j protection (CKV_AWS_192). Consider adding the `AWSManagedRulesKnownBadInputsRuleSet` managed rule group for defense against known attack patterns, including Log4Shell.

🛡️ Example: Add managed rule group
```hcl
rule {
  name     = "aws-known-bad-inputs"
  priority = 2

  override_action {
    none {}
  }

  statement {
    managed_rule_group_statement {
      vendor_name = "AWS"
      name        = "AWSManagedRulesKnownBadInputsRuleSet"
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "${local.name_prefix}-waf-known-bad-inputs"
    sampled_requests_enabled   = true
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 454 - 491, Add an AWS managed rule group to the aws_wafv2_web_acl "alb" resource to protect against known bad inputs (e.g., Log4Shell): create a new rule (e.g., name "aws-known-bad-inputs", priority 2) inside the resource with an override_action of none and a statement using managed_rule_group_statement with vendor_name "AWS" and name "AWSManagedRulesKnownBadInputsRuleSet", and include a visibility_config using local.name_prefix for metric_name and enabling cloudwatch_metrics_enabled and sampled_requests_enabled.
104-160: Security group egress rules are permissive but typical for application workloads.

The `0.0.0.0/0` egress rules (flagged by Trivy AWS-0104) are common for ECS tasks that need to pull images, send logs to CloudWatch, and potentially call external APIs. If stricter controls are desired in prod, consider using VPC endpoints for AWS services and restricting egress to known CIDR ranges.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 104 - 160, The egress rules on aws_security_group.alb and aws_security_group.ecs_service currently allow 0.0.0.0/0; to harden this, replace the broad cidr_blocks entries in both resources with more restrictive egress controls: either restrict to your known outbound CIDRs/NAT or internal subnets, or create and use VPC endpoints (ECR, S3, CloudWatch Logs) and allow only those endpoint prefix lists/security group targets; update the egress blocks in aws_security_group.alb and aws_security_group.ecs_service to reference those tighter CIDRs or endpoint-managed prefix lists (or security_group_ids) and add a short comment explaining why each allowed destination is needed.

infra/aws/modules/forge-platform/outputs.tf (1)
9-31: Consider adding descriptions to new outputs.

The new outputs are correctly wired to their respective resources. Adding `description` attributes would improve documentation and align with the PR objective of ensuring "variables and outputs are documented and complete."

📝 Example descriptions
```diff
 output "alb_dns_name" {
+  description = "DNS name of the Application Load Balancer"
   value = aws_lb.this.dns_name
 }

 output "cms_assets_bucket_name" {
+  description = "Name of the S3 bucket for CMS assets"
   value = aws_s3_bucket.assets.bucket
 }

 output "cloudfront_distribution_domain_name" {
+  description = "Domain name of the CloudFront distribution for assets"
   value = aws_cloudfront_distribution.assets.domain_name
 }

 output "db_instance_endpoint" {
+  description = "RDS PostgreSQL instance endpoint address"
   value = aws_db_instance.cms.address
 }

 output "vpc_id" {
+  description = "ID of the VPC"
   value = aws_vpc.this.id
 }

 output "private_subnet_ids" {
+  description = "List of private subnet IDs"
   value = [for subnet in aws_subnet.private : subnet.id]
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/outputs.tf` around lines 9 - 31, The outputs (alb_dns_name, cms_assets_bucket_name, cloudfront_distribution_domain_name, db_instance_endpoint, vpc_id, private_subnet_ids) lack description attributes; add a brief description attribute to each output that explains what the value represents and its intended use (e.g., "ALB DNS name used by the frontend", "S3 bucket name for CMS assets", "CloudFront distribution domain for assets", "Database instance endpoint for CMS", "VPC ID for the Forge platform", "List of private subnet IDs for internal resources") so the outputs are documented and align with the PR goal of complete variables/outputs documentation.

infra/aws/modules/forge-platform/variables.tf (1)
8-88: Consider adding descriptions to variables for documentation completeness.

Per the PR objectives, variables should be documented. Most new variables lack `description` attributes. This is a minor polish item given variables are otherwise well-typed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/variables.tf` around lines 8 - 88, Add descriptive `description` attributes to each Terraform variable declaration (e.g., tags, cms_container_image, cms_container_port, cms_desired_count, cms_cpu, cms_memory, cms_environment_variables, db_name, db_username, db_password, db_instance_class, db_allocated_storage, db_engine_version, db_multi_az, waf_rate_limit, assets_bucket_name_override) so the module is self-documenting; for each variable include a concise sentence about its purpose and expected values/defaults, retain `sensitive = true` on db_password, and keep existing types and defaults unchanged.
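As a pattern, each declaration just gains a `description`; the variable shown here is one from the list above, but the default value is illustrative rather than the module's actual value:

```hcl
variable "db_instance_class" {
  description = "RDS instance class for the CMS database."
  type        = string
  default     = "db.t3.micro" # illustrative; keep the module's existing default
}
```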
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/aws/modules/forge-platform/main.tf`:
- Around line 257-266: The ALB currently exposes only an HTTP listener
(aws_lb_listener.http) which is insecure for production; update the Terraform to
add an HTTPS listener using an ACM certificate (create/reference
aws_acm_certificate and add aws_lb_listener for port 443 forwarding to
aws_lb_target_group.cms) and/or change the existing HTTP listener to a redirect
action to HTTPS (type = "redirect") until the ACM cert is provisioned; if this
is intentionally bootstrap-only, add a clear TODO comment and/or create a
follow-up issue documenting that aws_lb_listener.http must be replaced with an
HTTPS listener using an ACM certificate and port 443 listener forwarding to
aws_lb_target_group.cms.
- Around line 226-233: The ALB resource aws_lb.this must enable dropping invalid
HTTP header fields; update the aws_lb "this" resource to include the load
balancer attribute that turns on header validation (add a
load_balancer_attributes block with drop_invalid_header_fields = true) so the
ALB will drop invalid headers and mitigate request smuggling.
- Around line 335-353: The aws_db_instance "cms" resource is missing storage
encryption; add storage_encrypted = true to enable encryption at rest and
(optionally) set kms_key_id = var.db_kms_key_id so a customer-managed KMS key
can be used; update variable definitions (e.g., var.db_kms_key_id) and
documentation as needed and ensure deletion_protection/apply_immediately logic
remains unchanged.
In `@infra/aws/modules/forge-platform/variables.tf`:
- Around line 54-58: The variable "db_password" in variables.tf currently has a
default "replace-me" which is unsafe; either remove the default from variable
"db_password" so callers must pass an explicit value, or add a validation block
on variable "db_password" (rejecting values like "replace-me" or empty strings)
to prevent accidental deployment with the placeholder; update any
modules/consumers that assume a default to supply the password and run terraform
validate to ensure the change is handled across uses of the "db_password"
variable.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- infra/aws/.terraform.lock.hcl
- infra/aws/main.tf
- infra/aws/modules/forge-platform/main.tf
- infra/aws/modules/forge-platform/outputs.tf
- infra/aws/modules/forge-platform/variables.tf
- infra/aws/outputs.tf
- infra/aws/variables.tf
Address review findings by hardening ALB and RDS defaults, making DB password an explicit required input, and adding optional HTTPS listener support. Made-with: Cursor
Review feedback addressed.
Actionable comments posted: 3
♻️ Duplicate comments (1)
infra/aws/modules/forge-platform/main.tf (1)
382-383: ⚠️ Potential issue | 🟠 Major

Fix RDS snapshot/deletion asymmetry for production safety.

The hardcoded `skip_final_snapshot = true` creates a dangerous mismatch with the conditional `deletion_protection`. If deletion protection is later disabled on the prod database, `terraform destroy` would delete it without a final snapshot, risking irreversible data loss. Non-prod environments have no safeguards at all.

Proposed fix
```diff
- skip_final_snapshot = true
+ skip_final_snapshot       = var.environment != "prod"
+ final_snapshot_identifier = var.environment == "prod" ? "${local.name_prefix}-final" : null
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 382 - 383, The current Terraform sets skip_final_snapshot = true while deletion_protection is conditional, which risks deleting prod DBs without a final snapshot; update the aws_db_instance configuration so skip_final_snapshot is false for prod (e.g., skip_final_snapshot = var.environment != "prod") and ensure a final_snapshot_identifier is provided for prod (use a deterministic name or timestamped var) while keeping deletion_protection = var.environment == "prod"; this ensures production DBs always get a final snapshot and non-prod can skip it.
🧹 Nitpick comments (1)
infra/aws/modules/forge-platform/main.tf (1)
61-102: Single NAT gateway is a prod availability bottleneck.

Lines 61 through 102 route all private subnets through one NAT in a single AZ. For prod, consider one NAT per AZ (or a configurable NAT strategy) to avoid cross-AZ egress dependency and AZ-level egress outage risk.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 61 - 102, Current config creates a single aws_nat_gateway.this and routes all private subnets through it, creating an AZ-level single point of failure; update to provision one NAT gateway per AZ and associate each private subnet with the NAT in its same AZ by turning aws_nat_gateway.this into a count/for_each over availability zones or over aws_subnet.public (matching AZs), create a corresponding aws_eip per NAT (e.g., aws_eip.nat with same count/for_each), change aws_route_table.private to either be per-AZ (one private RT per AZ) or keep single RT per subnet mapping, and switch aws_route_table_association.private to look up/associate the correct route_table_id and nat_gateway_id for each private subnet (use aws_nat_gateway.this[count.index].id or lookup by AZ) so each private subnet uses its AZ-local NAT gateway.
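The per-AZ layout described in the prompt above could be sketched like this; `var.availability_zones` and the index alignment between public/private subnets and AZs are assumptions about the module's inputs:

```hcl
# One EIP and one NAT gateway per AZ, placed in that AZ's public subnet.
resource "aws_eip" "nat" {
  count  = length(var.availability_zones)
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  count         = length(var.availability_zones)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}

# One private route table per AZ, defaulting egress through the AZ-local NAT.
resource "aws_route_table" "private" {
  count  = length(var.availability_zones)
  vpc_id = aws_vpc.this.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.this[count.index].id
  }
}

# Associate each private subnet with the route table in its own AZ.
resource "aws_route_table_association" "private" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```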
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/aws/main.tf`:
- Around line 27-29: The CI terraform-plan workflow fails because the required
Terraform variable db_password (declared in infra/aws/variables.tf and
infra/aws/modules/forge-platform/variables.tf) is not provided; update the
.github/workflows/terraform-plan.yml job that runs terraform plan
-var="environment=preview" to supply db_password either by setting
TF_VAR_db_password in the job environment or by appending -var="db_password=${{
secrets.DB_PASSWORD }}" (or another secret) to the terraform plan command so the
db_password variable is injected during plan.
In `@infra/aws/modules/forge-platform/main.tf`:
- Around line 428-533: Introduce two boolean variables (enable_cloudfront =
true, enable_waf = true) and use them to conditionally create the CloudFront and
WAF resources: set count = var.enable_cloudfront ? 1 : 0 on
aws_cloudfront_distribution.assets and related resources/data
(data.aws_iam_policy_document.assets_bucket and aws_s3_bucket_policy.assets) and
set count = var.enable_waf ? 1 : 0 on aws_wafv2_web_acl.alb and
aws_wafv2_web_acl_association.alb; update all cross-references to use index 0
(e.g., aws_cloudfront_distribution.assets[0].arn, aws_wafv2_web_acl.alb[0].arn)
where the resources are now conditional, and ensure the S3 bucket policy/data
document is adjusted so it only includes the CloudFront-specific statement when
var.enable_cloudfront is true (use conditional expressions or separate document
resource guarded by the same count).
In `@infra/aws/modules/forge-platform/variables.tf`:
- Around line 65-69: The db_password variable lacks validation and may accept
empty or whitespace-only values; add a Terraform validation block to the
variable "db_password" that checks length(trimspace(var.db_password)) > 0 and
returns a clear error message (e.g., "db_password must not be empty or
whitespace") so invalid inputs are rejected immediately; keep the variable type
string and sensitive = true.
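The `db_password` validation suggested above can be sketched directly on the variable, keeping its existing type and sensitivity:

```hcl
variable "db_password" {
  type      = string
  sensitive = true

  validation {
    condition     = length(trimspace(var.db_password)) > 0
    error_message = "db_password must not be empty or whitespace."
  }
}
```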
📒 Files selected for processing (5)
- infra/aws/main.tf
- infra/aws/modules/forge-platform/main.tf
- infra/aws/modules/forge-platform/outputs.tf
- infra/aws/modules/forge-platform/variables.tf
- infra/aws/variables.tf
🚧 Files skipped from review as they are similar to previous changes (1)
- infra/aws/variables.tf
Harden DB credential handling by removing external password inputs, enabling RDS managed master-user secrets, and injecting DATABASE_PASSWORD to ECS via Secrets Manager with least-privilege task-role access. Made-with: Cursor
Follow-up hardening update pushed.
Actionable comments posted: 1
♻️ Duplicate comments (2)
infra/aws/modules/forge-platform/main.tf (2)
446-558: ⚠️ Potential issue | 🟠 Major

CloudFront/WAF feature toggles are still missing.
These resources are unconditional, so they cannot be disabled per environment as intended for environment-aware module controls.
As per coding guidelines:
infra/**/*.{tf,tfvars}: Keep Terraform modules explicit and environment inputs parameterized.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 446 - 558, The CloudFront and WAF resources (aws_cloudfront_origin_access_control.assets, aws_cloudfront_distribution.assets, data.aws_iam_policy_document.assets_bucket, aws_s3_bucket_policy.assets, aws_wafv2_web_acl.alb, aws_wafv2_web_acl_association.alb) are created unconditionally; add boolean feature flags (e.g. var.enable_cloudfront and var.enable_waf) and guard each resource with count = var.enable_cloudfront ? 1 : 0 or count = var.enable_waf ? 1 : 0 so they can be disabled per environment, update dependent references (use aws_cloudfront_distribution.assets[count.index].arn or conditional lookup) and ensure the S3 bucket policy and any associations reference the conditional resource only when enabled (or fallback to no-op) to avoid dangling references when features are turned off.
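The toggle pattern from the prompt above could be sketched as follows; the flag default and the partial resource/output bodies are illustrative:

```hcl
variable "enable_cloudfront" {
  description = "Whether to create the CloudFront distribution for assets."
  type        = bool
  default     = true # assumed default so existing environments are unaffected
}

resource "aws_cloudfront_distribution" "assets" {
  count = var.enable_cloudfront ? 1 : 0
  # ...existing arguments unchanged...
}

# Downstream references must index the now-conditional resource.
output "cloudfront_distribution_domain_name" {
  value = var.enable_cloudfront ? aws_cloudfront_distribution.assets[0].domain_name : null
}
```

An `enable_waf` flag would follow the same shape for `aws_wafv2_web_acl.alb` and its association.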
395-408: ⚠️ Potential issue | 🟠 Major

Conditionally skip final snapshots based on environment to prevent accidental prod data loss.

Line 407 hardcodes `skip_final_snapshot = true` for all environments. While `deletion_protection` is enabled in prod, this provides only single-layer safety. If deletion protection is ever removed or the code path changed, the database could be destroyed without a final snapshot, causing permanent data loss.

Condition the snapshot behavior on environment with a named identifier for prod:
Proposed fix
```diff
- skip_final_snapshot = true
+ skip_final_snapshot       = var.environment != "prod"
+ final_snapshot_identifier = var.environment == "prod" ? "${local.name_prefix}-final-snapshot" : null
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 395 - 408, The aws_db_instance resource aws_db_instance.cms currently sets skip_final_snapshot = true unconditionally, which risks losing prod data; change this to be conditional on the environment by using var.environment (and/or a dedicated var like var.is_prod) so skip_final_snapshot is true for non-prod and false for prod, while keeping deletion_protection = var.environment == "prod"; update aws_db_instance.cms to compute skip_final_snapshot from the environment check (e.g., skip_final_snapshot = var.environment != "prod" or via a new boolean var) so prod instances always take a final snapshot.
🧹 Nitpick comments (1)
infra/aws/modules/forge-platform/main.tf (1)
61-102: Single NAT gateway is an AZ-level egress SPOF for private workloads.

For prod resilience, consider one NAT gateway per AZ with per-AZ private route tables.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/forge-platform/main.tf` around lines 61 - 102, The current setup uses a single aws_nat_gateway.platform and one aws_route_table.private, which makes private subnets single-AZ egress SPOF; update to create one NAT gateway per AZ and a corresponding private route table per AZ and associate each private subnet with its AZ-specific route table: replace aws_nat_gateway.platform with a per-AZ resource (use for_each or count over availability zones and create an aws_eip per NAT), create aws_route_table.private as per-AZ (for_each/count) with nat_gateway_id referencing the AZ-specific aws_nat_gateway, and update aws_route_table_association.private to associate each aws_subnet.private[count.index] with its matching per-AZ aws_route_table.private. Ensure tags/names include the AZ index/key so resources are distinguishable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/aws/modules/forge-platform/main.tf`:
- Around line 352-356: The DATABASE_PASSWORD secret currently points to
aws_db_instance.cms.master_user_secret[0].secret_arn which will inject the
entire Secrets Manager JSON; update the valueFrom to reference only the password
JSON field so the env var contains a plain password string (use the Secrets
Manager JSON key selector syntax with the same secret ARN and the password key,
e.g. append the SecretString/json key selector for "password" to the ARN).
Ensure you modify the secrets entry for DATABASE_PASSWORD (the secrets block
where name = "DATABASE_PASSWORD") to use that ARN+JSON-key selector so only the
password value is injected.
📒 Files selected for processing (6)
- infra/aws/main.tf
- infra/aws/modules/forge-platform/main.tf
- infra/aws/modules/forge-platform/outputs.tf
- infra/aws/modules/forge-platform/variables.tf
- infra/aws/outputs.tf
- infra/aws/variables.tf
🚧 Files skipped from review as they are similar to previous changes (2)
- infra/aws/variables.tf
- infra/aws/main.tf
Refactor root stack files by concern and wire delegated Route53, ACM certificates, HTTPS ALB listeners, and CloudFront custom domains for prod/stage CMS and assets hostnames. Made-with: Cursor
Pushed follow-up.

What was added:

Also preserved requested CIDR mapping:

Validation:
Simplify AWS provider configuration by removing the extra alias provider and set the stack default region to us-east-2. Made-with: Cursor
Applied requested provider simplification.

Terraform still validates.
Move the full infrastructure implementation into `modules/cms`, route root through `modules/forge`, and remove the legacy `modules/forge-platform` layout. Made-with: Cursor
Actionable comments posted: 2
♻️ Duplicate comments (1)
infra/aws/modules/cms/main.tf (1)
334-339: ⚠️ Potential issue | 🔴 Critical

Extract only the password field from the RDS secret JSON.

The `valueFrom` at line 337 points to the full secret ARN without a JSON key selector. ECS will inject the entire Secrets Manager JSON payload (containing `username`, `password`, `engine`, `host`, etc.) into the `DATABASE_PASSWORD` environment variable. Applications expecting a plain password string will fail authentication.

🔧 Proposed fix
```diff
 secrets = [
   {
     name      = "DATABASE_PASSWORD"
-    valueFrom = aws_db_instance.cms.master_user_secret[0].secret_arn
+    valueFrom = "${aws_db_instance.cms.master_user_secret[0].secret_arn}:password::"
   }
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` around lines 334 - 339, The ECS task secret is currently pointing at the full Secrets Manager ARN (aws_db_instance.cms.master_user_secret[0].secret_arn) which will inject the whole JSON payload; change the secret reference so only the password JSON field is injected into DATABASE_PASSWORD by using the Secrets Manager JSON key selector syntax (append the SecretString JSON key for password to the ARN, e.g. use the secret ARN with the SecretString:password selector) inside the secrets block that defines name = "DATABASE_PASSWORD" so the container receives only the plaintext password.
🧹 Nitpick comments (3)
infra/aws/route53.tf (1)
1-3: Consider enabling DNSSEC for the public hosted zone.

DNSSEC signing helps prevent DNS spoofing and cache poisoning attacks. While the PR notes mention some security hardening was deferred, consider adding DNSSEC in a follow-up:
🔒 Example DNSSEC configuration
resource "aws_route53_zone" "cms" { name = var.delegated_zone_name } + +resource "aws_route53_key_signing_key" "cms" { + hosted_zone_id = aws_route53_zone.cms.id + key_management_service_arn = aws_kms_key.dnssec.arn + name = "cms-dnssec-ksk" +} + +resource "aws_route53_hosted_zone_dnssec" "cms" { + hosted_zone_id = aws_route53_zone.cms.id + depends_on = [aws_route53_key_signing_key.cms] +}Note: DNSSEC requires a KMS key in us-east-1 with specific key policy for Route53 signing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/route53.tf` around lines 1 - 3, The public Route53 zone defined as aws_route53_zone "cms" (name = var.delegated_zone_name) should be configured for DNSSEC signing; add a KMS key in us-east-1 with the required Route53 key policy and create an aws_route53_key_signing_key resource that references the cms zone's id and the KMS key ARN (set status to ACTIVE) so Route53 can sign the zone; ensure the KMS key policy allows Route53 to use the key for signing and document that the KMS key must live in us-east-1 for DNSSEC to work with Route53.infra/aws/modules/cms/main.tf (2)
435-475: CloudFront distribution lacks WAF protection (intentional per PR scope).
Static analysis flags that the CloudFront distribution doesn't have a WAF attached. Per the PR comments, this was intentionally deferred for separate design discussion. When implemented, attach a WAFv2 Web ACL with `scope = "CLOUDFRONT"` (requires a `us-east-1` provider) and reference it via `web_acl_id`.
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` around lines 435 - 475, The CloudFront distribution resource aws_cloudfront_distribution.assets currently has no WAF attached; to add protection later create an aws_wafv2_web_acl (with scope = "CLOUDFRONT" using an aws provider configured for us-east-1) and set its ID on the distribution via the web_acl_id attribute of aws_cloudfront_distribution.assets; ensure the WAFv2 resource and provider are declared and then reference the web_acl_id in the aws_cloudfront_distribution.assets block.
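A minimal sketch of that deferred wiring, assuming a provider alias `aws.us_east_1` already exists; the ACL name and managed rule set are illustrative choices, not part of this PR:

```hcl
# Hypothetical sketch: CloudFront-scoped WAF must be created in us-east-1.
resource "aws_wafv2_web_acl" "assets" {
  provider = aws.us_east_1
  name     = "assets-cdn"
  scope    = "CLOUDFRONT"

  default_action {
    allow {}
  }

  rule {
    name     = "aws-managed-common"
    priority = 1

    # Managed rule groups use override_action rather than action.
    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "assets-cdn-common"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "assets-cdn"
    sampled_requests_enabled   = true
  }
}

resource "aws_cloudfront_distribution" "assets" {
  # ...existing distribution configuration elided...

  # For WAFv2, web_acl_id takes the ACL's ARN (not its ID).
  web_acl_id = aws_wafv2_web_acl.assets.arn
}
```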
389-389: Consider environment-aware final snapshot behavior for prod.
`skip_final_snapshot = true` applies to all environments, including prod. While `deletion_protection = true` guards prod against accidental deletion, if deletion protection is ever disabled, the database could be destroyed without a recovery snapshot.
🛡️ Proposed fix
```diff
-  skip_final_snapshot = true
+  skip_final_snapshot       = var.environment != "prod"
+  final_snapshot_identifier = var.environment == "prod" ? "${local.name_prefix}-final-snapshot" : null
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` at line 389, The Terraform RDS setting skip_final_snapshot = true unconditionally disables final snapshots; change it to be environment-aware by replacing the literal with a variable or conditional (e.g., skip_final_snapshot = var.env == "prod" ? false : true or skip_final_snapshot = var.skip_final_snapshot) in the same resource where skip_final_snapshot is defined, and ensure deletion_protection (and/or lifecycle.prevent_destroy) remains enabled for prod so prod still requires an explicit action to remove the DB and will create a final snapshot if deletion protection is ever turned off.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/aws/locals.tf`:
- Around line 1-11: The pipeline fails because locals.app_domains and
locals.assets_domains only define keys "prod" and "stage" while var.environment
(or entries in var.environments) can contain other values; update the Terraform
to either validate the environment variable or generate the maps dynamically:
add a variable validation that ensures var.environment (if set) is contained in
var.environments using contains(var.environments, var.environment) and a clear
error message, or replace the static app_domains and assets_domains with
for-expressions that build a map from var.environments (e.g., a { for env in
var.environments : env => env == "prod" ? var.delegated_zone_name :
"stage.${var.delegated_zone_name}" } pattern and similarly for assets_domains
using "assets..." prefix) so keys always align with target_environments and
avoid Invalid index errors when referencing local.app_domains[each.value] or
local.assets_domains[each.value].
In `@infra/aws/modules/cms/certificates.tf`:
- Around line 33-63: The ACM certificate for CloudFront is being created in the
default region (var.aws_region) but must exist in us-east-1; update the
resources so aws_acm_certificate.assets is created using an AWS provider aliased
for us-east-1 (e.g., provider "aws" alias "us_east_1"), and ensure any related
resources used for validation (aws_route53_record.assets_cert_validation and
aws_acm_certificate_validation.assets) reference the appropriate provider or
remain with the global Route53 provider as needed; modify the
aws_acm_certificate.assets resource to use provider = aws.us_east_1 and, if
necessary, attach the same provider alias to
aws_acm_certificate_validation.assets (or keep validation records under
Route53/global provider) so CloudFront can use the certificate from us-east-1.
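The us-east-1 provider aliasing described above can be sketched as follows; the alias name and the `assets_domain` variable are illustrative:

```hcl
# Hypothetical sketch: alias a second provider in us-east-1, since CloudFront
# only accepts ACM certificates issued in that region.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "assets" {
  provider          = aws.us_east_1
  domain_name       = var.assets_domain
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}
```

If the certificate lives inside a child module, the module must declare the alias via `configuration_aliases` in its `required_providers` block and receive it through the `providers` argument at the call site.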
---
Duplicate comments:
In `@infra/aws/modules/cms/main.tf`:
- Around line 334-339: The ECS task secret is currently pointing at the full
Secrets Manager ARN (aws_db_instance.cms.master_user_secret[0].secret_arn) which
will inject the whole JSON payload; change the secret reference so only the
password JSON field is injected into DATABASE_PASSWORD by using the Secrets
Manager JSON key selector syntax (append the SecretString JSON key for password
to the ARN, e.g. use the secret ARN with the SecretString:password selector)
inside the secrets block that defines name = "DATABASE_PASSWORD" so the
container receives only the plaintext password.
---
Nitpick comments:
In `@infra/aws/modules/cms/main.tf`:
- Around line 435-475: The CloudFront distribution resource
aws_cloudfront_distribution.assets currently has no WAF attached; to add
protection later create an aws_wafv2_web_acl (with scope = "CLOUDFRONT" using an
aws provider configured for us-east-1) and set its ID on the distribution via
the web_acl_id attribute of aws_cloudfront_distribution.assets; ensure the WAFv2
resource and provider are declared and then reference the web_acl_id in the
aws_cloudfront_distribution.assets block.
- Line 389: The Terraform RDS setting skip_final_snapshot = true unconditionally
disables final snapshots; change it to be environment-aware by replacing the
literal with a variable or conditional (e.g., skip_final_snapshot = var.env ==
"prod" ? false : true or skip_final_snapshot = var.skip_final_snapshot) in the
same resource where skip_final_snapshot is defined, and ensure
deletion_protection (and/or lifecycle.prevent_destroy) remains enabled for prod
so prod still requires an explicit action to remove the DB and will create a
final snapshot if deletion protection is ever turned off.
In `@infra/aws/route53.tf`:
- Around line 1-3: The public Route53 zone defined as aws_route53_zone "cms"
(name = var.delegated_zone_name) should be configured for DNSSEC signing; add a
KMS key in us-east-1 with the required Route53 key policy and create an
aws_route53_key_signing_key resource that references the cms zone's id and the
KMS key ARN (set status to ACTIVE) so Route53 can sign the zone; ensure the KMS
key policy allows Route53 to use the key for signing and document that the KMS
key must live in us-east-1 for DNSSEC to work with Route53.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (15)
- infra/aws/locals.tf
- infra/aws/main.tf
- infra/aws/module.tf
- infra/aws/modules/cms/certificates.tf
- infra/aws/modules/cms/dns.tf
- infra/aws/modules/cms/main.tf
- infra/aws/modules/cms/outputs.tf
- infra/aws/modules/cms/variables.tf
- infra/aws/modules/forge/main.tf
- infra/aws/modules/forge/outputs.tf
- infra/aws/modules/forge/variables.tf
- infra/aws/outputs.tf
- infra/aws/providers.tf
- infra/aws/route53.tf
- infra/aws/variables.tf
🚧 Files skipped from review as they are similar to previous changes (1)
- infra/aws/main.tf
Extract assets infrastructure into `modules/assets` and keep `modules/cms` focused on runtime/app resources while preserving existing forge outputs. Made-with: Cursor
Actionable comments posted: 1
♻️ Duplicate comments (2)
infra/aws/modules/cms/main.tf (2)
376-396: ⚠️ Potential issue | 🟠 Major
Consider conditional final snapshot for production.
`skip_final_snapshot = true` applies to all environments including production. If the RDS instance is accidentally deleted in prod, data will be permanently lost without a final snapshot.
🔧 Proposed fix
```diff
   db_subnet_group_name   = aws_db_subnet_group.cms.name
   vpc_security_group_ids = [aws_security_group.rds.id]
-  skip_final_snapshot    = true
+  skip_final_snapshot       = var.environment != "prod"
+  final_snapshot_identifier = var.environment == "prod" ? "${local.name_prefix}-final-snapshot" : null
   deletion_protection = var.environment == "prod"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` around lines 376 - 396, The current aws_db_instance "cms" sets skip_final_snapshot = true for all environments, which will permanently delete production data if removed; change this to be conditional so that skip_final_snapshot is false in prod (e.g., skip_final_snapshot = var.environment != "prod"), and add a final snapshot identifier (final_snapshot_identifier) or allow AWS to create one for prod; keep deletion_protection = var.environment == "prod" as is to avoid accidental deletions. Ensure the changes are made on the aws_db_instance.cms resource and update any related variables or naming (local.name_prefix) used to construct the snapshot identifier.
333-338: ⚠️ Potential issue | 🔴 Critical
Extract only the password field from the RDS managed secret.
The `valueFrom` references the full secret ARN without a JSON key selector. RDS-managed secrets store JSON containing `username`, `password`, `engine`, `host`, `port`, etc. ECS will inject the entire JSON blob into `DATABASE_PASSWORD`, causing authentication failures when the application expects a plain string.
🐛 Proposed fix
```diff
   secrets = [
     {
       name      = "DATABASE_PASSWORD"
-      valueFrom = aws_db_instance.cms.master_user_secret[0].secret_arn
+      valueFrom = "${aws_db_instance.cms.master_user_secret[0].secret_arn}:password::"
     }
   ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` around lines 333 - 338, The ECS secret currently points to the RDS secret ARN (aws_db_instance.cms.master_user_secret[0].secret_arn) which contains a JSON blob; change the secrets mapping so ECS pulls only the JSON "password" field rather than the whole secret by referencing the secret ARN with the Secrets Manager JSON key selector (i.e., append the SecretString/json-key selector for "password" to the ARN) so DATABASE_PASSWORD receives only the plain password string.
🧹 Nitpick comments (3)
infra/aws/modules/cms/variables.tf (1)
1-109: Well-structured variable definitions with clear documentation.
All variables include descriptions and appropriate defaults. The removal of `db_password` in favor of Secrets Manager-managed credentials is a good security practice.
Consider adding validation blocks for critical inputs like `environment` to enforce allowed values:
```hcl
variable "environment" {
  description = "Deployment environment name."
  type        = string

  validation {
    condition     = contains(["stage", "prod"], var.environment)
    error_message = "Environment must be 'stage' or 'prod'."
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/variables.tf` around lines 1 - 109, Add an input validation block to the variable "environment" to enforce allowed values (e.g., "stage" and "prod") so callers cannot pass invalid environment names; update the variable "environment" declaration to include a validation { condition = contains(["stage","prod"], var.environment) error_message = "Environment must be 'stage' or 'prod'." } (use the variable name "environment" and the var.environment symbol in the condition).infra/aws/modules/cms/main.tf (1)
161-177: RDS security group missing explicit egress rule.
The RDS security group defines ingress but no egress. While AWS allows all egress by default, an explicit minimal egress rule improves security posture documentation and prevents unintended changes if defaults change.
♻️ Proposed addition
```diff
   ingress {
     description     = "ECS to Postgres"
     from_port       = 5432
     to_port         = 5432
     protocol        = "tcp"
     security_groups = [aws_security_group.ecs_service.id]
   }
+
+  egress {
+    description = "No outbound required"
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+    cidr_blocks = []
+  }
+
   tags = merge(local.tags, {
```
Note: An empty `cidr_blocks` effectively denies all egress, which is appropriate for RDS since it doesn't need to initiate outbound connections.
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/main.tf` around lines 161 - 177, The aws_security_group.rds resource defines ingress but no explicit egress; add an explicit minimal egress block to document and lock down outbound behavior. Inside the aws_security_group.rds resource, add an egress block (e.g., description = "deny all outbound", from_port = 0, to_port = 0, protocol = "-1", cidr_blocks = []) so the security group explicitly denies outbound traffic rather than relying on AWS defaults.infra/aws/modules/cms/outputs.tf (1)
26-29: Consider defensive indexing for `master_user_secret`.
The `[0]` index assumes `manage_master_user_password = true` is set on the RDS instance. Per the PR, this is currently the case, so this will work correctly. However, if that configuration ever changes, this output would fail at plan time.
For added robustness, you could use `try()`:
```hcl
value = try(aws_db_instance.cms.master_user_secret[0].secret_arn, null)
```
This is a minor defensive suggestion—current implementation is correct given the RDS configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/aws/modules/cms/outputs.tf` around lines 26 - 29, The output "db_master_secret_arn" currently indexes aws_db_instance.cms.master_user_secret[0].secret_arn directly which will fail if manage_master_user_password is ever disabled; wrap the access in a defensive try expression so the value gracefully falls back (e.g., returns null) when master_user_secret[0] is absent, keeping the output robust without changing RDS behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/aws/modules/assets/certificates.tf`:
- Around line 1-31: The ACM certificate is being created in the module's default
region (us-east-2) but CloudFront requires the certificate in us-east-1; update
the module so aws_acm_certificate.assets (and
aws_acm_certificate_validation.assets if it must reference the same provider)
are created with an AWS provider configured for us-east-1 (use a provider alias
like "us_east_1" in your providers.tf and attach provider = aws.us_east_1 to the
aws_acm_certificate resource), leaving aws_route53_record.assets_cert_validation
using the default/global Route53 provider; alternatively, move creation of the
certificate into the root module using a us-east-1 provider and pass the
certificate ARN into this assets module as an input variable for CloudFront
consumption.
---
Duplicate comments:
In `@infra/aws/modules/cms/main.tf`:
- Around line 376-396: The current aws_db_instance "cms" sets
skip_final_snapshot = true for all environments, which will permanently delete
production data if removed; change this to be conditional so that
skip_final_snapshot is false in prod (e.g., skip_final_snapshot =
var.environment != "prod"), and add a final snapshot identifier
(final_snapshot_identifier) or allow AWS to create one for prod; keep
deletion_protection = var.environment == "prod" as is to avoid accidental
deletions. Ensure the changes are made on the aws_db_instance.cms resource and
update any related variables or naming (local.name_prefix) used to construct the
snapshot identifier.
- Around line 333-338: The ECS secret currently points to the RDS secret ARN
(aws_db_instance.cms.master_user_secret[0].secret_arn) which contains a JSON
blob; change the secrets mapping so ECS pulls only the JSON "password" field
rather than the whole secret by referencing the secret ARN with the Secrets
Manager JSON key selector (i.e., append the SecretString/json-key selector for
"password" to the ARN) so DATABASE_PASSWORD receives only the plain password
string.
---
Nitpick comments:
In `@infra/aws/modules/cms/main.tf`:
- Around line 161-177: The aws_security_group.rds resource defines ingress but
no explicit egress; add an explicit minimal egress block to document and lock
down outbound behavior. Inside the aws_security_group.rds resource, add an
egress block (e.g., description = "deny all outbound", from_port = 0, to_port =
0, protocol = "-1", cidr_blocks = []) so the security group explicitly denies
outbound traffic rather than relying on AWS defaults.
In `@infra/aws/modules/cms/outputs.tf`:
- Around line 26-29: The output "db_master_secret_arn" currently indexes
aws_db_instance.cms.master_user_secret[0].secret_arn directly which will fail if
manage_master_user_password is ever disabled; wrap the access in a defensive try
expression so the value gracefully falls back (e.g., returns null) when
master_user_secret[0] is absent, keeping the output robust without changing RDS
behavior.
In `@infra/aws/modules/cms/variables.tf`:
- Around line 1-109: Add an input validation block to the variable "environment"
to enforce allowed values (e.g., "stage" and "prod") so callers cannot pass
invalid environment names; update the variable "environment" declaration to
include a validation { condition = contains(["stage","prod"], var.environment)
error_message = "Environment must be 'stage' or 'prod'." } (use the variable
name "environment" and the var.environment symbol in the condition).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- infra/aws/modules/assets/certificates.tf
- infra/aws/modules/assets/dns.tf
- infra/aws/modules/assets/main.tf
- infra/aws/modules/assets/outputs.tf
- infra/aws/modules/assets/variables.tf
- infra/aws/modules/cms/certificates.tf
- infra/aws/modules/cms/dns.tf
- infra/aws/modules/cms/main.tf
- infra/aws/modules/cms/outputs.tf
- infra/aws/modules/cms/variables.tf
- infra/aws/modules/forge/main.tf
- infra/aws/modules/forge/outputs.tf
🚧 Files skipped from review as they are similar to previous changes (2)
- infra/aws/modules/forge/main.tf
- infra/aws/modules/cms/dns.tf
Use explicit platform naming by moving module composition to `modules/platform` and updating root references and outputs accordingly. Made-with: Cursor
Relocate VPC, subnets, routing, NAT, and shared security groups from cms to platform, then pass network IDs into cms as explicit module inputs. Made-with: Cursor
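A sketch of how such explicit network inputs might be passed; the module, input, and output names here are assumptions for illustration:

```hcl
# Hypothetical sketch: platform owns the network; cms receives IDs as inputs.
module "cms" {
  source = "./modules/cms"

  environment        = var.environment
  vpc_id             = module.platform.vpc_id
  private_subnet_ids = module.platform.private_subnet_ids
  alb_security_group_id = module.platform.alb_security_group_id
}
```

Passing IDs explicitly (rather than having cms read network state itself) keeps the dependency direction one-way and makes the cms module testable against any VPC.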
Move module wiring and Route53 zone blocks into `infra/aws/main.tf` and remove now-redundant split files. Made-with: Cursor
Remove root locals and compute app/assets hostnames inside platform from the forge delegated zone, while renaming Route53 zone resource to `forge`. Made-with: Cursor
Adjust stage app hostname derivation to `cms.stage.{zone}` while keeping prod at `cms.{zone}`. Made-with: Cursor
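That derivation might be sketched as follows; the variable names are assumptions:

```hcl
# Hypothetical sketch: prod stays at cms.{zone}; other environments get
# cms.{env}.{zone}, e.g. cms.stage.example.com.
locals {
  app_hostname = (
    var.environment == "prod"
    ? "cms.${var.delegated_zone_name}"
    : "cms.${var.environment}.${var.delegated_zone_name}"
  )
}
```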
Reviewed both threads. Status:
Remaining actionable CodeRabbit items in current HEAD:
If you want, I can push a small follow-up commit now that resolves exactly these three.
Resolve the remaining actionable review items by fixing ECS secret extraction, making prod DB deletions snapshot-safe, and wiring us-east-1 ACM for CloudFront. Made-with: Cursor
Review feedback addressed (7712618)
Fixed:
Validation:
Not changed in this commit:
Centralize ALB/WAF ownership in the platform module while keeping CMS hostname certificate and DNS pointer logic in cms, and include the agreed security hardening updates for logging, backup posture, and egress controls. Made-with: Cursor
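The ALB access-logging part of that hardening could be sketched as follows; the bucket and resource names are assumptions:

```hcl
# Hypothetical sketch: shared ALB owned by platform, with access logs to S3.
resource "aws_lb" "platform" {
  name               = "platform"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
  security_groups    = [aws_security_group.alb.id]

  access_logs {
    bucket  = aws_s3_bucket.alb_logs.bucket
    enabled = true
  }
}
```

Note that the log bucket needs a bucket policy granting the regional ELB log-delivery service write access before `access_logs` will apply cleanly.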
Add an infra package with lint commands for aws and vercel Terraform stacks, and expose a root lint:infra shortcut for local and CI-friendly validation. Made-with: Cursor
Move the host-based listener rule definition into cms while keeping shared ALB ownership in platform, so app-level hostname routing remains defined with cms. Made-with: Cursor
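The host-based rule living in cms might look like this sketch; the listener ARN input, priority, and hostname local are assumptions:

```hcl
# Hypothetical sketch: cms attaches its own host-routing rule to the shared
# listener owned by platform.
resource "aws_lb_listener_rule" "app_host" {
  listener_arn = var.https_listener_arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.cms.arn
  }

  condition {
    host_header {
      values = [local.app_hostname]
    }
  }
}
```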
Rename platform module resources and internal module references to generic platform/application identifiers while preserving existing routing behavior. Made-with: Cursor
Remove remaining cms_container naming from platform internals by standardizing on app_container inputs and wiring cms module to the same app-prefixed schema. Made-with: Cursor
Register infra as a workspace package and route lint:infra through turbo so infrastructure linting participates in the monorepo task graph. Made-with: Cursor
Keep lint entrypoint unified under the existing root lint command and remove the extra lint:infra alias. Made-with: Cursor
Shift target group and ALB-to-app security-group rules into cms, and remove app runtime pass-through variables from platform so app config lives in module. Made-with: Cursor
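The ALB-to-app security-group rule moved into cms could be sketched as follows; the input names are assumptions, and port 1337 is Strapi's default:

```hcl
# Hypothetical sketch: cms owns the rule allowing the shared ALB to reach
# its own app security group.
resource "aws_security_group_rule" "alb_to_app" {
  type                     = "ingress"
  from_port                = 1337
  to_port                  = 1337
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app.id
  source_security_group_id = var.alb_security_group_id
}
```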
Implemented your latest review changes in commit.
Addressed:
Validation:
Point ALB target-group health checks at Strapi's native /_health endpoint so service health reflects application readiness without custom routes. Made-with: Cursor
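That health-check change might be sketched as follows; the tuning values are assumptions, and Strapi's `/_health` endpoint returns 204 No Content by default:

```hcl
# Hypothetical sketch: target group health check against Strapi's native route.
resource "aws_lb_target_group" "cms" {
  name        = "cms"
  port        = 1337
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id

  health_check {
    path                = "/_health"
    matcher             = "204"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}
```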
Lock CMS module runtime settings to Strapi defaults, remove pass-through app tuning variables, and align ALB<->CMS SG rule naming with review feedback. Made-with: Cursor
Review feedback addressed (62209a3)
Fixed:
Deferred / ticketed:
Kept those deferred items out of this PR to avoid expanding scope; captured all as follow-up issues in the epic stream.
Summary
`infra/aws` for stage/prod via `modules/forge-platform`.
Resolves #70
Contracts Changed
Regeneration Required
Validation
Made with Cursor
Summary by CodeRabbit
New Features
Chores