πŸ›‘οΈ AWS EC2 Auto Scaling Group behind ELB doesn't use ELB health check🟒

  • Contextual name: πŸ›‘οΈ Auto Scaling Group behind ELB doesn't use ELB health check🟒
  • ID: /ce/ca/aws/autoscaling/group-with-elb-without-elb-health-check
  • Tags:
  • Policy Type: COMPLIANCE_POLICY
  • Policy Categories: RELIABILITY

Logic​

Similar Policies​

  • Internal: dec-x-71d45f32

Similar Internal Rules​

Rule | Policies | Flags
βœ‰οΈ dec-x-71d45f32 | 1 |

Description​


Ensure that AWS EC2 Auto Scaling Groups (ASGs) associated with Elastic Load Balancers (ELBs) are configured to use ELB health checks rather than the default EC2 health checks.

Rationale​

ELB health checks provide a more accurate and application-aware mechanism for determining instance health compared to standard EC2 status checks. By integrating directly with the load balancer, ELB health checks reflect the actual ability of instances to serve traffic. This enables Auto Scaling Groups to make more informed scaling and replacement decisions, leading to faster recovery from failures and improved application availability.

Configuring ASGs to use ELB health checks ensures that scaling decisions are based on the same health criteria used by the load balancer itself, promoting more consistent and reliable traffic distribution.

Impact​

May introduce additional configuration and management overhead compared to the default EC2 health check type.

Audit​

This policy flags an AWS EC2 Auto Scaling Group as INCOMPLIANT if:

... see more
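As a sketch of the audit check described above, the condition can be expressed over a single Auto Scaling Group record. The field names (`LoadBalancerNames`, `TargetGroupARNs`, `HealthCheckType`) follow the shape of the EC2 Auto Scaling `DescribeAutoScalingGroups` API response; the flagging logic itself is an assumption based on this policy's description:

```python
def is_incompliant(asg: dict) -> bool:
    """Flag an ASG that is attached to a load balancer (classic ELB or
    target group) but does not use ELB health checks.

    `asg` mirrors one entry of a DescribeAutoScalingGroups response.
    """
    attached_to_lb = bool(asg.get("LoadBalancerNames") or asg.get("TargetGroupARNs"))
    return attached_to_lb and asg.get("HealthCheckType") != "ELB"

# An ASG behind a target group but still on EC2 health checks is flagged:
print(is_incompliant({
    "AutoScalingGroupName": "web-asg",
    "TargetGroupARNs": ["arn:aws:elasticloadbalancing:..."],
    "HealthCheckType": "EC2",
}))  # True
```

An ASG with no load balancer attachment is not flagged, regardless of its health check type, since ELB health checks only apply when a load balancer is present.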

Remediation​


From Command Line​

By default, Amazon EC2 Auto Scaling relies on EC2 status checks to determine instance health.

  1. Configure the ASG to use ELB health checks, so that instances failing the load balancer's health criteria are automatically replaced, using the update-auto-scaling-group command:
aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name {{auto-scaling-group-name}} \
--health-check-type ELB
  2. (Optional but recommended) Configure a health check grace period (in seconds) to give newly launched instances enough time to initialize before health checks are evaluated:
aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name {{auto-scaling-group-name}} \
--health-check-grace-period 300
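The same remediation can be sketched with boto3. The update_auto_scaling_group call and its HealthCheckType / HealthCheckGracePeriod parameters are part of the real Auto Scaling API; the client is passed in as a parameter here so the function can be exercised without AWS credentials:

```python
def enable_elb_health_checks(autoscaling_client, asg_name: str,
                             grace_period: int = 300) -> None:
    """Switch an ASG to ELB health checks with a startup grace period."""
    autoscaling_client.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        HealthCheckType="ELB",
        HealthCheckGracePeriod=grace_period,
    )

# Against a real account this would be invoked as:
#   import boto3
#   enable_elb_health_checks(boto3.client("autoscaling"), "web-asg")
```

Passing both parameters in one call mirrors steps 1 and 2 above; the two CLI commands can likewise be combined into a single invocation.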

policy.yaml​

Linked Framework Sections​

Section | Sub Sections | Internal Rules | Policies | Flags | Compliance
πŸ’Ό AWS Foundational Security Best Practices v1.0.0 β†’ πŸ’Ό [AutoScaling.1] Auto Scaling groups associated with a load balancer should use ELB health checks11no data
πŸ’Ό AWS Well-Architected β†’ πŸ’Ό REL06-BP04 Automate responses (Real-time processing and alarming)3no data
πŸ’Ό AWS Well-Architected β†’ πŸ’Ό REL07-BP02 Obtain resources upon detection of impairment to a workload3no data
πŸ’Ό AWS Well-Architected β†’ πŸ’Ό REL11-BP01 Monitor all components of the workload to detect failures2no data
πŸ’Ό AWS Well-Architected β†’ πŸ’Ό REL11-BP03 Automate healing on all layers3no data
πŸ’Ό Cloudaware Framework β†’ πŸ’Ό System Configuration45no data
πŸ’Ό FedRAMP High Security Controls β†’ πŸ’Ό CA-7 Continuous Monitoring (L)(M)(H)213no data
πŸ’Ό FedRAMP High Security Controls β†’ πŸ’Ό CP-2(2) Capacity Planning (H)3no data
πŸ’Ό FedRAMP High Security Controls β†’ πŸ’Ό SI-2 Flaw Remediation (L)(M)(H)2714no data
πŸ’Ό FedRAMP Low Security Controls β†’ πŸ’Ό CA-7 Continuous Monitoring (L)(M)(H)113no data
πŸ’Ό FedRAMP Low Security Controls β†’ πŸ’Ό SI-2 Flaw Remediation (L)(M)(H)14no data
πŸ’Ό FedRAMP Moderate Security Controls β†’ πŸ’Ό CA-7 Continuous Monitoring (L)(M)(H)213no data
πŸ’Ό FedRAMP Moderate Security Controls β†’ πŸ’Ό SI-2 Flaw Remediation (L)(M)(H)214no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.AE-02: Potentially adverse events are analyzed to better understand associated activities35no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.AE-03: Information is correlated from multiple sources50no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.CM-01: Networks and network services are monitored to find potentially adverse events150no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.CM-02: The physical environment is monitored to find potentially adverse events13no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.CM-03: Personnel activity and technology usage are monitored to find potentially adverse events85no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.CM-06: External service provider activities and services are monitored to find potentially adverse events35no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό DE.CM-09: Computing hardware and software, runtime environments, and their data are monitored to find potentially adverse events149no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό ID.IM-01: Improvements are identified from evaluations26no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό ID.IM-02: Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties40no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό ID.IM-03: Improvements are identified from execution of operational processes, procedures, and activities41no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό ID.RA-01: Vulnerabilities in assets are identified, validated, and recorded31no data
πŸ’Ό NIST CSF v2.0 β†’ πŸ’Ό ID.RA-07: Changes and exceptions are managed, assessed for risk impact, recorded, and tracked31no data
πŸ’Ό NIST SP 800-53 Revision 5 β†’ πŸ’Ό CA-7 Continuous Monitoring613no data
πŸ’Ό NIST SP 800-53 Revision 5 β†’ πŸ’Ό CP-2(2) Contingency Plan _ Capacity Planning3no data
πŸ’Ό NIST SP 800-53 Revision 5 β†’ πŸ’Ό SI-2 Flaw Remediation6611no data
πŸ’Ό PCI DSS v3.2.1 β†’ πŸ’Ό 2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.5332no data
πŸ’Ό PCI DSS v4.0.1 β†’ πŸ’Ό 2.2.1 Configuration standards are developed, implemented, and maintained.13no data
πŸ’Ό PCI DSS v4.0 β†’ πŸ’Ό 2.2.1 Configuration standards are developed, implemented, and maintained.13no data