AWS Security Mistakes: 5 Critical Vulnerabilities Costing Companies Millions

Discover 5 critical AWS security mistakes that have cost companies millions. Learn how to fix S3, IAM, RDS & MFA vulnerabilities now.

When it comes to AWS security, users and organizations make the same common mistakes over and over when deploying solutions in their AWS accounts. This post covers the five most common security mistakes I saw while working at Amazon Web Services as a Senior Security Solutions Architect. Avoiding them can save companies millions of dollars in lost revenue, breach remediation costs, and legal liability from data breaches.

The five key areas of security I will discuss in more detail are object storage, database security, IAM permissions, MFA implementation, and network security. These areas are fundamental to securing your AWS environments, yet they are consistently among the most misconfigured parts of AWS accounts. Each one is a single point of failure that can cascade into a much larger security incident.

1. Public S3 Buckets with Sensitive Data

The Mistake:

Someone configures a bucket with public read or public read/write access. Then someone accidentally stores confidential information in that bucket: customer records, financial data, usernames and passwords, or proprietary documents.

Real Impact:

This mistake is listed first because there are more news stories related to this misconfiguration than can be listed here. Publicly open AWS S3 buckets have led to breaches affecting millions of customers, resulting in hundreds of millions in fines and settlements. Companies have exposed everything from employee records to medical data through similar oversights.

The Fix:

To fix this security issue, enable Amazon S3 Block Public Access at the account level so that no bucket in the account can be made public, even by accident.

Enable S3 Block Public Access at the account level:

# Enable Block Public Access for all S3 buckets in the account
aws s3control put-public-access-block \
    --account-id $(aws sts get-caller-identity --query Account --output text) \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

For existing buckets, immediately remove public access:

# Remove public access from a specific bucket
aws s3api put-public-access-block \
    --bucket your-bucket-name \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

You can also use bucket policies and Access Control Lists (ACLs) to restrict public access. Here’s an example of a bucket policy that denies all access unless it comes through a specific VPC endpoint:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VPCOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::your-bucket-name",
                "arn:aws:s3:::your-bucket-name/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-1234567890abcdef0"
                }
            }
        }
    ]
}

If you know that an account should NEVER have publicly open S3 buckets, you can attach a Service Control Policy (SCP) to the account or its organizational unit in AWS Organizations. A common pattern is to deny any changes to the S3 Block Public Access settings once they are enabled, so no one in the account can switch them off:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDisablingS3BlockPublicAccess",
            "Effect": "Deny",
            "Action": [
                "s3:PutAccountPublicAccessBlock",
                "s3:PutBucketPublicAccessBlock"
            ],
            "Resource": "*"
        }
    ]
}

Prevention Strategy:

Audit all existing buckets immediately using AWS CLI commands:

# List all buckets and check their public access settings
aws s3api list-buckets --query 'Buckets[].Name' --output text | \
xargs -I {} aws s3api get-public-access-block --bucket {}

# Find buckets with public read access
aws s3api list-buckets --query 'Buckets[].Name' --output text | \
xargs -I {} sh -c 'echo "Checking bucket: {}"; aws s3api get-bucket-acl --bucket {} --query "Grants[?Grantee.URI==\`http://acs.amazonaws.com/groups/global/AllUsers\`]"'

Set up CloudTrail logging to track bucket access patterns and configure alerts for any public access changes. Use IAM Access Analyzer for S3 to detect buckets that are publicly accessible or shared outside your account.
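If you don’t already have an analyzer in the account, creating one takes a single CLI call (the analyzer name below is just an example):

# Create an account-level IAM Access Analyzer
aws accessanalyzer create-analyzer \
    --analyzer-name s3-public-access-analyzer \
    --type ACCOUNT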

2. Overprivileged IAM Permissions

The Mistake:

Many times, when users are first set up in AWS, someone attaches the AdministratorAccess managed policy to the user just to get things up and running. Or a new developer is hired and needs access to the AWS account, and the person setting up the account grants them admin rights because it is easier than figuring out which IAM permissions the developer actually needs.

AWS resources are also susceptible to over-permissioning. An example of this could be an application running on an EC2 instance that is throwing a permissions error. Instead of figuring out what permissions the EC2 instance needs, someone attaches an IAM Role with AWS administrator permissions to the instance to get the application up and running. These are just a few ways users or resources can end up over-permissioned.

Real Impact:

If a bad actor compromises an AWS user account with excessive permissions, they can access your entire AWS environment and move laterally within it. A bad actor can compromise credentials in many different ways, ranging from access keys and secrets checked into a public code repository to credentials stored insecurely in a public Amazon S3 bucket (remember mistake one above). Major breaches have involved compromised credentials with far more access than necessary, exposing millions of user records. These breaches cost companies millions of dollars in legal fees, lawsuits, fines, and lost revenue.

The Fix:

You should implement least privilege from day one; this is not optional. Add users to groups with the correct IAM policy attached. Here’s how to create a developer group with appropriate permissions:

# Create a developer group
aws iam create-group --group-name Developers

# Create a least-privilege developer policy
aws iam create-policy \
    --policy-name DeveloperPolicy \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:Describe*",
                    "s3:GetObject",
                    "s3:PutObject",
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents",
                    "cloudwatch:GetMetricStatistics",
                    "cloudwatch:ListMetrics"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": "arn:aws:s3:::dev-*"
            }
        ]
    }'

# Attach policy to the group
aws iam attach-group-policy \
    --group-name Developers \
    --policy-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/DeveloperPolicy

For EC2 instances, create specific roles instead of using broad permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::my-app-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter",
                "ssm:GetParameters"
            ],
            "Resource": "arn:aws:ssm:us-east-1:*:parameter/myapp/*"
        }
    ]
}
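To put a policy like this to work, attach it to a role that EC2 can assume and associate that role with the instance through an instance profile. Here is a minimal sketch, assuming hypothetical role, profile, and instance names, and that the policy above is saved as my-app-policy.json:

# Create a role that EC2 instances can assume
aws iam create-role \
    --role-name MyAppInstanceRole \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }'

# Attach the scoped-down policy shown above
aws iam put-role-policy \
    --role-name MyAppInstanceRole \
    --policy-name MyAppPolicy \
    --policy-document file://my-app-policy.json

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name MyAppInstanceProfile
aws iam add-role-to-instance-profile \
    --instance-profile-name MyAppInstanceProfile \
    --role-name MyAppInstanceRole
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=MyAppInstanceProfile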

Using groups also keeps you from accumulating ad-hoc permissions across different users. You can also use IAM Access Analyzer to review findings about overly broad or unused access:

# Generate access analyzer findings
aws accessanalyzer list-findings \
    --analyzer-arn arn:aws:access-analyzer:us-east-1:$(aws sts get-caller-identity --query Account --output text):analyzer/ConsoleAnalyzer-$(aws sts get-caller-identity --query Account --output text)

Prevention Strategy:

You should perform regular audits of IAM policies using services such as AWS Access Advisor:

# Get access advisor report for a user
aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):user/username

# Check the report status and retrieve results
aws iam get-service-last-accessed-details --job-id <job-id-from-previous-command>

Then, on a quarterly basis, remove unused permissions from policies. You can also use the IAM policy simulator to test access before deploying changes:

# Simulate policy evaluation
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):user/testuser \
    --action-names s3:GetObject \
    --resource-arns arn:aws:s3:::test-bucket/test-key

It is also recommended that you implement a just-in-time access policy for administrative account privileges.
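One lightweight way to approximate just-in-time access is to keep day-to-day users unprivileged and have them assume a short-lived, MFA-protected admin role only when they need it. A minimal sketch, assuming a hypothetical JITAdminRole whose trust policy requires MFA:

# Request temporary admin credentials for one hour, proving MFA at assume time
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/JITAdminRole \
    --role-session-name jit-admin-session \
    --duration-seconds 3600 \
    --serial-number arn:aws:iam::123456789012:mfa/MyMFADevice \
    --token-code 123456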

3. Exposed RDS Databases

The Mistake:

Often, when deploying an RDS database instance, someone makes the database publicly accessible or places it in public subnets without locking down the security group. This frequently happens because someone can’t figure out why their application won’t connect to the RDS instance, so they “fix” it by enabling public access instead of correcting the underlying network configuration, leaving this attack vector open to bad actors.

Another common reason public access gets enabled on RDS instances is testing and development, with a plan to turn it off later that never happens before the configuration reaches production. Leaving this vulnerability open lets bad actors launch brute-force attacks directly against the RDS instance.

Real Impact:

Publicly exposed databases have led to numerous major breaches, from stolen voter records to entire customer databases being exfiltrated. Depending on the type of data being stored, direct database access is the holy grail for attackers, especially when the database holds customer, healthcare, or payment information. Much of that data is then sold to the highest bidder to fuel further attacks such as financial fraud and social engineering. Database breaches result in major reputational damage, costly lawsuits, and regulatory fines that vary by industry and data type.

The Fix:

To fix publicly exposed RDS instances, you should place them in private DB subnet groups within your VPC. Here’s how to create a proper DB subnet group:

# Create a DB subnet group with private subnets
aws rds create-db-subnet-group \
    --db-subnet-group-name private-db-subnet-group \
    --db-subnet-group-description "Private subnet group for RDS" \
    --subnet-ids subnet-12345678 subnet-87654321 \
    --tags Key=Name,Value=PrivateDBSubnetGroup

When creating or modifying an RDS instance, ensure it’s not publicly accessible:

# Create RDS instance in private subnet with proper security
aws rds create-db-instance \
    --db-instance-identifier mydb-private \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password mySecurePassword123! \
    --allocated-storage 20 \
    --db-subnet-group-name private-db-subnet-group \
    --vpc-security-group-ids sg-restrictive123 \
    --no-publicly-accessible \
    --storage-encrypted \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012

Check for publicly accessible RDS instances:

# List all RDS instances and check public accessibility
aws rds describe-db-instances \
    --query 'DBInstances[?PubliclyAccessible==`true`].[DBInstanceIdentifier,PubliclyAccessible,DBSubnetGroup.VpcId]' \
    --output table

Create restrictive security groups for your databases:

# Create a security group for the private MySQL RDS instance
aws ec2 create-security-group \
    --group-name rds-mysql-private \
    --description "Security group for private MySQL RDS instance" \
    --vpc-id vpc-12345678

# Allow MySQL (port 3306) only from the application tier security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-restrictive123 \
    --protocol tcp \
    --port 3306 \
    --source-group sg-webapp123

Using private DB subnet groups within RDS ensures your database instance is not exposed directly to the internet. Pair that with security group rules that restrict access to only the AWS resources that need the data, and make sure users who shouldn’t have access to the RDS instance or its credentials don’t. Lastly, encrypt all databases at rest using AWS KMS and enforce encryption in transit.
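For MySQL-based engines, one way to enforce encryption in transit is the require_secure_transport parameter in a custom parameter group. A minimal sketch, assuming a hypothetical parameter group name and the mydb-private instance from above:

# Create a custom parameter group that requires TLS connections
aws rds create-db-parameter-group \
    --db-parameter-group-name mysql-require-tls \
    --db-parameter-group-family mysql8.0 \
    --description "Require TLS for all connections"

aws rds modify-db-parameter-group \
    --db-parameter-group-name mysql-require-tls \
    --parameters "ParameterName=require_secure_transport,ParameterValue=1,ApplyMethod=immediate"

# Apply the parameter group to the instance
aws rds modify-db-instance \
    --db-instance-identifier mydb-private \
    --db-parameter-group-name mysql-require-tls \
    --apply-immediately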

It is also recommended that monitoring and alerting for your RDS database instance be enabled:

# Enable Enhanced Monitoring for RDS
aws rds modify-db-instance \
    --db-instance-identifier mydb-private \
    --monitoring-interval 60 \
    --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role

Prevention Strategy:

When configuring preventative security measures for RDS, one key thing to do is ensure you have CloudTrail enabled:

# Create CloudTrail for RDS API monitoring
aws cloudtrail create-trail \
    --name rds-api-trail \
    --s3-bucket-name my-cloudtrail-bucket \
    --include-global-service-events \
    --is-multi-region-trail

As mentioned earlier, monitoring and alerting should be enabled. You should particularly enable RDS Enhanced Monitoring, which provides real-time system metrics for your RDS database instances. Set up CloudWatch alarms for suspicious activity:

# Create CloudWatch alarm for an unusual spike in database connections
aws cloudwatch put-metric-alarm \
    --alarm-name "RDS-HighConnectionCount" \
    --alarm-description "Alert on an unusually high number of RDS connections" \
    --metric-name DatabaseConnections \
    --namespace AWS/RDS \
    --statistic Maximum \
    --period 300 \
    --threshold 10 \
    --comparison-operator GreaterThanThreshold \
    --dimensions Name=DBInstanceIdentifier,Value=mydb-private \
    --evaluation-periods 2

Lastly, be sure that VPC Flow logs are enabled:

# Enable VPC Flow Logs for database traffic monitoring
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-12345678 \
    --traffic-type ALL \
    --log-destination-type cloud-watch-logs \
    --log-group-name VPCFlowLogs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flowlogsRole


4. No Multi-Factor Authentication Implementation

The Mistake:

Not enforcing MFA across all of your AWS accounts is a critical mistake that leaves you open to brute-force username and password attacks. Brute-force attacks are among the most basic techniques bad actors use and one of the easiest to automate.

Many times, someone new is hired and the goal is to get them set up as quickly as possible. Usually that means creating a username and password, while setting up MFA gets overlooked. This is how weakly secured accounts end up in your AWS environment.

Not having a process for implementing MFA for all user accounts, especially privileged users with elevated permissions, is the easiest way for a bad actor to gain access to your AWS environments. Attackers only need to get it right once before they have access to your AWS environment and begin moving laterally throughout your AWS accounts.

Real Impact:

When it comes to security breaches, a lack of MFA has been directly responsible for some of the most damaging cloud security breaches in recent history. Missing MFA has led to cases where bad actors used compromised AWS accounts for cryptocurrency mining operations, racking up hundreds of thousands of dollars in compute costs within days. If attackers successfully compromise user credentials through phishing, credential stuffing, or social engineering, the absence of MFA means they have immediate and complete access to AWS environments. Beyond the immediate financial impact, attackers with access to AWS environments without MFA protection have been able to exfiltrate massive amounts of sensitive data, including customer personal information, financial records, and proprietary business data.

Companies have faced regulatory fines in the millions of dollars specifically because they failed to implement basic security controls like MFA. GDPR fines, HIPAA penalties, and industry-specific regulations consider MFA implementation a fundamental security requirement, and its absence can be seen as negligence in court proceedings. Major breaches have demonstrated how quickly attackers can escalate access once they gain initial entry, moving laterally through environments and accessing critical resources that multiple authentication layers should have protected.

The Fix:

Your first step to fixing MFA implementation across your AWS environment is to enforce MFA for all IAM users. Check which users don’t have MFA enabled:

# List users without MFA devices
aws iam list-users --query 'Users[?not_null(UserName)].[UserName]' --output text | \
while read user; do
    mfa=$(aws iam list-mfa-devices --user-name $user --query 'MFADevices' --output text)
    if [ -z "$mfa" ]; then
        echo "User $user has no MFA device"
    fi
done

Create a Service Control Policy (SCP) that denies access without MFA:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptListedIfNoMFA",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:DeleteVirtualMFADevice",
                "iam:ListVirtualMFADevices",
                "iam:EnableMFADevice",
                "iam:ResyncMFADevice",
                "iam:ListAccountAliases",
                "iam:ListUsers",
                "iam:ListSSHPublicKeys",
                "iam:ListAccessKeys",
                "iam:ListServiceSpecificCredentials",
                "iam:GetAccountSummary",
                "sts:GetSessionToken"
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}

You should use AWS Organizations Service Control Policies (SCPs) to create account-level policies that deny access to AWS services unless MFA is present. Here’s how to attach this policy:

# Create the SCP
aws organizations create-policy \
    --name "RequireMFA" \
    --description "Requires MFA for all actions" \
    --type SERVICE_CONTROL_POLICY \
    --content file://require-mfa-policy.json

# Attach to organizational unit
aws organizations attach-policy \
    --policy-id p-xxxxxxxxx \
    --target-id ou-xxxxxxxxx

For setting up virtual MFA devices:

# Create virtual MFA device for a user
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name MyMFADevice \
    --outfile QRCode.png \
    --bootstrap-method QRCodePNG

# Enable MFA device (after user scans QR code and provides two consecutive codes)
aws iam enable-mfa-device \
    --user-name MyUser \
    --serial-number arn:aws:iam::123456789012:mfa/MyMFADevice \
    --authentication-code1 123456 \
    --authentication-code2 789012

For the most sensitive accounts, such as root users and administrators with broad permissions, you should implement hardware-based MFA devices like YubiKeys rather than relying on SMS-based authentication. You should also implement conditional access policies that require MFA for console access and API calls made from unfamiliar locations or devices.
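You can also enforce “recent MFA” at the policy level. The sketch below is illustrative rather than prescriptive: it denies IAM changes unless the caller authenticated with MFA within the last hour (the IfExists variant also denies callers who never used MFA at all):

# Hypothetical policy: deny IAM changes unless MFA was used within the last hour
aws iam create-policy \
    --policy-name RequireRecentMFAForIAM \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyIAMChangesWithoutRecentMFA",
                "Effect": "Deny",
                "Action": "iam:*",
                "Resource": "*",
                "Condition": {
                    "NumericGreaterThanIfExists": {
                        "aws:MultiFactorAuthAge": "3600"
                    }
                }
            }
        ]
    }'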

Prevention Strategy:

Here’s how you can prevent MFA security gaps from happening again. First, make sure CloudTrail is delivering events to CloudWatch Logs so you can alarm on console logins that happen without MFA:

# Create CloudWatch alarm for console logins without MFA
aws cloudwatch put-metric-alarm \
    --alarm-name "ConsoleLoginWithoutMFA" \
    --alarm-description "Alert on console login without MFA" \
    --metric-name ConsoleLoginWithoutMFA \
    --namespace CloudTrailMetrics \
    --statistic Sum \
    --period 300 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --evaluation-periods 1
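That alarm only fires if something publishes the underlying metric. The standard pattern is a CloudWatch Logs metric filter on the CloudTrail log group; here is a minimal sketch, assuming the trail delivers to a hypothetical log group named CloudTrail/DefaultLogGroup:

# Publish a metric whenever a console login happens without MFA
aws logs put-metric-filter \
    --log-group-name CloudTrail/DefaultLogGroup \
    --filter-name ConsoleLoginWithoutMFA \
    --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }' \
    --metric-transformations metricName=ConsoleLoginWithoutMFA,metricNamespace=CloudTrailMetrics,metricValue=1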

Set up automated scripts or Config rules that constantly check who’s got MFA turned on:

# AWS Config rule to check for MFA
aws configservice put-config-rule \
    --config-rule '{
        "ConfigRuleName": "mfa-enabled-for-iam-console-access",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "MFA_ENABLED_FOR_IAM_CONSOLE_ACCESS"
        }
    }'

Generate MFA compliance reports:

# Generate MFA compliance report
aws iam generate-credential-report
sleep 10
aws iam get-credential-report --query 'Content' --output text | base64 -d > mfa-report.csv

# Parse the CSV to identify users without MFA (mfa_active is the 8th column)
awk -F',' 'NR>1 && $8=="false" {print "User " $1 " does not have MFA enabled"}' mfa-report.csv

Don’t just set it and forget it, though. You should have regular checks related to these alerts. Build a dashboard that shows MFA coverage across all your accounts at a glance. When new employees are hired, MFA is non-negotiable from day one.

5. Inadequate Security Group Management

The Mistake:

Creating overly permissive security group rules is one of those mistakes that happens constantly. You’ve probably seen it yourself: someone can’t get their application to connect, so they open up SSH (port 22), RDP (3389), or database ports (3306, 5432, 1433) to the entire internet (0.0.0.0/0) to “get it working.” The problem is, these quick fixes during troubleshooting often become permanent security holes when no one goes back to properly configure the network settings.

Here’s another way this happens: your team sets up security groups for development or testing with wide-open access rules because, well, it’s just dev, right? Then someone copies those same security groups straight into production without thinking twice about it. Before you know it, your production systems are running with development-level permissions, exposed to anyone on the internet who knows where to look.

What makes this worse is that security groups are stateful (return traffic for an allowed connection is automatically permitted), and many administrators don’t fully understand what that means for their configurations. They’ll create inbound rules thinking they’re being specific, but those rules allow far more access than intended. Or they set up rules for a particular purpose, that purpose goes away, and the rules stick around forever because no one remembers why they were created in the first place.

Real Impact:

Poor security group management has contributed to some of the most significant cloud breaches. Here’s what happens: attackers get in the door through compromised credentials, a vulnerable app, or social engineering. Once they’re in, those wide-open security groups let them move anywhere they want. Think about it this way: an attacker compromises your web server, and because your security groups allow unrestricted database access, they can immediately start hitting every database in your environment. What should’ve been a minor incident with one web app turns into a full-blown data breach affecting all your customer records.

The real-world impact goes beyond the breach itself. Compliance frameworks in healthcare (HIPAA) and finance (PCI DSS) specifically require proper network segmentation and access controls. When your security groups allow unrestricted access, you violate those requirements, and the fines can be substantial. We’re talking millions of dollars in regulatory penalties on top of whatever damage the breach caused.

Then there’s the cryptocurrency mining nightmare that keeps happening. Attackers constantly scan for exposed services, and when they find them, thanks to poor security group configurations, they turn your infrastructure into their crypto mining operation. Companies have woken up to AWS bills in the hundreds of thousands of dollars because attackers used their compute resources to mine Bitcoin. Major breaches from recent years have shown us how network misconfigurations can turn small vulnerabilities into disasters, and we’re seeing the same patterns play out in AWS environments where security groups aren’t properly locked down.

The Fix:

Now let’s focus on fixing your security groups. The first thing you need to do is audit every single security group across all your AWS accounts:

# List all security groups with overly permissive rules (0.0.0.0/0)
aws ec2 describe-security-groups \
    --query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].[GroupId,GroupName,IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]]' \
    --output table

# Check for specific dangerous ports open to the internet
aws ec2 describe-security-groups \
    --query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`] && (FromPort==`22` || FromPort==`3389` || FromPort==`3306` || FromPort==`5432`)]].[GroupId,GroupName,IpPermissions[0].FromPort]' \
    --output table

Replace overly permissive rules with specific IP ranges or security group references:

# Remove a rule allowing SSH from everywhere
aws ec2 revoke-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0

# Add a rule allowing SSH only from your corporate VPN
aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.0/24

Create proper layered security groups for your application tiers:

# Web tier security group - allows HTTP/HTTPS from internet
aws ec2 create-security-group \
    --group-name web-tier-sg \
    --description "Security group for web servers" \
    --vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
    --group-id sg-web123 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
    --group-id sg-web123 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0

# App tier security group - only allows access from web tier
aws ec2 create-security-group \
    --group-name app-tier-sg \
    --description "Security group for application servers" \
    --vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
    --group-id sg-app123 \
    --protocol tcp \
    --port 8080 \
    --source-group sg-web123

# Database tier security group - only allows access from app tier
aws ec2 create-security-group \
    --group-name db-tier-sg \
    --description "Security group for database servers" \
    --vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
    --group-id sg-db123 \
    --protocol tcp \
    --port 3306 \
    --source-group sg-app123

Here’s an example of a well-structured security group using CloudFormation:

WebTierSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupName: web-tier-secure
    GroupDescription: Secure web tier security group
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
        Description: HTTP from internet
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
        Description: HTTPS from internet
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 10.0.0.0/8
        Description: SSH from corporate network only
    Tags:
      - Key: Name
        Value: web-tier-secure
      - Key: Purpose
        Value: Web servers with restricted access

Your security groups should match how your application is actually built. Think of it like layers: web servers, application servers, and database servers should each have their own security groups.

Prevention Strategy:

You need to know the moment someone changes your security groups. Set up automated monitoring using AWS Config:

# Create Config rule for overly permissive security groups
aws configservice put-config-rule \
    --config-rule '{
        "ConfigRuleName": "security-group-ssh-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED"
        }
    }'

# Create another rule to flag security groups open to the internet on common ports
aws configservice put-config-rule \
    --config-rule '{
        "ConfigRuleName": "security-group-unrestricted-common-ports",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "RESTRICTED_INCOMING_TRAFFIC"
        }
    }'

Set up EventBridge (formerly CloudWatch Events) rules to alert on security group changes:

{
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": [
            "AuthorizeSecurityGroupIngress",
            "AuthorizeSecurityGroupEgress",
            "RevokeSecurityGroupIngress",
            "RevokeSecurityGroupEgress",
            "CreateSecurityGroup",
            "DeleteSecurityGroup"
        ]
    }
}

Create Lambda functions for automated remediation:

import boto3
import json

# Ports we never want exposed to the entire internet
SENSITIVE_PORTS = [22, 3389, 3306, 5432, 1433]

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # The group ID is in requestParameters for Authorize*/Revoke* calls
    # and in responseElements for CreateSecurityGroup calls
    detail = event['detail']
    security_group_id = (detail.get('requestParameters', {}).get('groupId')
                         or detail.get('responseElements', {}).get('groupId'))
    if not security_group_id:
        return {'statusCode': 400, 'body': json.dumps('No security group ID in event')}

    # Look up the group's current rules
    response = ec2.describe_security_groups(GroupIds=[security_group_id])

    for sg in response['SecurityGroups']:
        for rule in sg['IpPermissions']:
            # Rules with IpProtocol "-1" have no FromPort, so use .get()
            from_port = rule.get('FromPort')
            for ip_range in rule.get('IpRanges', []):
                if ip_range['CidrIp'] == '0.0.0.0/0' and from_port in SENSITIVE_PORTS:
                    # Revoke only the offending 0.0.0.0/0 entry, not the entire rule
                    ec2.revoke_security_group_ingress(
                        GroupId=security_group_id,
                        IpPermissions=[{
                            'IpProtocol': rule['IpProtocol'],
                            'FromPort': rule['FromPort'],
                            'ToPort': rule['ToPort'],
                            'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
                        }]
                    )

                    # Send alert (print output lands in CloudWatch Logs)
                    print(f"Automatically removed dangerous rule from {security_group_id}")

    return {
        'statusCode': 200,
        'body': json.dumps('Security group remediation completed')
    }

Generate regular security group audit reports:

#!/bin/bash
# Security Group Audit Script

echo "=== Security Group Audit Report ===" > sg-audit-report.txt
echo "Generated on: $(date)" >> sg-audit-report.txt
echo "" >> sg-audit-report.txt

echo "Checking for overly permissive security groups..." >> sg-audit-report.txt

# Find security groups with 0.0.0.0/0 access
aws ec2 describe-security-groups \
    --query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].[GroupId,GroupName,IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]].FromPort]' \
    --output text >> sg-audit-report.txt

echo "" >> sg-audit-report.txt
echo "Checking for unused security groups..." >> sg-audit-report.txt

# Find unused security groups
for sg in $(aws ec2 describe-security-groups --query 'SecurityGroups[].GroupId' --output text); do
    instances=$(aws ec2 describe-instances --filters "Name=instance.group-id,Values=$sg" --query 'Reservations[].Instances[].InstanceId' --output text)
    eni=$(aws ec2 describe-network-interfaces --filters "Name=group-id,Values=$sg" --query 'NetworkInterfaces[].NetworkInterfaceId' --output text)
    
    if [[ -z "$instances" && -z "$eni" ]]; then
        echo "Unused security group: $sg" >> sg-audit-report.txt
    fi
done

echo "Audit complete. Check sg-audit-report.txt for results."

Here’s where Infrastructure as Code becomes your best friend. Use CloudFormation, Terraform, or AWS CDK to manage your security groups. Make security group reviews part of your code review process. When changes go through pull requests, everyone can see exactly what’s being modified before it hits production.

Building a Security-First Culture in AWS

These five security mistakes share a common thread: they often result from rushed deployments, inadequate training, or treating security as an afterthought rather than a foundational requirement. The organizations that successfully avoid these costly mistakes are those that build security into their cloud adoption strategy from day one and maintain a culture of security awareness throughout their teams.

Immediate Actions You Can Take Today:

Enable AWS CloudTrail across all regions and accounts to maintain comprehensive audit logs of all API calls and user activities:

# Enable CloudTrail for all regions
aws cloudtrail create-trail \
    --name organization-wide-trail \
    --s3-bucket-name my-cloudtrail-logs-bucket \
    --include-global-service-events \
    --is-multi-region-trail \
    --enable-log-file-validation

# The trail does not record events until logging is started
aws cloudtrail start-logging --name organization-wide-trail

Implement AWS Config to continuously monitor resource configurations:

# Set up Config with common security rules
aws configservice put-configuration-recorder \
    --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/aws-config-role \
    --recording-group allSupported=true,includeGlobalResourceTypes=true

aws configservice put-delivery-channel \
    --delivery-channel name=default,s3BucketName=my-config-bucket

aws configservice start-configuration-recorder \
    --configuration-recorder-name default

Deploy AWS Security Hub as a centralized dashboard:

# Enable Security Hub
aws securityhub enable-security-hub \
    --enable-default-standards

# Enable GuardDuty integration
aws securityhub enable-import-findings-for-product \
    --product-arn arn:aws:securityhub:us-east-1::product/aws/guardduty

Use AWS GuardDuty for intelligent threat detection:

# Enable GuardDuty
aws guardduty create-detector \
    --enable \
    --data-sources 'S3Logs={Enable=true},Kubernetes={AuditLogs={Enable=true}},MalwareProtection={ScanEc2InstanceWithFindings={EbsVolumes={Enable=true}}}'

Implement AWS Organizations with Service Control Policies (SCPs):

# Create organization and apply security baseline SCPs
aws organizations create-organization --feature-set ALL

# Apply the require-MFA SCP created earlier (substitute your policy and root IDs)
aws organizations attach-policy \
    --policy-id p-xxxxxxxxx \
    --target-id r-xxxx


Establish Regular Security Assessments using automated tools and scripts like those provided throughout this post.

The cost of implementing proper AWS security measures is minimal compared to the potential losses from a breach. Companies that invest in security from the beginning save millions in potential damages while building customer trust that becomes a competitive advantage in today’s security-conscious market.

Security isn’t just about compliance or avoiding breaches—it’s about building resilient infrastructure that supports business growth without introducing unnecessary risk. In today’s threat landscape, where attacks are becoming more sophisticated and costly, AWS security isn’t optional—it’s essential for business survival and growth.

Remember that security is not a one-time implementation but an ongoing process that requires continuous attention, regular review, and adaptation to new threats. The organizations that treat security as a core business function, rather than a technical checkbox, are the ones that will thrive in the cloud while avoiding the costly mistakes that have affected so many others.

About the Author

Sheldon Sides

Sheldon is Founder and Chief Solutions Architect at Avinteli. Before founding Avinteli, he led Global Security and Compliance at Amazon Web Services (AWS) for Public Sector Partners and Global ISV Partners. Prior to his leadership role, he served as a Senior Security Solutions Architect at AWS, where he conducted comprehensive security assessments and guided Fortune 500 companies through complex, enterprise-scale AWS cloud implementations. His deep cloud security expertise and hands-on assessment experience help organizations identify critical vulnerabilities, close security gaps, accelerate their secure cloud adoption, and design and develop cloud-native solutions.

