About a month ago, I passed the exam and secured the AWS Cloud Practitioner certification. If you are a platform/backend product leader/manager/owner AND your products/services are built using AWS products/services, I strongly encourage you to prepare for and take this certification. It will not only demonstrate that you understand the cloud value proposition and best practices, but also give you a better understanding of AWS's different products and services. As the world adopts the cloud-native mindset, don't get left behind!

Successful software product managers demonstrate an understanding of cloud-native intent and the ability to influence/challenge it.

Here are my quick tips for those of you interested in pursuing this certification.

  • To pass, you need to score at least 700/1000 (roughly a 70% grade). The exam has 65 questions, you get 90 minutes, and it costs $100.
  • If you have been working with/around cloud technologies, your prep time should be about 5-8 hours/week for 4 weeks, plus additional time for taking practice exams.
  • If you are relatively new to cloud technologies, your prep time should be about 5-8 hours/week for 8-10 weeks, plus additional time for taking practice exams.
  • It took me about 4 weeks to prep. I took 3 practice exams and scored 72%, 80%, and 95% successively before I decided I was ready for the real test. I wanted to pass on my first attempt, so scoring >90% on a practice test was important to me.
  • Find a buddy at work or friend outside of work who is equally eager to take on this certification. I formed a study group with another colleague of mine from work; this really helped!
  • Disclaimer: My advice is by no means perfect or gospel, so please consider other advice, guidance, resources as you go about preparing for this certification.

My study material had 3 areas:

  • Read the white papers (listed below) and get a real understanding of all of the topics/concepts.
  • Take the CloudGuru course (optional in my opinion though it does give you some hands-on experience working with the AWS console). However, if you can afford it or expense it, you should do it. My experience with CloudGuru was good and my employer picked up the expense. There may be better and/or more economical options out there, so be sure to do some research.
  • Understand all of the concepts that I have listed below. If my description is unclear, search them and read alternate descriptions.

All the Best! And do leave feedback/comment if you find my study tips helpful.

Read the following white papers:

  • Overview of AWS
  • Architecting for the Cloud

Understand all of the concepts below:

  • CloudWatch - It focuses on the activity of AWS services and resources, reporting on their health and performance. CloudWatch provides events and alarms and could be set up to trigger when an EC2 instance is terminated, but it will not tell you who took the action or when (that is CloudTrail's job). You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the error rate exceeds a threshold you specify. A CloudWatch alarm can be set up to monitor CPU utilization and trigger further action, such as an Auto Scaling group adding another EC2 instance and/or SNS notifying team members of the occurrence (see the sketch after this list).
  • AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.
  • CloudTrail - A log of all actions that have taken place inside your AWS environment. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
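
To make the CloudWatch alarm + SNS pattern above concrete, here is a minimal boto3 sketch; the instance ID, alarm name, and SNS topic ARN are placeholders I made up, not values tied to any real account.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU of one instance stays above 80% for two 5-minute periods,
    # then notify an SNS topic (which could page the team or kick off scaling).
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-demo",                # hypothetical alarm name
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder topic
    )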

SQS, SES, SNS -

  • SNS - Messaging/notification service. SNS can be used with CloudWatch to notify team members when a CloudWatch Event or Alarm is triggered, but SNS is not monitoring resource utilization.
  • SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
  • SES - Amazon Simple Email Service (SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.
  • AWS Organizations - Helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, AWS Organizations helps you centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. You can automate account creation, create groups of accounts to reflect business needs, and apply policies to those groups for governance.
  • Resource Groups - You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. This guide shows you how to create and manage resource groups in AWS Resource Groups.
  • Tags - A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses them to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags: AWS-generated tags and user-defined tags. AWS defines, creates, and applies AWS-generated tags for you; you define, create, and apply user-defined tags. You must activate both types separately before they appear in Cost Explorer or on a cost allocation report.
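
Here is a quick boto3 sketch of user-defined tags; the keys, values, and instance ID are made up for illustration, and activating them as cost allocation tags is still a separate step in the Billing console.

    import boto3

    ec2 = boto3.client("ec2")

    # Attach user-defined tags to an EC2 instance so costs can later be broken
    # out by CostCenter/Environment on the cost allocation report.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],        # placeholder instance ID
        Tags=[
            {"Key": "CostCenter", "Value": "marketing"},
            {"Key": "Environment", "Value": "dev"},
        ],
    )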

Macie, Shield, Inspector, GuardDuty -

  • Shield - Shield focuses solely on Distributed Denial of Service (DDoS) attacks. Comes in 2 flavors: Standard & Advanced
  • GuardDuty - threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads
  • Inspector - Identify security vulnerabilities as well as deviations from security best practices. An automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by levels of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.
  • Macie - A fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data (PII) in AWS
  • AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. Trusted Advisor checks help optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
  • AWS Lambda - Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Use this to deploy applications entirely on a serverless platform.
  • CloudFormation - AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. Use this if you want to experiment with the Infrastructure As Code model. AWS CloudFormation simplifies provisioning and management on AWS. You can create templates for the service or application architectures you want and have AWS CloudFormation use those templates for quick and reliable provisioning of the services or applications (called “stacks”). You can also easily update or replicate the stacks as needed. Although CloudFormation can certainly deploy systems in the AWS cloud, it is not a tool for absolute beginners.

AWS Shared Responsibility Model -

  • AWS - Secures the cloud infrastructure (security OF the cloud):
  • Disk disposal
  • Physically securing compute resources
  • Infrastructure patching
  • AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
  • Customer - Secures the workloads you deploy (security IN the cloud):
  • Customer data
  • Platform and app IAM
  • OS, Network, Firewall configuration
  • OS/Application patching
  • Encryption & authentication

IAM -

  • IAM - A service that enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources.
  • Resources - Resources are the user, group, role, policy, and identity provider objects that are stored in IAM. As with other AWS services, you can add, edit, and remove resources from IAM.
  • Identities - The IAM resource objects that are used to identify and group principals. You can attach a policy to an IAM identity. These include users, groups, and roles.
  • Entities - The IAM resource objects that AWS uses for authentication. These include IAM users, federated users, and assumed IAM roles.
  • Policy Simulator - You can test and troubleshoot identity-based policies, IAM permissions boundaries, Organizations service control policies, and resource-based policies.
  • Principal - A person or application that uses the AWS account root user, an IAM user, or an IAM role to sign in and make requests to AWS.
  • Role - Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.
  • Principle of Least Privilege - When you create IAM policies, follow the standard security advice of granting the least privilege, or granting only the permissions required to perform a task. Determine what users (and roles) need to do, and then craft policies that allow them to perform only those tasks.
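
As a rough illustration of least privilege, the boto3 sketch below attaches an inline policy that grants read-only access to a single bucket; the user name, policy name, and bucket are hypothetical.

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant only the two S3 actions this user actually needs, on one bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }],
    }

    iam.put_user_policy(
        UserName="report-reader",                 # hypothetical user
        PolicyName="ReadReportsOnly",
        PolicyDocument=json.dumps(policy),
    )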

AWS CLI Access - When working with AWS from the CLI, you need to provide an access key ID and a secret access key.
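
For illustration, this is how an SDK ends up using those credentials; the key values below are AWS's documentation placeholders, and in practice running "aws configure" writes them to ~/.aws/credentials so you rarely hard-code them.

    import boto3

    # Placeholder credentials, shown only to make the two pieces visible.
    session = boto3.Session(
        aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
        aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        region_name="us-east-1",
    )

    # Quick sanity check of who these credentials belong to.
    print(session.client("sts").get_caller_identity()["Arn"])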

Subnetting - Dividing a network into two or more networks is called subnetting. In AWS, a subnet is either public, meaning resources in it can be reached from the internet, or private, meaning it is hidden from the internet.

Security Group - Acts as a virtual firewall for your instance to control inbound and outbound traffic.
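
A minimal boto3 sketch, assuming an existing VPC (the VPC ID is a placeholder): create a security group and allow only inbound HTTPS.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the group, then open a single inbound port (443) to the world.
    sg = ec2.create_security_group(
        GroupName="web-sg-demo",                  # hypothetical group name
        Description="Allow inbound HTTPS only",
        VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )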

AWS Acceptable Use Policy - The policy states that penetration testing may be performed by customers on their own instances with prior approval from AWS.

S3 - S3 provides high durability storage of objects.

S3 Bucket Policy - These specify what actions are allowed or denied for which principals on the bucket that the bucket policy is attached to (e.g., allow user Alice to PUT but not DELETE objects in the bucket). 
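
A sketch of that Alice example as an actual bucket policy, applied with boto3; the account ID, user, and bucket name are placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Allow PUT but explicitly deny DELETE for one IAM user on this bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            },
            {
                "Effect": "Deny",
                "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
                "Action": "s3:DeleteObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            },
        ],
    }

    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))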

How to Secure AWS Accounts? -

  • Create multi-factor authentication for the root account - This will add an additional layer of security to the root account.
  • Add IP restrictions for all accounts - This would greatly limit who can access your environment and from where.

EC2 Instances -

  • On-Demand - Most expensive. No long-term commitment; you pay for compute capacity by the hour or second and can run instances for any duration.
  • Reserved - Less expensive. Requires a 1-year or 3-year term commitment.
  • Spot - Least expensive. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads. The key qualifier is that the application must tolerate interruptions; if it cannot, On-Demand is the better choice. Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, EMR, ECS, CloudFormation, Data Pipeline, and AWS Batch, you can choose how to launch and maintain your applications running on Spot Instances.
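
If you want to see what requesting Spot capacity looks like in code, here is a minimal boto3 sketch for an interruption-tolerant worker; the AMI ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # Ask for Spot capacity instead of On-Demand via InstanceMarketOptions.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",          # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={"MarketType": "spot"},
    )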

AWS Support Plans - Every plan has billing support and service health checks.

  • Basic
  • Developer - Adds business-hours email access to Cloud Support Associates.
  • Business - Provides a one-hour or less response time for production system failures. Comes with the full set of AWS Trusted Advisor checks.
  • Enterprise - Recommended if you have business- and/or mission-critical workloads in AWS. Includes Infrastructure Event Management, a 15-minute response time if your business-critical system goes down, the full set of AWS Trusted Advisor checks, a Technical Account Manager (TAM), and Management Business Reviews.

AWS Cost Explorer - Lets you visualize, understand, and manage your AWS costs and usage over time. You can analyze your cost and usage data at a high level (e.g., total costs and usage across all accounts in your organization) or for highly specific requests.
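
Cost Explorer also has an API. A minimal boto3 sketch that pulls one month of cost grouped by service could look like this (the dates are just examples):

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    # Unblended cost for one month, broken down by service.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},  # example dates
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])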

Cost and Usage Report - It looks back at charges already accrued rather than projecting future charges. AWS Cost and Usage Reports provide a detailed data set about your AWS billing, delivered to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. You can receive reports that break down your costs by the hour or day, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.

With the AWS Pricing Calculator, you can input the services you will use, and the configuration of those services, and get an estimate of the costs these services will accrue. AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.

AWS Marketplace - It is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.

Elastic Load Balancer (ELB): Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant. It supports 3 kinds of load balancing - 

  • Classic: It provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.
  • Application: It is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
  • Network: Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations.

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups to add an additional layer of security to your VPC.

AWS CodeBuild - It is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers.

AWS CodeStar - It is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project.

AWS CodePipeline - It is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates.

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

AWS Artifact - On-demand access to AWS security and compliance reports. The AWS SOC 2 report is particularly helpful, as it describes the design and operating effectiveness of AWS's security controls.

Instance Metadata - Data about your EC2 instance, such as its public keys, IP address, and instance ID.
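
A small sketch of reading that metadata from inside an instance, using the standard IMDSv2 token flow (it only works on an EC2 instance, since 169.254.169.254 is the link-local metadata endpoint):

    import urllib.request

    # IMDSv2: first fetch a short-lived token, then use it to read metadata.
    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()

    meta_req = urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(meta_req).read().decode())  # e.g. i-0123456789abcdef0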

Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. It is only needed when data must live in more than one Region, for example for compliance or lower latency to distant users.

Multipart Upload allows you to upload a single object as a set of parts. After all parts of your object are uploaded, Amazon S3 then presents the data as a single object. You can use a multipart upload for objects from 5 MB to 5 TB in size. Amazon S3 customers are encouraged to use multipart uploads for objects greater than 100 MB.
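
With boto3, upload_file switches to multipart automatically once the file crosses a configurable threshold; a minimal sketch (file, bucket, and key names are made up):

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Files above 100 MB are split into 16 MB parts and uploaded in parallel.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,    # 100 MB
        multipart_chunksize=16 * 1024 * 1024,     # 16 MB parts
    )
    s3.upload_file("backup.tar.gz", "example-bucket", "backups/backup.tar.gz", Config=config)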

ElastiCache - You can use it to store the results of frequently used queries, which allows quicker retrieval of that data. It lets you seamlessly set up, run, and scale popular open-source-compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. It is a caching service, not a content delivery service (that is CloudFront, below).

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

DAX - Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager.
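
A minimal boto3 sketch of Run Command against one managed instance (the instance ID is a placeholder):

    import boto3

    ssm = boto3.client("ssm")

    # Run a shell command on a managed instance without needing SSH access.
    resp = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],      # placeholder instance ID
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["uptime"]},
    )
    print(resp["Command"]["CommandId"])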

Glacier Deep Archive is the lowest-cost option for long-term archival. Amazon S3 Glacier (retrievals in minutes to hours) and S3 Glacier Deep Archive (retrievals in 12 to 48 hours) are secure, durable, and extremely low-cost Amazon S3 storage classes for data archiving and long-term backup. They are designed for 99.999999999% durability and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect. Amazon Route 53 is fully compliant with IPv6 as well.

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.

AWS Partner Network - APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their journey to the cloud. APN Consulting Partners often implement Technology Partner solutions in addition to the professional services they offer.

5 pillars of the Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

Cloud Best Practices -

  • Design for Failure
  • Decouple your components
  • Implement elasticity
  • Think parallel
  • Keep dynamic data closer to the compute and static data closer to the end-user

Desktop as a Service (DaaS) - Amazon WorkSpaces lets you provision either Windows or Linux desktops in just a few minutes.

Auto Scaling Group can be used to scale out and scale in the instances as the demand dictates. This will save money and avoid having instances sitting idle for long periods of time. AWS Auto Scaling monitors your applications and automatically adjusts your capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. 
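
As a sketch, a target-tracking scaling policy on an existing Auto Scaling group (the group name is hypothetical) keeps average CPU near a chosen value and lets AWS add or remove instances for you:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU around 50%; the group scales out/in automatically.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",           # hypothetical group name
        PolicyName="keep-cpu-at-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )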

Core Design Principle - Deploying across multiple Availability Zones protects against downtime should an Availability Zone be lost.

AWS Management Console is a web application for managing Amazon Web Services.

6 Advantages of Cloud Computing -

  • Trade capital expense for variable expense
  • Benefit from massive economies of scale 
  • Stop guessing capacity 
  • Increase speed and agility 
  • Stop spending money running and maintaining data centers 
  • Go global in minutes 

Request a service limit increase - Use the Limits page in the Amazon EC2 console to request an increase in the limits for resources provided by Amazon EC2 or Amazon VPC on a per-Region basis.