AWS things to Remember

 
Redis:
  • Overview: Redis (Remote Dictionary Server) is an open-source, in-memory data structure store used as a database, cache, and message broker.
  • Features: Supports various data structures such as strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs, and geospatial indexes.
  • Use Cases: Frequently used for caching, session management, real-time analytics, pub/sub messaging, and leaderboard tracking.
  • Advantages: High performance, data persistence options, built-in replication, and support for complex data types.

Memcached:

  • Overview: Memcached is an open-source, high-performance, distributed memory caching system.
  • Features: Designed to speed up dynamic web applications by reducing database load through caching data and objects in memory.
  • Use Cases: Commonly used for caching database query results, session data, and API responses.
  • Advantages: Simple design, easy to deploy, and highly effective for read-heavy workloads.

Comparison Table: Redis vs. Memcached

Feature            | Redis                                         | Memcached
Data structures    | Supports various data structures              | Supports key-value pairs only
Persistence        | Offers data persistence options               | No built-in data persistence
Replication        | Built-in replication and high availability    | No built-in replication
Advanced features  | Supports pub/sub messaging and transactions   | Simpler, focuses on basic caching
Use cases          | Real-time analytics, session management       | Caching database query results, sessions
Performance        | High performance with rich functionality      | High performance for basic key-value caching
Scalability        | Scales well with clustering                   | Scales well, but without built-in clustering features
 

Leaderboard Tracking:

  • Definition: Leaderboard tracking is the process of maintaining and updating a ranking system that displays the top performers or high scores in a competitive environment, such as in gaming or sports applications.
  • Purpose: To provide users with a real-time ranking based on their performance metrics, such as points, scores, or achievements.
  • Features:
    • Real-Time Updates: Continuously updates rankings as new scores are submitted.
    • Sorting and Ranking: Efficiently sorts entries to display the highest scores or top performers.
    • User Interaction: Allows users to see their own ranking and compare it with others.

Use Cases:

  • Gaming: Displaying player rankings based on game scores or achievements.
  • Fitness Apps: Ranking users based on exercise metrics like steps taken or calories burned.
  • Sales Competitions: Tracking top sales representatives based on sales performance.
  • Educational Platforms: Ranking students based on test scores or learning achievements.

Example: Using Redis for Leaderboard Tracking

Redis is particularly well-suited for leaderboard tracking due to its support for sorted sets, which allow for efficient storage and retrieval of data based on scores.

Redis Sorted Sets:

  • ZADD: Adds members with scores to a sorted set.
  • ZRANGE: Retrieves members in a specified range, ordered by score.
  • ZRANK: Returns the rank of a member in the sorted set.
  • ZREVRANGE: Retrieves members in a specified range, ordered by score in descending order.
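
The commands above map directly onto the redis-py client. A minimal sketch, assuming a Redis server is reachable on localhost:6379 and using a hypothetical key name "game:leaderboard":

    import redis

    # Connect to a local Redis server (assumed to be running on the default port).
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # ZADD: add players with their scores to the sorted set.
    r.zadd("game:leaderboard", {"alice": 4200, "bob": 3100, "carol": 5100})

    # ZINCRBY: bump a player's score when a new result comes in.
    r.zincrby("game:leaderboard", 250, "bob")

    # ZREVRANGE: top 3 players, highest score first.
    top3 = r.zrevrange("game:leaderboard", 0, 2, withscores=True)
    print(top3)  # e.g. [('carol', 5100.0), ('alice', 4200.0), ('bob', 3350.0)]

    # ZREVRANK: 0-based rank of a player, counting from the highest score.
    print(r.zrevrank("game:leaderboard", "alice"))  # 1

Because sorted sets keep members ordered by score on every write, the top-N query stays cheap no matter how many scores have been submitted.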

Cloud-Native Definition: Cloud-native refers to applications and services designed specifically to leverage cloud computing architectures and technologies. These applications are built to fully exploit the advantages of the cloud environment, such as scalability, flexibility, high availability, and automated management. Cloud-native applications typically use microservices architecture, containerization, dynamic orchestration, and continuous delivery/deployment practices to achieve rapid development, resilience, and scalability.

OLTP Workloads: Online Transaction Processing (OLTP) workloads involve managing and processing transactional data. These workloads typically handle a large number of short online transactions such as INSERT, UPDATE, DELETE, and SELECT operations. OLTP systems are designed for speed and efficiency, ensuring quick response times and high concurrency for multiple users performing various transactions simultaneously.
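
As an illustration of what a short OLTP-style transaction looks like, here is a minimal sketch using Python's built-in sqlite3 module and a hypothetical accounts table; any relational engine would follow the same pattern of small, fast transactions:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts (id, balance) VALUES (?, ?)", [(1, 500), (2, 300)])

    # A short transaction: transfer 100 from account 1 to account 2.
    # OLTP systems run many of these small, concurrent transactions per second.
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")

    print(conn.execute("SELECT id, balance FROM accounts").fetchall())  # [(1, 400), (2, 400)]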

Web serving

Cloud file storage solutions follow common file-level protocols, file naming conventions, and permissions that developers are familiar with. Therefore, file storage can be integrated into web applications.

Analytics

Many analytics workloads interact with data through a file interface and rely on features such as file lock or writing to portions of a file. Cloud-based file storage supports common file-level protocols and has the ability to scale capacity and performance. Therefore, file storage can be conveniently integrated into analytics workflows.

 
Media and entertainment

Many businesses use a hybrid cloud deployment and need standardized access using file system protocols (NFS or SMB) or concurrent protocol access. Cloud file storage follows existing file system semantics. Therefore, storage of rich media content for processing and collaboration can be integrated for content production, digital supply chains, media streaming, broadcast playout, analytics, and archive.

 
Home directories

Businesses wanting to take advantage of the scalability and cost benefits of the cloud are extending access to home directories for many of their users. Cloud file storage systems adhere to common file-level protocols and standard permissions models. Therefore, customers can lift and shift applications that need this capability to the cloud.

Reserved IPs

For AWS to configure your VPC appropriately, AWS reserves five IP addresses in each subnet. These IP addresses are used for routing, Domain Name System (DNS), and network management.

For example, consider a VPC with the IP range 10.0.0.0/22. The VPC includes 1,024 total IP addresses. This is then divided into four equal-sized subnets, each with a /24 IP range with 256 IP addresses. Out of each of those IP ranges, there are only 251 IP addresses that can be used because AWS reserves five.
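
The arithmetic above can be checked with Python's ipaddress module; this is just a sketch of the subnet math, with the five reserved addresses per subnet taken from the AWS documentation:

    import ipaddress

    vpc = ipaddress.ip_network("10.0.0.0/22")
    print(vpc.num_addresses)  # 1024 total addresses in the VPC

    # Split the /22 into four /24 subnets.
    for subnet in vpc.subnets(new_prefix=24):
        usable = subnet.num_addresses - 5  # AWS reserves 5 addresses per subnet
        print(subnet, subnet.num_addresses, "addresses,", usable, "usable")  # 256 addresses, 251 usable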

AWS reserves five IP addresses in each subnet that cannot be assigned to a resource.

An Elastic IP address is a public IPv4 address that is reachable from the internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet. For example, this allows you to connect to your instance from your local computer.

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in a VPC and the internet. An internet gateway does not impose availability risks or bandwidth constraints on network traffic. 

When you create a VPC, you must specify an IPv4 CIDR block for the VPC. The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses). The CIDR block of a subnet can be the same as the CIDR block for the VPC (for a single subnet in the VPC) or a subset of the CIDR block for the VPC (for multiple subnets). If you create more than one subnet in a VPC, the CIDR blocks of the subnets cannot overlap.
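
A hedged boto3 sketch tying these pieces together: create a VPC with a /16 CIDR block, carve out a non-overlapping /24 subnet, attach an internet gateway, and add a 0.0.0.0/0 route so the subnet becomes public. The region, CIDR ranges, and IDs here are illustrative assumptions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # VPC with a /16 CIDR block (65,536 addresses).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # One /24 subnet carved out of the VPC range; additional subnets must not overlap.
    subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

    # Internet gateway, attached to the VPC.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Route table with a default route to the internet gateway; associating it
    # with the subnet is what makes the subnet "public".
    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)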

 

Since you’ve set the Healthy threshold to 2 and the interval to 10 seconds, it takes at least 20 seconds for your instance to report a healthy status. By default, each load balancer node routes requests only to the healthy targets in its Availability Zone.

 Additional information: With Network Load Balancers, cross-zone load balancing is disabled by default. After you create a Network Load Balancer, you can enable or disable cross-zone load balancing at any time.

 Additional information: Cross-zone load balancing distributes traffic evenly across all targets in the Availability Zones enabled for the load balancer.
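
Since cross-zone load balancing is disabled by default for Network Load Balancers, it has to be switched on as a load balancer attribute. A minimal boto3 sketch; the load balancer ARN is a placeholder, and the attribute key is the one used by the ELBv2 API as I understand it:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Enable cross-zone load balancing on an existing Network Load Balancer.
    # Note: for NLBs, data transfer charges apply when this is enabled.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/abc123",  # placeholder
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )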

Configurations


Configure how traffic is distributed among the load balancer Availability Zones. The client routing policy applies only to clients that resolve the load balancer DNS name using the Route 53 Resolver.

  • Availability Zone affinity: Client DNS queries favor load balancer IP addresses in their own Availability Zone. Queries may resolve to other zones if there are no healthy load balancer IP addresses in their own zone.
  • Partial Availability Zone affinity: 85% of client DNS queries favor load balancer IP addresses in their own Availability Zone; the remaining queries resolve to any zone. Resolving to any zone can also occur if there are no healthy load balancer IP addresses in the client's zone.
  • Any Availability Zone (default): Client DNS queries resolve to healthy load balancer IP addresses across all load balancer Availability Zones.

Cross-zone load balancing:

  • Disable cross-zone load balancing (default): Each load balancer node load balances traffic among healthy targets in its own Availability Zone only.
  • Enable cross-zone load balancing: Each load balancer node load balances traffic among healthy targets in all enabled Availability Zones. Data transfer charges apply.
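
The client routing policy options above also map to a load balancer attribute. A sketch, assuming the attribute key dns_record.client_routing_policy and its documented values (worth verifying against the current ELBv2 API reference):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Prefer load balancer IP addresses in the client's own Availability Zone when
    # the DNS name is resolved through the Route 53 Resolver. Other documented values
    # are "partial_availability_zone_affinity" and "any_availability_zone" (the default).
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/abc123",  # placeholder
        Attributes=[{"Key": "dns_record.client_routing_policy", "Value": "availability_zone_affinity"}],
    )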

How your configurations apply

Which settings take effect depends on the DNS endpoint the client uses to resolve this internet-facing load balancer. The client routing policy (DNS record) has no effect on public DNS resolution; it applies only to queries that go through the Route 53 Resolver.
 

Load balancer types

Application Load Balancer


Choose an Application Load Balancer when you need a flexible feature set for your applications with HTTP and HTTPS traffic. Operating at the request level, Application Load Balancers provide advanced routing and visibility features targeted at application architectures, including microservices and containers.

Network Load Balancer


Choose a Network Load Balancer when you need ultra-high performance, TLS offloading at scale, centralized certificate deployment, support for UDP, and static IP addresses for your applications. Operating at the connection level, Network Load Balancers are capable of handling millions of requests per second securely while maintaining ultra-low latencies.

Gateway Load Balancer

 

Choose a Gateway Load Balancer when you need to deploy and manage a fleet of third-party virtual appliances that support GENEVE. These appliances enable you to improve security, compliance, and policy controls.
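
To make the Application Load Balancer description concrete, here is a hedged boto3 sketch that creates an ALB, a target group, and an HTTP listener. The subnet, security group, and VPC IDs are placeholders for resources you would already have:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internet-facing Application Load Balancer spanning two subnets in different AZs.
    lb = elbv2.create_load_balancer(
        Name="demo-alb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
        SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
        Scheme="internet-facing",
        Type="application",
    )
    lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

    # Target group with an HTTP health check; ALBs route at the request level (Layer 7).
    tg = elbv2.create_target_group(
        Name="demo-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",  # placeholder
        TargetType="instance",
        HealthCheckPath="/health",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Listener that forwards all HTTP requests on port 80 to the target group.
    elbv2.create_listener(
        LoadBalancerArn=lb_arn,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )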

  • Elastic Load Balancing (ELB)

    Elastic Load Balancing (ELB) is a service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances. It enables you to achieve greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored.

    Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance. Elastic Load Balancing can also be used in an Amazon Virtual Private Cloud (VPC) to distribute traffic between application tiers.

    Network Load Balancer

    Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Amazon EC2 Auto Scaling, Amazon Elastic Container Service (Amazon ECS), AWS CloudFormation, and AWS Certificate Manager (ACM).

     

  • A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table.

     To use an Internet gateway, your subnet’s route table must contain a route that directs Internet-bound traffic to the Internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route to a narrower range of IP addresses; for example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC. If your subnet is associated with a route table that has a route to an Internet gateway, it’s known as a public subnet.

  • An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
  • An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

  • AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
  • The talk introduces AWS identity services for daily cloud use.
  • A security group acts as a virtual firewall for your instance to control inbound and outbound traffic; it is applied at the instance level, not at the subnet or VPC level (see the boto3 sketch at the end of this section).
  • WHAT IS A SECURITY GROUP?

    A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level. For each security group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic.

    The following are basic characteristics of security groups:

    • You can specify allow rules, but not deny rules.
    • You can specify separate rules for inbound and outbound traffic.
    • By default, no inbound traffic is allowed until you add inbound rules to the security group.
    • By default, all outbound traffic is allowed until you add outbound rules to the group. Then, you specify the outbound traffic that is allowed.
    • Responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa, because security groups are stateful.
    • Instances associated with a security group can’t talk to each other unless you add rules allowing it.
      • Exception: The default security group has these rules by default.
    • After you launch an instance, you can change which security groups the instance is associated with.
  • Network Access Control Lists (ACLs): Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.
  • The focus is on authentication and authorization in AWS.
  • AWS accounts are now containers for resources, not just customer identifiers.
  • AWS organizations manage multiple accounts, with one management account paying bills.
  • IAM (Identity and Access Management) manages AWS identities, such as users and roles.
  • IAM roles use short-term credentials for better security.
  • AWS Single Sign-On (SSO) simplifies permissions for human users.
  • AWS applications use IAM roles for access without handling secrets.
  • AWS policies use string-matching to allow or deny API requests.
  • KMS (Key Management Service) integrates with AWS for data encryption.
  • Resource-based policies control access between AWS accounts.
  • S3 block public access should be enabled for security.
  • Lambda functions can be invoked by S3 via resource-based policies.
  • VPC Endpoints create private connectivity for enhanced security.
  • Multiple IAM policies combine to authorize AWS actions.
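
As referenced in the security group note above, here is a minimal boto3 sketch that creates a security group and adds an inbound allow rule (security groups only have allow rules; anything not allowed is implicitly denied). The VPC ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Security group scoped to a VPC; by default no inbound traffic is allowed.
    sg_id = ec2.create_security_group(
        GroupName="web-sg",
        Description="Allow HTTPS from anywhere",
        VpcId="vpc-0123456789abcdef0",  # placeholder
    )["GroupId"]

    # Inbound allow rule for HTTPS; responses flow back out automatically
    # because security groups are stateful.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )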