Top 32 Amazon Web Services (AWS) Interview Questions You Must Prepare 19.Mar.2024

The most obvious way is to roll your own scripts using the AWS API tools. Such scripts could be written in bash, Perl, or another language of your choice.

The next option is to use a configuration management and provisioning tool such as Puppet or, better, its successor Opscode Chef. You might also look at a tool like Scalr. Lastly, you can go with a managed solution such as RightScale.

S3 stands for Simple Storage Service. You can think of it like FTP storage: you can move files to and from it, but you cannot mount it like a filesystem. AWS automatically puts your snapshots there, as well as AMIs. Encryption should be considered for sensitive data, as S3 is a proprietary technology developed by Amazon and, from a security standpoint, as yet unproven.

Scalability is the ability of a system to increase the work it can handle on its existing hardware resources in order to cope with variability in demand. Elasticity is the ability of a system to increase that work on its existing and additional hardware resources, enabling a business to meet demand without investing in infrastructure up front. AWS has several configuration management solutions for AWS scalability, elasticity, availability and management.

We can launch different types of instances from a single AMI. An instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities.

After we launch an instance, it looks like a traditional host, and we can interact with it as we would do with any computer. We have complete control of our instances; we can use sudo to run commands that require root privileges.

One thing must be ensured: no one should be able to seize the information in the cloud while data is moving from one point to another, and there should not be any leakage of the security keys from the various stores in the cloud. Segregating your information from other companies' information, and then encrypting it by means of approved methods, is one of the options.

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). From an AMI, we launch an instance, which is a copy of the AMI running as a virtual server in the cloud. We can even launch multiple instances of an AMI.

Traditional perimeter security that we're already familiar with, using firewalls and so forth, is not supported in the Amazon EC2 world. Instead, AWS supports security groups. One can create a security group for a jump box with SSH access – only port 22 open. From there, a webserver group and a database group are created. The webserver group allows 80 and 443 from the world, but port 22 *only* from the jump box group. Further, the database group allows port 3306 from the webserver group and port 22 from the jump box group. Add any machines to the webserver group and they can all hit the database. No one from the world can, and no one can SSH directly to any of your boxes.
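The layered security groups described above can be sketched as the `IpPermissions` structures that boto3's `authorize_security_group_ingress()` accepts. This is a minimal illustration only: the group IDs (`sg-jumpbox`, `sg-webserver`) are placeholders, and no AWS call is made here.

```python
# Jump box group: port 22 open to the world.
jump_box_rules = [
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]

# Web tier: 80/443 from anywhere, 22 only from the jump box group.
webserver_rules = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "UserIdGroupPairs": [{"GroupId": "sg-jumpbox"}]},
]

# Database tier: 3306 from the web tier, 22 from the jump box; nothing from the world.
database_rules = [
    {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
     "UserIdGroupPairs": [{"GroupId": "sg-webserver"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "UserIdGroupPairs": [{"GroupId": "sg-jumpbox"}]},
]
```

Note that the database rules reference other *groups* rather than CIDR ranges, which is what lets any machine added to the webserver group reach the database automatically.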

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable (scalable) computing capacity in the cloud. You can use Amazon EC2 to launch as many virtual servers as you need. In Amazon EC2 you can configure security and networking as well as manage storage. The Amazon EC2 service also helps in obtaining and configuring capacity with minimal friction.

Amazon EC2 provides many data storage options for your instances. Each option has a unique combination of performance and durability. These storage options can be used independently or in combination to suit your requirements.

There are mainly four types of storage provided by AWS:

Amazon EBS: durable, block-level storage volumes that can be attached to a running Amazon EC2 instance. An Amazon EBS volume persists independently from the running life of an Amazon EC2 instance. After an EBS volume is attached to an instance, you can use it like any other physical hard drive. Amazon EBS also supports encryption.

Amazon EC2 Instance Store: Storage disk that is attached to the host computer is referred to as instance store. The instance storage provides temporary block-level storage for Amazon EC2 instances. The data on an instance store volume persists only during the life of the associated Amazon EC2 instance; if you stop or terminate an instance, any data on instance store volumes is lost.

Amazon S3: Amazon S3 provides access to reliable and inexpensive data storage infrastructure. It is designed to make web-scale computing easier by enabling you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web.

Adding Storage: Every time you launch an instance from an AMI, a root storage device is created for that instance. The root storage device contains all the information necessary to boot the instance. You can specify storage volumes in addition to the root device volume when you create an AMI or launch an instance using block device mapping.
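A block device mapping of the kind described above can be sketched as the structure `run_instances()` accepts: the AMI's root volume plus one additional data volume. Device names and sizes here are purely illustrative assumptions.

```python
# Hypothetical block-device mapping: the root device from the AMI plus an
# extra 100 GiB EBS data volume that survives instance termination.
block_device_mappings = [
    {"DeviceName": "/dev/xvda",   # root storage device, created from the AMI
     "Ebs": {"VolumeSize": 8, "VolumeType": "gp2", "DeleteOnTermination": True}},
    {"DeviceName": "/dev/xvdf",   # additional data volume
     "Ebs": {"VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": False}},
]
```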

Amazon Web Services provides several ways to access Amazon EC2, such as the web-based console, the AWS Command Line Interface (CLI), and the AWS Tools for Windows PowerShell. First, you need to sign up for an AWS account, and then you can access Amazon EC2.

Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.
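As a rough sketch, such a Query API request is just an HTTP(S) URL carrying the `Action` parameter. Real requests also require Signature Version 4 authentication parameters, which are omitted here; only the query-string construction is shown.

```python
from urllib.parse import urlencode

# Build the query string for an (unsigned) EC2 Query API request.
params = {
    "Action": "DescribeInstances",   # the Query parameter named Action
    "Version": "2016-11-15",         # an API version string, illustrative
}
url = "https://ec2.amazonaws.com/?" + urlencode(params)
```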

Stopping and Starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.

Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start the instance again at a later time.
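The effect of the `deleteOnTermination` attribute can be illustrated with a small helper that, given a hypothetical set of block-device mappings, reports which volumes would outlive a terminated instance. The mapping data is invented for the example.

```python
def volumes_surviving_termination(mappings):
    """Return device names of EBS volumes whose DeleteOnTermination flag
    is False, i.e. the volumes kept after the instance is terminated."""
    return [m["DeviceName"] for m in mappings
            if not m["Ebs"].get("DeleteOnTermination", True)]

# Hypothetical mappings: the root volume is deleted at termination,
# the data volume is retained.
mappings = [
    {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": True}},
    {"DeviceName": "/dev/xvdf", "Ebs": {"DeleteOnTermination": False}},
]
```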

  • In AWS, we can use the Lambda@Edge utility to reduce network latency for end users.
  • In Lambda@Edge there is no need to provision or manage servers. We can just upload our Node.js code to AWS Lambda and create functions that will be triggered on CloudFront requests.
  • When a request for content is received by a CloudFront edge location, the Lambda code is ready to execute.
  • This is a very good option for scaling up operations in CloudFront without managing servers.
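A minimal Lambda@Edge handler triggered on viewer requests might look like the sketch below: it redirects plain-HTTP requests to HTTPS. Python is used here for consistency with the other examples (Lambda@Edge also accepts Node.js, as noted above); the event shape follows CloudFront's Lambda@Edge event structure.

```python
def handler(event, context):
    """Lambda@Edge viewer-request sketch: redirect HTTP traffic to HTTPS."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront reports the viewer's protocol in this header when configured
    # to forward it; default to "https" if absent.
    proto = headers.get("cloudfront-forwarded-proto",
                        [{"value": "https"}])[0]["value"]
    if proto == "http":
        return {
            "status": "301",
            "statusDescription": "Moved Permanently",
            "headers": {"location": [{
                "key": "Location",
                "value": "https://" + headers["host"][0]["value"] + request["uri"],
            }]},
        }
    return request  # already HTTPS: pass the request through unchanged
```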

Configuration management has been around for a long time in web operations and systems administration, yet its cultural popularity has been limited. Most systems administrators configure machines the way software was developed before version control – by manually making changes on servers. Each server can then be, and usually is, slightly different. Troubleshooting, though, is straightforward, as you log in to the box and operate on it directly. Configuration management brings a large automation tool into the picture, managing servers like the strings of a puppet. This forces standardization, best practices, and reproducibility, as all configs are versioned and managed. It also introduces a new way of working, which is the biggest hurdle to its adoption.

Enter the cloud, and configuration management becomes even more critical. That's because virtual servers such as Amazon's EC2 instances are much less reliable than physical ones. You absolutely need a mechanism to rebuild them as-is at any moment. This pushes best practices like automation, reproducibility and disaster recovery into center stage.

Yes. This is an incredible feature of AWS and cloud virtualization. Spin up a new, larger instance than the one you are currently running. Pause that instance, detach the root EBS volume from that server, and discard it. Then stop your live instance and detach its root volume. Note down the unique device ID and attach that root volume to your new server, and then start it again. Voila, you have scaled vertically in place!
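The recipe above can be sketched as an ordered plan of EC2 API calls. This is purely illustrative: it emits the sequence of calls rather than invoking boto3, so the order can be inspected, and all IDs (`vol-new-root`, the device name) are placeholder assumptions.

```python
def vertical_scale_plan(live_instance_id, live_root_volume, new_instance_id):
    """Ordered EC2 API calls for the in-place vertical-scaling recipe:
    stop the new larger instance, discard its root volume, move the live
    instance's root volume over, then boot the larger instance."""
    return [
        ("stop_instances",  {"InstanceIds": [new_instance_id]}),
        ("detach_volume",   {"VolumeId": "vol-new-root"}),       # placeholder ID
        ("stop_instances",  {"InstanceIds": [live_instance_id]}),
        ("detach_volume",   {"VolumeId": live_root_volume}),
        ("attach_volume",   {"VolumeId": live_root_volume,
                             "InstanceId": new_instance_id,
                             "Device": "/dev/xvda"}),            # assumed device name
        ("start_instances", {"InstanceIds": [new_instance_id]}),
    ]
```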

There are 4 components involved, described below. Amazon S3: with this, one can retrieve the key input data involved in creating the cloud architectural design, and the output produced can also be stored in this component as the result of the keys specified. Amazon EC2 instance: helpful for running a large distributed system on a Hadoop cluster; automatic parallelization and job scheduling can be achieved by this component.

Amazon SQS: this component acts as a mediator between different controllers. It is also used for buffering the requests received by the Amazon manager.

Amazon SimpleDB: helps in storing the transitional state log and the tasks executed by the consumers.

AWS (Amazon Web Services) is a platform that provides secure cloud services: database storage, compute power, content delivery, and other services to help businesses scale and grow.

AMI can be elaborated as Amazon Machine Image: basically, a template consisting of a software configuration. For example, an OS, applications, and an application server. When you start an instance, a duplicate of the AMI runs as a virtual server in the cloud.

Autoscaling is a feature of AWS which allows you to configure and automatically provision and spin up new instances without the need for your intervention.  

You do this by setting thresholds and metrics to monitor.  When those thresholds are crossed, a new instance of your choosing will be spun up, configured, and rolled into the load balancer pool. Voila, you’ve scaled horizontally without any operator intervention!
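The threshold-and-metric trigger described above can be illustrated with a simplified local check: scale out when the average of the sampled CPU metric crosses a threshold. The 70% default and the sample values are assumptions for illustration, not CloudWatch defaults.

```python
def should_scale_out(cpu_samples, threshold=70.0):
    """Simplified Auto Scaling trigger: return True when the average CPU
    utilisation over the sampled period exceeds the threshold, meaning a
    new instance should be spun up and added to the load balancer pool."""
    return sum(cpu_samples) / len(cpu_samples) > threshold
```

In real Auto Scaling, CloudWatch evaluates the metric and the scaling policy adds the instance; this sketch only shows the decision logic.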

AMI stands for Amazon Machine Image. It is effectively a snapshot of the root filesystem. Commodity appliance servers have a BIOS that points to the master boot record of the first block on a disk. A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.

Create a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as needed. Be wary of putting sensitive data onto an AMI. For instance, your access credentials should be added to an instance after spinup. With a database, mounting an external volume that carries your MySQL data after spinup is enough.

The API tools can be used for spinning up services and also in written scripts. Those scripts could be coded in Perl, bash or other languages of your preference. One more option is configuration management and provisioning tools such as Puppet or its improved descendant Chef. A tool called Scalr can also be used, and finally we can go with a managed solution like RightScale.

As Amazon EC2 is a cloud service, it has all the cloud features. Amazon EC2 provides the following features:

  1. Virtual computing environments (known as instances)
  2. Pre-configured templates for your instances (known as Amazon Machine Images – AMIs)
  3. An Amazon Machine Image (AMI) is a complete package of what you need for your server (including the operating system and additional software)
  4. Various configurations of CPU, memory, storage and networking capacity for your instances (known as instance types)
  5. Secure login information for your instances using key pairs (AWS stores the public key and you store the private key in a secure place)
  6. Storage volumes for temporary data that are deleted when you stop or terminate your instance (known as instance store volumes)
  7. Persistent storage volumes (using Amazon Elastic Block Store – EBS)
  8. A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances (using security groups)
  9. Static IP addresses for dynamic cloud computing (known as Elastic IP addresses)
  10. Metadata for your instances (known as tags)
  11. Virtual networks that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network (known as virtual private clouds – VPCs)

EBS is a virtualized SAN, or storage area network. That means it is RAID storage to start with, so it's redundant and fault tolerant. If disks die in that RAID, you don't lose data. Great! It is also virtualized, so you can provision and allocate storage and attach it to your server with various API calls – no calling the storage expert and asking him or her to run specialized commands from the hardware vendor.

Performance on EBS can exhibit variability. That is, it can go above the SLA performance level, then drop below it. The SLA provides you with an average disk I/O rate you can expect. This can frustrate some folks, especially performance experts who expect reliable and consistent disk throughput on a server. Traditional physically hosted servers behave that way. Virtual AWS instances do not.

Back up EBS volumes by using the snapshot facility, via an API call or via a GUI interface like ElasticFox.

Improve performance by using Linux software RAID and striping across four volumes.

AMI stands for Amazon Machine Image. It is effectively a snapshot of the root filesystem. Commodity hardware servers have a BIOS that points to the master boot record of the first block on a disk. A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.

Build a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as required. Be wary of putting sensitive data onto an AMI. For instance, your access credentials should be added to an instance after spinup. With a database, mount an outside volume that holds your MySQL data after spinup as well.

Different types of events triggered by Amazon CloudFront are as follows:

Viewer Request: When an end user or a client program makes an HTTP/HTTPS request to CloudFront, this event is triggered at the Edge Location closest to the end user.

Viewer Response: When a CloudFront server is ready to respond to a request, this event is triggered.

Origin Request: When a CloudFront server does not have the requested object in its cache, the request is forwarded to the Origin server. At this time, this event is triggered.

Origin Response: When CloudFront server at an Edge location receives the response from Origin server, this event is triggered.
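The four triggers above correspond to distinct `eventType` values that CloudFront passes to Lambda@Edge functions. A small helper can map an event record back to the trigger name; the event dict here is a minimal assumed shape, not a full CloudFront payload.

```python
def cloudfront_trigger(event):
    """Map a CloudFront Lambda@Edge event record to one of the four
    trigger names described above, via its config.eventType field."""
    names = {
        "viewer-request":  "Viewer Request",
        "viewer-response": "Viewer Response",
        "origin-request":  "Origin Request",
        "origin-response": "Origin Response",
    }
    event_type = event["Records"][0]["cf"]["config"]["eventType"]
    return names[event_type]
```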

Some of the main features of Amazon CloudFront are as follows:

  • Device Detection
  • Protocol Detection
  • Geo Targeting
  • Cache Behavior
  • Cross-Origin Resource Sharing
  • Multiple Origin Servers
  • HTTP Cookies
  • Query String Parameters
  • Custom SSL

There are several best practices for securing Amazon EC2. Following are a few of them.

  1. Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  2. Restrict access by only allowing trusted hosts or networks to access ports on your instance.
  3. Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up permissions that you require.
  4. Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
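The last point is typically enforced in the instance's SSH daemon configuration. A minimal `sshd_config` fragment might look like the following (these are standard OpenSSH directives; exact hardening requirements vary by environment):

```
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

With these settings, only key-pair authentication is accepted, matching the EC2 key-pair login model described earlier.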

Here is the list of the layers of cloud computing:

  • PaaS – Platform as a Service
  • IaaS – Infrastructure as a Service
  • SaaS – Software as a Service

An Elastic Load Balancer ensures that incoming traffic is distributed optimally across various AWS instances. A buffer synchronizes different components and makes the arrangement more elastic to a burst of load or traffic; without it, components are prone to receiving and processing requests in an unstable way. The buffer creates equilibrium between the various components and makes them work at the same rate to provide faster service.
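As a toy illustration of traffic distribution, the sketch below hands out backend instances in rotation – the simplest version of what a load balancer does. The instance IDs are invented; real ELBs also do health checks and smarter routing.

```python
from itertools import cycle

def round_robin(instances):
    """Return a picker that distributes requests across the given
    instances in rotation (simplest load-balancing strategy)."""
    pool = cycle(instances)
    return lambda: next(pool)

# Hypothetical backend pool.
pick = round_robin(["i-aaa", "i-bbb", "i-ccc"])
```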

The fundamental elements of AWS are:

Route 53: A DNS web service

Simple Email Service (SES): It permits sending e-mail using a RESTful API request or through standard SMTP

Identity and Access Management (IAM): It provides improved security and identity management for your AWS account

Simple Storage Service (S3): It is a storage facility and the most widely used AWS service

Elastic Compute Cloud (EC2): It provides on-demand computing resources for hosting applications. It is extremely valuable in the case of variable workloads

Elastic Block Store (EBS): It provides persistent storage volumes that attach to EC2, enabling you to retain data beyond the lifespan of a single EC2 instance

CloudWatch: Used to monitor AWS resources, it permits administrators to view and collect key metrics. Additionally, one can set a notification alarm in case of trouble

Yes, you can vertically scale an Amazon instance. To do so:

  • Spin up a new, larger instance than the one you are currently running
  • Pause that instance, detach the root EBS volume from the server, and discard it
  • Next, stop your existing instance and detach its root volume
  • Note down the unique device ID and attach that root volume to your new server
  • Then start it again

There are 5 layers, and they are listed below:

  • CC- Cluster Controller
  • SC- Storage Controller
  • CLC- Cloud Controller
  • Walrus
  • NC- Node Controller

Amazon SQS (Simple Queue Service) is a message-passing mechanism used for communication between different components that are connected to each other. It acts as a communicator between various components of Amazon and keeps all the different functional components together. This helps the components to be loosely coupled, and provides an architecture that is more resilient to failure.
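The loose coupling described above can be illustrated with a local stand-in for SQS semantics: a producer enqueues messages and a consumer drains them later, so neither side needs the other to be available at the same moment. The function names merely echo the shape of the SQS API; this is not boto3 and makes no AWS calls.

```python
import queue

# Local analogue of an SQS queue.
q = queue.Queue()

def send_message(body):
    """Producer side: enqueue a message (~ SQS SendMessage)."""
    q.put({"Body": body})

def receive_messages(max_messages=10):
    """Consumer side: drain up to max_messages (~ SQS ReceiveMessage).
    The consumer can run long after the producer has gone away."""
    out = []
    while not q.empty() and len(out) < max_messages:
        out.append(q.get())
    return out
```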