AWS SAA-C02:知识体系+401题解析

近日考完AWS SAA-C02(AWS Certified Solutions Architect - Associate)认证。分享一下参考题目以及部分题目的解法分析。希望能帮助到正在准备考试的朋友们。

IAM

  • 用户(User):

    代表访问AWS的终端用户

    • 可使用密码来访问AWS管理平台
    • 可使用Access Key ID和Secret Access Key并通过API, CLI或SDK的形式来访问AWS服务(主要针对应用程序对AWS资源的访问)
    • 默认用户没有任何权限,我们需要用策略赋予每个用户所需要的最小权限
  • 组(Group)

    :拥有相同权限的用户组合

    • 拥有相同权限的用户可以归入一个组,方便权限的统一管理和控制
    • 一个组可以拥有多个用户,一个用户可以属于多个组
  • 角色(Role)

    :角色可以分配给AWS服务,让AWS服务有访问其他AWS资源的权限

    • 角色不包含任何用户名/密码
    • 角色比用户更加安全可靠,因为没有泄露用户名/密码或者Access Key的可能性
    • 举个例子,我们可以赋予EC2实例一个角色,让其有访问S3的读写权限
      • 使用**角色(Role)**比使用Access Key和Secret Access Key要安全很多
      • 角色更容易管理和变更
      • 角色可以在EC2实例启动之后再分配,并且可以随时更改角色以及角色关联的策略
        • 在旧版本考试中,角色只能在EC2创建的时候分配,并且实例启动之后不能对角色进行任何更改
      • 角色是跨区域的,创建的角色可以在任何区域中使用
策略(Policy)

:定义具体访问权限的文档

  • 策略具体定义了能访问哪些AWS资源,并且能执行哪些操作(比如List, Read, Write等)
  • 策略的文档以JSON的格式展现
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": [ "A list of the permissions the role is allowed to use" ],
        "Resource": [ "A list of the resources the role is allowed to access" ]
    }
}   
  • 允许服务代入(Assume)角色的信任策略。例如,您可以通过 UpdateAssumeRolePolicy 操作将以下信任策略附加到角色上。该信任策略允许 Amazon EC2 代入该角色并使用附加在角色上的权限。

    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "TrustPolicyStatementThatAllowsEC2ServiceToAssumeTheAttachedRole",
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    }
  • 允许用户对名称与其用户名匹配的 DynamoDB 表执行所有操作

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:AWS-REGION-IDENTIFIER:ACCOUNT-ID-WITHOUT-HYPHENS:table/${aws:username}"
    }
  ]
}

例如,以下策略允许删除您自己的多重身份验证 (MFA) 设备,但前提是您在最近 1 小时(3600 秒)内已使用 MFA 进行登录。

{
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowRemoveMfaOnlyIfRecentMfa",
        "Effect": "Allow",
        "Action": [
            "iam:DeactivateMFADevice",
            "iam:DeleteVirtualMFADevice"
        ],
        "Resource": "arn:aws:iam::*:user/${aws:username}",
        "Condition": {
            "NumericLessThanEquals": {"aws:MultiFactorAuthAge": "3600"}
        }
    }
}

IAM实体(用户或角色)的权限边界用于设置该实体可以拥有的最大权限,它会改变该用户或角色的有效权限。实体的有效权限是所有影响该用户或角色的策略共同决定的权限。在一个账户中,实体的权限可能会受到基于身份的策略、基于资源的策略、权限边界、组织SCP或会话策略的影响。因此,解决方案架构师可以在开发人员IAM角色上设置权限边界,明确拒绝附加管理员策略。
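下面给出一个用 boto3 为开发人员角色设置权限边界的简单示意(仅为演示思路,非权威实现),其中的角色名、账号ID和策略ARN均为假设的占位值,实际使用时请替换为自己的资源:

import json
import boto3

iam = boto3.client("iam")

# 假设的信任策略:允许EC2服务代入该角色
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# 创建开发人员角色时附加权限边界(边界策略ARN为占位值);
# 角色的有效权限不会超出权限边界所允许的范围
iam.create_role(
    RoleName="developer-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    PermissionsBoundary="arn:aws:iam::123456789012:policy/DeveloperBoundary",
)

# 再为角色附加日常使用的权限策略(这里以AWS托管的只读策略为例)
iam.attach_role_policy(
    RoleName="developer-role",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)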

EC2

Amazon Elastic Compute Cloud (Amazon EC2) 在 Amazon Web Services (AWS) 云中提供可扩展的计算容量。使用 Amazon EC2 可避免前期的硬件投入,因此您能够快速开发和部署应用程序。通过使用 Amazon EC2,您可以根据自身需要启动任意数量的虚拟服务器、配置安全和网络以及管理存储。Amazon EC2 允许您根据需要进行缩放以应对需求变化或流行高峰,降低流量预测需求。

EC2的特性
  • EC2是AWS提供的一种计算服务,它以**EC2实例(EC2 Instance)**的形式存在,因此一个EC2实例可以被认为是一个虚拟机
  • 预配置的EC2镜像被称之为Amazon Machine Images (AMI),一个AMI包含了你打包好的操作系统,以及相应的应用程序和配置
  • 不同的EC2实例类型包含了不同的CPU,内存,存储和网络性能
  • AWS默认以及建议使用**密钥对(Key Pair)**的形式访问EC2实例,AWS将保存公钥,您将负责保存私钥
  • **EC2实例存储(Instance store volumes)**是一种短暂性的存储,一旦您停止或者终止您的EC2实例,这个存储内的数据将永久消失
  • **EBS存储(Amazon EBS volumes)**是一种持续性的存储,不管EC2实例是什么状态,你都可以保留EBS存储内的数据。这种类型的存储对于进行数据盘的迁移非常方便,使用场景也比较多。
  • **安全组(Security Group)**会通过检测数据包的端口、协议、源IP地址从而充当防火墙的作用
  • **弹性IP(Elastic IP address)**可以方便您为您的EC2实例分配一个固定的公网IP地址,并且保证每次关机重启该地址依旧有效。
  • **虚拟私有云(Virtual Private Cloud, VPC)**是AWS的网络组件,可以让你的AWS资源与其他用户的资源在逻辑上进行隔离。您也可以使用VPC与您的物理数据中心进行连接。
EC2实例的计费类型

EC2的实例计费类型有很多种,每一种都有自己的使用场景,不同的客户可能对计费类型的需求也不一样。

  • On-Demand Instances (按需实例)
  • Reserved Instances (预留实例)
  • Spot Instances (竞价实例)
  • Scheduled Reserved Instances (计划的预留实例)
  • Dedicated Instances (专用的实例)
  • Dedicated Hosts(专用的主机)

On-Demand Instances (按需实例)总结特点如下:

  • 按秒收费(以前是按小时收费,现在AWS更改了),用多少收费多少
  • EC2实例可以根据业务需求实时增加或减少规模
  • 不会有昂贵的初始投资成本
  • 适合用来部署有突发性,爆发性流量的应用程序,比如双11
  • 适合用来测试和开发新的应用程序

Reserved Instances (预留实例)总结特点如下:

  • 更低的每小时运行成本, 1年的合同可以获得40%左右的折扣,3年的合同可以获得60%的折扣。
  • 买断了一定的计算资源,不会出现AWS计算资源不足而无法创建EC2的情况
  • 费用在合同期内是固定的,因此费用可预期
  • 适合需要长期运行、稳定的、可预估的应用程序

目前预留实例还分两种类型,分别是标准RI和可转换RI。可转换RI可以更改实例系列、操作系统、租期和付款选项,更加灵活,但是折扣率会比标准RI稍微少一些。

Spot Instances (竞价实例)总结特点如下:

  • 竞价实例的价格每个小时都会变化,依据是AWS空闲计算容量的供需关系
  • 可以非常有效地降低运行EC2实例的成本(特别对于有大数量实例需求的情况下)
  • 在其上安装的应用程序随时可以中断,也就是数据和任务处理结果都需要保存在外部存储上
  • 对实例运行开始的时间没有太多要求

Scheduled Reserved Instances(定期预留实例)使得EC2计算容量能够以优惠的价格为定期使用而预留。例如,某个EC2实例类型可以为世界时(UTC)01:00到05:00之间的日常运行而预留,从而执行整夜的数据分析,或者每周、每月执行计算密集型计算。

  • 了解不同EC2实例类型的区别,要知道在不同的使用场景需要使用哪一种类型的EC2

  • 按需实例(On Demand Instance)- 用多少时间付费多少,费用精确到秒,不用则可以随时关闭/终止并停止费用的产生

  • 竞价实例(Spot Instance)- 以低于按需实例的价格竞得实例,但当现货价格高于设定的价格后实例会自动被终止

  • 保留实例(Reserved Instance) – 相当于买断一个实例1年/3年,期间不管实例开不开都需要付总的费用,但平均下来费用会比按需实例便宜

  • 专用主机实例(Dedicated Hosts)- 涉及到软件许可证的时候,会考虑使用专用主机实例

  • 终止一个竞价实例

    • 如果主动终止一个竞价实例,需要为当前这个完整小时付费
    • 如果因为价格上涨,AWS终止了你的竞价实例,那么当前这个小时的费用将被免除
  • 实例的**终止保护(Termination Protection)**功能是默认关闭的,有需要必须手动开启。开启后实例将无法被终止,除非先将终止保护关闭

  • 使用EBS作为根存储的实例,默认情况下如果该实例被终止,这个根EBS卷也会被随之删除
    • 但也可以设置为实例被终止的时候保留根EBS卷

VPC

简单来说,VPC就是一个AWS用来隔离你的网络与其他客户网络的虚拟网络服务。在一个VPC里面,用户的数据会逻辑上地与其他AWS租户分离,用以保障数据安全。

可以简单地理解为一个VPC就是一个虚拟的数据中心,在这个虚拟数据中心内我们可以创建不同的子网(公有网络和私有网络),搭建我们的网页服务器,应用服务器,数据库服务器等等服务。

VPC有如下特点:

  • VPC内可以创建多个子网

  • 可以在选择的子网上启动EC2实例

  • 在每一个子网上分配自己规划的IP地址

  • 每一个子网配置自己的路由表

  • 创建一个Internet Gateway并且绑定到VPC上,让EC2实例可以访问互联网

  • VPC对你的AWS资源有更安全的保护

  • 部署针对实例的安全组(Security Group)

  • 部署针对子网的网络控制列表(Network Access Control List)

  • 一个VPC可以跨越多个可用区(AZ)

  • 一个子网只能在一个可用区(AZ)内

  • 安全组(Security Group)是有状态的,而网络控制列表(Network Access Control List)是无状态的
    • 有状态:如果入向流量被允许,则出向的响应流量会被自动允许
    • 无状态:入向规则和出向规则需要分别单独配置,互不影响
  • VPC CIDR块的大小范围是从/16到/28,不能设置在这个范围外的掩码长度

  • VPC可以通过Virtual Private Gateway (VGW) 来与企业本地的数据中心相连

  • VPC可以通过AWS PrivateLink访问其他AWS账户托管的服务(VPC终端节点服务)

默认VPC
  • 在每一个区域(Region),AWS都有一个默认的VPC
  • 在这个VPC里面所有子网都绑定了一个路由表,其中有默认路由(目的地址 0.0.0.0/0)到互联网
  • 所有在默认VPC内启动的EC2实例都可以直接访问互联网
  • 在默认VPC内启动的EC2实例都会被分配公网地址和私有地址

如下图所示,我们在某一个区域内有一个VPC,这个VPC的网络是172.31.0.0/16

在这个VPC内有2个子网,分别是172.31.0.0/20 和 172.31.16.0/20。这两个子网内都有一个EC2实例,每一个实例拥有一个该子网的私有地址(172.31.x.x)以及一个AWS分配的公网IP地址(203.0.113.x)。

这两个实例关联了一个主路由表,该路由表拥有一个访问172.31.0.0/16 VPC内流量的路由条目;还有一个目的为 0.0.0.0/0 的默认路由条目,指向Internet网关。

因此这两个实例都可以通过Internet网关访问外网。

VPC Peering

VPC Peering是两个VPC之间的网络连接,通过此连接,你可以使用私有IPv4地址在两个VPC之间传输流量。这两个VPC内的实例可以像在同一个网络中一样彼此通信。

  • 可以通过AWS内网将一个VPC与另一个VPC相连
  • 同一个AWS账号内的2个VPC可以进行VPC Peering
  • 不同AWS账号内的VPC也可以进行VPC Peering
  • 不支持VPC Transitive Peering
    • 如果VPC A和VPC B做了Peering
    • 而且VPC B和VPC C做了Peering
    • 那么VPC A是不能和VPC C进行通信的
    • 要通信,只能将VPC A和VPC C进行Peering

如下图,VPC A和VPC B进行了Peering之后,子网10.0.0.0/16和172.31.0.0/16会被打通,并且可以无阻地互相访问。

知识点

  • 如果两个VPC出现了地址覆盖/重复,那么这两个VPC不能做Peering
    • 例如10.0.0.0/16的VPC与10.0.0.0/24的VPC是不能做对等连接的
  • 参与VPC Peering的两个VPC可以来自不同的区域(这个功能以前是没有的)
  • 两个VPC通过VPC Peering打通后,可以把两个VPC中所有子网的路由都互相打通,而且两边的地址不能重叠
  • 如果只需要让两个VPC中的特定子网互通,可以只在这两个子网各自的路由表中添加指向对等连接的路由,并配合安全组/NACL做进一步限制,如下面的示例所示
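下面是一个用 boto3 建立VPC对等连接并只打通特定子网路由的简单示意(仅为演示思路),其中的VPC ID、路由表ID和CIDR均为假设的占位值:

import boto3

ec2 = boto3.client("ec2")

# 由VPC A(请求方)向VPC B(接受方)发起对等连接(同账号、同区域的简单情形)
pcx = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # VPC A
    PeerVpcId="vpc-bbbb2222",  # VPC B
)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 接受方确认这个对等连接
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 只在需要互通的两个子网各自的路由表中添加指向对方子网CIDR的路由,
# 即可把连通范围限制在这两个子网之间
ec2.create_route(
    RouteTableId="rtb-aaaa1111",            # VPC A中目标子网的路由表
    DestinationCidrBlock="172.31.16.0/20",  # VPC B中目标子网的CIDR
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId="rtb-bbbb2222",            # VPC B中目标子网的路由表
    DestinationCidrBlock="10.0.0.0/20",     # VPC A中目标子网的CIDR
    VpcPeeringConnectionId=pcx_id,
)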
弹性 IP (Elastic IP)

弹性IP是AWS提供的静态公网IPv4地址,通过申请弹性IP地址,你可以将一个固定的公网IP分配给一个EC2实例。在这个实例重启、关闭甚至终止之后,你都可以回收这个弹性IP地址,并且在需要的时候分配给一个新的EC2实例。

默认情况下,AWS分配的公网IP地址都是浮动的,这意味着如果你关闭再启动你的EC2实例,这个地址也会被释放并且重新分配。但是弹性IP地址是和你的AWS账号绑定的,除非你手动释放掉这个地址,否则这个地址可以一直被你拥有。

如果弹性IP地址绑定的EC2是stop状态,也是要收费的,只有绑定在running状态的EC2上才是免费的。其实弹性IP只要不被有效使用就需要收费,这是为了避免资源浪费。

知识点

每个子网 CIDR 块中的前四个 IP 地址和最后一个 IP 地址无法供您使用,而且无法分配到一个实例。

比如对于一个10.0.0.0/16的VPC,如果有10.0.0.0/24的子网和10.0.1.0/24的子网,那么

  • 10.0.0.0是网络地址
  • 10.0.0.1是AWS预留的地址,用于VPC路由器
  • 10.0.0.2是AWS预留的地址,该地址被用于VPC内的DNS服务器(但对于10.0.1.0/24这个子网,10.0.1.2这个地址不会被使用,但是仍然会被保留)
  • 10.0.0.3是AWS预留的地址,供将来使用
  • 10.0.0.255是广播地址,但VPC内不支持广播,只支持单播
网络ACL(NACL)

**网络访问控制列表(NACL)**与安全组(Security Group)类似,它能在子网的层面控制所有入站和出站的流量,为VPC提供更加安全的保障。

知识点

  • 在你的默认VPC内会有一个默认的网络ACL(NACL),它会允许所有入向和出向的流量
  • 你可以创建一个自定义的网络ACL,在创建之初所有的入向和出向的流量都会被拒绝,除非进行手动更改
  • 对于所有VPC内的子网,每一个子网都需要关联一个网络ACL。如果没有关联任何网络ACL,那么子网会关联默认的网络ACL
  • 一个网络ACL可以关联多个子网,但一个子网只能关联一个网络ACL
  • 网络ACL包含了一系列(允许或拒绝)的规则,网络ACL会按顺序执行,一旦匹配就结束,不会再继续往下匹配
  • 网络ACL有入向和出向的规则,每一条规则都可以配置允许或者拒绝
  • 网络ACL是无状态的(安全组是有状态的)
    • 被允许的入向流量的响应流量必须被精准的出向规则所允许(反之亦然)
    • 一般至少需要允许临时端口(ephemeral ports,TCP 1024-65535)的出向流量,以便放行返回的响应流量
    • 关于临时端口的用法,可以参考下面的NACL规则示例
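下面用 boto3 演示在自定义网络ACL上同时放行入站HTTP和出站临时端口的做法(简单示意,NACL ID、CIDR均为假设的占位值):

import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # 占位的网络ACL ID

# 入站规则:允许来自任意地址的HTTP(TCP 80)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",          # 6 = TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)

# 出站规则:NACL是无状态的,必须显式放行临时端口,
# 否则HTTP的响应流量会被丢弃
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)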
NAT

NAT的全称是“Network Address Translation”,中文解释是“网络地址转换”,它可以让整个机构只使用一个公有的IP地址出现在Internet上。

NAT是一种把内部私有地址(192.168.1.x,10.x.x.x等)转换为Internet公有地址的协议,它一定程度上解决了公网地址不足的问题。

NAT实例(NAT Instance)

  • 创建NAT实例之后,一定要关闭源/目标检查(Source/Destination Check)
  • NAT实例需要创建在公有子网内
  • 私有子网需要创建一条默认路由(0.0.0.0/0),指到NAT实例
  • NAT实例的瓶颈在于实例的大小,如果遇到了网络吞吐瓶颈,你可以加大实例类型
  • 需要自己创建弹性伸缩组(Auto Scaling Group),自定义脚本来达到NAT实例的高可用(比如部署在多个可用区)
  • 需要关联一个安全组(Security Group)

注意:NAT网关按照时间和流量收费,免费套餐不包含,记得及时删除,以免产生不必要的费用。

注意:Elastic IP Addresses 没有被使用时要收费,如不使用,记得及时删除,以免产生不必要的费用。

NAT网关(NAT Gateway)
  • 网络吞吐可以达到10Gbps

  • 不需要为NAT的操作系统和程序打补丁

  • 不需要关联安全组

  • 自动分配一个公网IP地址(EIP)

  • 私有子网需要创建一条默认路由(0.0.0.0/0)指向NAT网关(见下方示例)

  • 不需要更改源/目标检查(Source/Destination Check)
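下面是一个用 boto3 创建NAT网关并把私有子网默认路由指向它的简单示意(子网ID、路由表ID均为假设的占位值):

import boto3

ec2 = boto3.client("ec2")

# 申请一个弹性IP,并在公有子网中创建NAT网关
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-1234",
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# 等待NAT网关变为可用状态后,再把私有子网的默认路由指向它
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(
    RouteTableId="rtb-private-1234",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)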

Internet Gateway和NAT Gateway

在亚马逊云上,创建VPC后,VPC内的实例(instance)如何访问Internet呢? 通常有两种方法:

  1. 直接分配公网IP地址
    • 将VPC关联到互联网网关(Internet Gateway)
    • 这种方式,instance所在的子网,属于公共子网
  2. 通过NAT Gateway 或 NAT instance
    • 在每个子网的路由表中,将默认路由设置为NAT Gateway 或 NAT instance
    • 这种方式,instance所在的子网,属于私有子网

使用ELB(弹性负载均衡器)从Internet访问公共子网和私有子网的方式不同。对于Internet可访问的ELB,只能关联到公共子网,即默认路由是到Internet Gateway的。如果要从Internet访问ELB,再访问到私有子网,需要中间加一层公共子网。

VPC流日志(Flow Logs)

**VPC流日志(Flow Logs)**可以捕获经过你的VPC的网络流量(入向和出向),Flow Logs的日志数据保存在Amazon CloudWatch Logs中。

创建了Flow Logs后,你可以在Amazon CloudWatch Logs中查看和检索其数据。

Flow logs可以在以下级别创建:

  • VPC级别
  • 子网级别
  • 网络接口级别

同时,VPC Flow Logs还有如下特性:

  • 对于Peer VPC不能开启Flow Logs功能,除非这个VPC也在你的账户内
  • 不能给Flow Logs打标签
  • Flow Logs创建后不能更改其配置

VPC Flow Logs并不捕获所有经过VPC的流量,以下流量将不会被捕获:

  • 实例访问Amazon DNS服务器(即.2地址)的流量
  • Windows进行Windows许可证激活的流量
  • 访问实例Metadata的流量(即去往169.254.169.254的流量)
  • DHCP流量
  • 访问VPC路由器的流量(即.1地址)
VPC终端节点(VPC Endpoints)

在一般的情况下,VPC内的EC2实例如果需要访问S3或者DynamoDB等服务的资源,你需要通过Internet公网来访问这些服务。有没有更快速、更安全的访问方式呢?

**VPC终端节点(VPC Endpoints)**提供了这种可能性。

VPC终端节点能建立VPC和一些AWS服务之间的高速、私密的“专线”。这个专线叫做PrivateLink,使用了这个技术,你无需再使用Internet网关、NAT网关、VPN或AWS Direct Connect连接就可以访问到一些AWS资源了!

知识点

VPC内的服务(比如EC2)需要访问S3的资源,只需要通过VPC终端节点和更改路由表,就可以通过AWS内网访问到这些服务。在这个情况下,VPC内的服务(EC2)甚至不需要连接任何外网。

**终端节点(Endpoints)**是虚拟设备,它是以能够自动水平扩展、高度冗余、高度可用的VPC组件设计而成,你也不需要为它的带宽限制和故障而有任何担忧。

AWS PrivateLink是专为客户设计用于特定用途的AWS内网,它采用了高度可用并且可扩展的架构(意味着你无需再为PrivateLink的性能和高可用性做任何额外架构设计)。

VPC终端节点有两种类型:接口(Interface)和网关(Gateway)

网关类型支持以下服务(需要记住,创建示例见下):

  • Amazon S3
  • DynamoDB
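下面是用 boto3 为S3创建网关类型VPC终端节点的简单示意(区域、VPC ID和路由表ID均为假设的占位值,服务名需要与所在区域保持一致):

import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# 创建指向S3的网关终端节点,并把相应路由写入私有子网的路由表
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-northeast-1.s3",
    RouteTableIds=["rtb-private-1234"],
)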
Direct Connect线路

AWS Direct Connect线路可以让你通过以太网光纤线路连接你的内部网络与AWS Direct Connect Location,可以打通你的内部网络与AWS的网络,从而拥有高速率、低延迟,安全、可靠的专线网络。

一般来说,我们要搭建一条Direct Connect线路,需要先通过本地的网络服务提供商将我们内部网络接到一个同城市的Direct Connect Location (这个Location可以是Equinix, CoreSite, Digital Realty的数据中心,全球有几十个这样的地理位置)上。然后需要向AWS申请Cross Connect,将服务提供商的路由器直接连接到同一个机房不同机柜的AWS设备上。

通过这样的连接,我们可以端到端地利用专线的稳定性和高吞吐量访问我们位于AWS内的所有资源。

Direct Connect的特点

  • AWS提供的Direct Connect的带宽是1Gbps或者是10Gbps
  • 少于1Gbps速率的Direct Connect线路可以向AWS Direct Connect合作伙伴申请,可以申请50Mbps到500Mbps的线路
  • Direct Connect的数据包使用802.1Q协议进行封装(Q-in-Q tagging)

VPN连接和Direct Connect的区别

  • VPN连接可以在数分钟之内就搭建成功。如果有紧急的业务需求,可以使用VPN连接。VPN连接是基于互联网线路的,因此带宽不高,稳定性也不好,但价格便宜
  • AWS Direct Connect使用的是专线,你的数据不会跑在互联网上,是私有的、安全的网络

ELB

Elastic Load Balancing 在多个目标(如 Amazon EC2 实例、容器、IP 地址和 Lambda 函数)之间自动分配传入的应用程序流量。它可以在单个可用区内处理不断变化的应用程序流量负载,也可以跨多个可用区处理此类负载。Elastic Load Balancing 提供三种负载均衡器,它们均能实现高可用性、自动扩展和可靠的安全性,因此能让您的应用程序获得容错能力。

  • Elastic Load Balancing 使您的应用程序能够随客户需求的增长而扩展,让您高枕无忧。当任何 EC2 实例的延迟超过预先配置的阈值时,Elastic Load Balancing 能够为 Amazon EC2 实例触发 Auto Scaling。有了这种能力,您的应用程序就可以随时准备好为下一个客户请求提供服务。
  • ELB具有弹性,能自动对自身进行性能的提升,即可以理解为ELB能处理无穷无尽的数据请求
    • 但ELB的弹性不是立马生效的,如果应用程序在某个时间点有爆发性的流量发生(比方说淘宝双11),那么ELB是不会马上进行扩容的,扩容的过程需要一定的时间(1到7分钟)
    • 如果有可预料的爆发性流量要发生(或者需要进行压力测试),那么可以联系AWS技术支持,告诉AWS流量预计发生的开始和结束时间、预计的每秒请求数、总请求数。AWS可以对该ELB进行**预热(pre-warm)**从而提前达到能处理这些流量的性能大小
  • 借助 Elastic Load Balancing 中增强的容器支持,您现在可以在同一个 Amazon EC2 实例上的多个端口之间进行负载均衡。
  • Elastic Load Balancing 可以在多个目标(Amazon EC2 实例、容器、IP 地址和 Lambda 函数)和可用区之间自动均衡流量,同时确保只有正常目标收到流量,从而为应用程序提供容错能力。如果一个可用区内的所有目标均不正常,Elastic Load Balancing 将把流量路由至另一个可用区内的正常目标。当目标恢复正常状态后,负载均衡将自动恢复至原目标。
  • Elastic Load Balancing 使用户能够在 VPC 中轻松创建面向 Internet 的入口点,或在 VPC 内应用程序的各层之间路由请求流量。您可以向负载均衡器分配安全组,以控制向一系列授权来源开放哪些端口。由于 Elastic Load Balancing 与 VPC 集成在一起,所有现有的网络访问控制列表 (ACL) 和路由表均将继续提供额外的网络控制功能。
  • Elastic Load Balancing 让您能够使用同一负载均衡器在 AWS 资源和本地资源之间进行负载均衡。例如,如果您需要在 AWS 资源和本地资源之间分配应用程序流量,则可以将所有资源注册到同一个目标组内,并将该目标组与负载均衡器关联起来。或者,您可以使用两个负载均衡器(其中一个用于 AWS 资源,另一个用于本地资源)在 AWS 资源和本地资源之间进行基于 DNS 的加权负载均衡。
  • ELB只在一个特定的AWS区域中工作,不能跨区域(Region),但可以跨可用区(AZs)
  • ELB本身就是一个绝对高可用,永不宕机的分布式软件,用户不需要考虑ELB的高可用性,不需要为其设计高可用的架构设计。而且ELB不是单点故障
  • 基于ELB在所处应用架构中的位置不同,可以分两个类型ELB
    • Internet Load Balancer – 是面向公网的负载均衡器,能接受来自Internet用户的连接请求
    • Internal Load Balancer – 是面向AWS私有网段的负载均衡器,一般仅服务于AWS内部的资源。典型的使用案例是放置在前端服务器和后端服务器之间
Application Load Balancer

Application Load Balancer 最适合 HTTP 和 HTTPS 流量的负载均衡,面向交付包括微服务和容器在内的现代应用程序架构,提供高级请求路由功能。Application Load Balancer 运行于单独的请求级别(第 7 层),可根据请求的内容将流量路由至 Amazon Virtual Private Cloud (Amazon VPC) 内的不同目标。

ALB(应用程序负载均衡器)是AWS提供的一种负载均衡服务,用于分发Web服务产生的负载。近年来,随着社交网络的普及,对Web应用程序的访问量可能会突然增加,突发的流量高峰会拖慢Web服务的响应速度并导致错误。ALB这类负载均衡器可以把负载分配到多个目标上,从而提高Web服务的稳定性和可用性。通过使用ALB的众多功能,你可以持续、高效地运营Web服务。

ALB特别有用的几个方面:支持高可用性;证书管理和用户认证等安全功能;灵活应对各种级别的应用程序负载;对应用程序进行详细的监控和审计。

Network Load Balancer

若要对需要极高性能的传输控制协议 (TCP)、用户数据报协议 (UDP) 和传输层安全性 (TLS) 协议流量进行负载均衡,最适合使用网络负载均衡器。网络负载均衡器运行于连接级别(第 4 层),可将流量路由至 Amazon Virtual Private Cloud (Amazon VPC) 内的不同目标,每秒能够处理数百万请求,同时能保持超低延迟。网络负载均衡器还针对处理突发和不稳定的流量模式进行了优化。

网络负载均衡器(NLB)的特点是最适合在需要高性能的环境中进行负载分配。它每秒可以处理数百万个请求,通信延迟很低,并且针对流量模式突发或剧烈变化的场景进行了优化。具有这些特性的NLB是ELB家族中不可或缺的一员。

Classic Load Balancer

Classic Load Balancer 同时运行于请求级别和连接级别,可在多个 Amazon EC2 实例之间提供基本的负载均衡。Classic Load Balancer 适用于在 EC2-Classic 网络内构建的应用程序。

Auto Scaling

亚马逊弹性伸缩(Auto Scaling)自动地增加/减少EC2实例的数量,从而让你的应用程序一直能保持可用的状态。

你可以预定义Auto Scaling,使其在需求高峰期自动增加EC2实例,而在需求低谷自动减少EC2实例。这样不仅能让你的应用程序一直保持健康的状态,而且也节省了你为EC2实例所付出的费用。

Auto Scaling 适用于那些需求稳定的应用程序,同时也适用于在每小时、每天、甚至每周都有需求变化的应用程序。
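针对这类按固定时间规律变化的需求,可以使用计划伸缩操作(Scheduled Action)。下面是一个基于 boto3 的简单示意(伸缩组名称、时间均为假设值,Recurrence 使用UTC时区的cron表达式):

import boto3

autoscaling = boto3.client("autoscaling")

# 工作日早高峰前把所需容量提前调到20,避免临时扩容不及时
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-out-before-office-opens",
    Recurrence="50 0 * * MON-FRI",   # UTC 00:50,约等于北京时间08:50
    DesiredCapacity=20,
)

# 晚上再把所需容量缩回2个实例,降低夜间成本
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-in-overnight",
    Recurrence="0 11 * * MON-FRI",   # UTC 11:00,约等于北京时间19:00
    DesiredCapacity=2,
)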

  • Auto Scaling能保证你一直拥有一定数量的EC2实例来分担应用程序的负载
  • Auto Scaling能带来更高的容错性、更好的可用性和更高的性价比
  • 你可以控制伸缩的策略来决定在什么时候终止和创建EC2实例,以处理动态变化的需求
  • 默认情况下,Auto Scaling能控制每一个可用区内所运行的实例数量尽量平均
    • 为了达到这个目标,Auto Scaling在需要启动新实例的时候,会选择一个目前拥有运行实例最少的可用区
弹性伸缩组(Auto Scaling Group)
  • 弹性伸缩组(ASG)是弹性伸缩的核心,它包含了多个拥有类似配置/类型的EC2实例,这些实例被逻辑上认为是一样的
  • 弹性伸缩组需要的几个参数:
    • 启动配置(Launch Configuration):它决定了EC2使用什么模板,模板内容包括了镜像文件(AMI),实例类型、密钥对、安全组和挂载的存储设备
    • 最小和最大的容量:决定了在弹性伸缩的情况下,EC2实例数量的浮动范围
    • 所需容量(Desired Capacity):决定了这个弹性伸缩组要保持正常运作所需要的基本EC2实例数量;如果没有填写,则默认等同于最小容量
    • 可用区和子网:定义EC2实例启动时候所在的可用区和子网信息
    • 参数和健康检查:参数定义了何时启动新实例,何时终止旧实例;健康检查决定了实例的健康状态。
  • 如果一个EC2实例的健康状态变成“不健康”,那么ASG会终止这个EC2实例,并且自动启动一个新的EC2实例
  • 弹性伸缩组(ASG)只能在某一个AWS区域内运行,不能跨越多个区域
  • 如果启动配置(Launch Configuration)有更新,那么之后启动的新EC2实例会使用新的启动配置,而旧的EC2实例不受影响
  • 从AWS管理平台你可以直接删除一个弹性伸缩组(ASG);从AWS CLI你只能先将最小容量和所需容量两个参数设置为0,才能删除这个弹性伸缩组。
Placement Group(EC2置放群组)

EC2 置放群组(Placement Group)逻辑性地把一些实例放置在一个组里面,在这个组里面的实例能享受低延迟高网络吞吐的网络。

  • EC2 Placement Group分为
    • 集群置放群组(Cluster Placement Group)即传统的置放群组,所有的实例需要在同一个可用区内
    • 分布置放群组(Spread Placement Group)是将实例分布到不同的底层硬件,可以在不同的可用区内。你最多可以在每一个置放群组的每一个可用区内创建7个实例
    • 分区置放群组(Partition Placement Group)确保了置放群组中的每个分区具有自己的一组机架,每个机架具有自己的网络和电源
  • Placement Group提供了低延迟,高速率的网络,可提供高达10 Gbps的速度
  • EC2 Placement Group的命名需要在你的AWS账户内唯一,不能有命名重复
  • 只有特定的EC2实例类型可以放在配置Placement Group内(某些计算优化型、GPU、内存优化型和存储优化型的实例)
  • AWS建议在一个Placement Group内的所有EC2实例是一模一样的,否则会有短板效应
  • 不可以合并多个EC2 Placement Group
  • 不可以将一个正在运行的EC2实例放到一个EC2 Placement Group中;只能为这个EC2实例创建一个AMI,然后基于AMI创建一个新的实例并且加入到Placement Group内
  • Placement Group可以跨越peered VPC,但要保证在同一个可用区内
  • 如果在Placement Group中创建实例的时候出现“capacity error”的错误,可以停止再启动组中的所有实例,再重新创建刚才的实例
    • 停止再启动组中的所有实例可以改变这些实例所在的底层物理设备,从而带来更多的性能和空间启动新的实例
  • Placement Group的创建会告诉AWS将组里的实例安置在物理上接近的AWS设备内

S3

Amazon S3 提供一系列适合不同使用案例的存储类。这包括 S3 标准(适用于频繁访问的数据的通用存储);S3 智能分层(适用于具有未知或变化的访问模式的数据);**S3 标准 - 不频繁访问(S3 标准 - IA)**和 **S3 单区 - 不频繁访问(S3 单区 - IA)**(适用于长期存在、但访问不太频繁的数据);以及 Amazon S3 Glacier (S3 Glacier) 和 Amazon S3 Glacier 深度存档(S3 Glacier 深度存档)(适用于长期存档和数字保留)。Amazon S3 还提供了在整个数据生命周期内管理数据的功能。设置 S3 生命周期策略之后,无需更改您的应用程序,您的数据将自动传输到其他存储类。

  • 启用了**版本控制(Version Control)**你可以恢复S3内的文件到之前的版本
  • S3可以开启生命周期管理,对文件在不同的生命周期阶段执行不同的操作。比如,文件在创建30天后迁移到便宜的S3等级(S3-IA),再经过30天进行归档(迁移到Glacier),再过30天就进行删除(参见下方的生命周期配置示例)
  • S3是对象存储,可以在S3上存储各种类型的文件,它不是块存储(EBS是块存储)
  • S3存储桶的名字需要全球唯一,不能与任何区域的任何人拥有的存储桶重名
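下面用 boto3 配置上文提到的生命周期例子(30天转入S3-IA、60天归档到Glacier、90天删除),存储桶名为假设的占位值,天数从对象创建时间起累计计算:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tiering-and-expiry",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # 作用于整个存储桶
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 90},
        }]
    },
)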

通用

Amazon S3 标准(S3 标准)

针对频繁访问的数据,S3 标准提供较高的持久性、可用性和性能对象存储。由于 S3 标准可交付低延迟的高吞吐量,因此适合广泛使用案例,包括云应用程序、动态网站、内容分配、移动和游戏应用程序以及大数据分析。S3 存储类可在对象级别进行配置,单一存储桶可包含跨 S3 标准、S3 智能分层、S3 标准 - IA 和 S3 单区 - IA 存储的对象。您还可使用 S3 生命周期策略在存储类之间自动转移对象,而无需更改任何应用程序。

主要特征:

  • 较低的延迟和较高的吞吐量性能
  • 可跨多个可用区实现 99.999999999% 的对象的持久性
  • 针对影响整个可用区的事件具有弹性
  • 经过设计,可在指定年度内实现 99.99% 的可用性
  • Amazon S3 服务等级协议提供支持,实现可用性
  • 支持传输中数据 SSL 和静态数据加密
  • 用于自动将对象迁移到其他 S3 存储类的 S3 生命周期管理

未知或变化的访问

Amazon S3 智能分层(S3 智能分层)

S3 智能分层存储类设计为通过自动将数据移至最经济高效的访问层,而不影响性能或运行开销来优化成本。它的工作原理是:将对象存储在两个访问层中:一个层已针对频繁访问而优化,另一个成本较低的层已针对不频繁访问而优化。对于每对象的小额月度监控和自动化费用,Amazon S3 监控 S3 智能分层中对象的访问模式,然后将连续 30 天未访问的对象移至不频繁访问层。如果访问不频繁访问层中的对象,则对象将自动移回频繁访问层。在使用 S3 智能分层存储类时不收取检索费用,并且在访问层之间移动对象不收取额外的分层费用。对于访问模式未知或不可预测的长期存在的数据,它是理想的存储类。S3 存储类可在对象级别进行配置,单一存储桶可包含存储在 S3 标准、S3 智能分层、S3 标准 - IA 和 S3 单区 - IA 中的对象。您可直接将对象上传到 S3 智能分层,或使用 S3 生命周期策略将对象从 S3 标准和 S3 标准 - IA 传输到 S3 智能分层。您还可将 S3 智能分层中的对象存档至 S3 Glacier。

主要特征:

  • 和 S3 标准相同的较低延迟和较高吞吐量性能
  • 小额月度监控和自动分层费用
  • 基于变化的访问模式在两种访问层之间自动移动对象
  • 可跨多个可用区实现 99.999999999% 的对象的持久性
  • 针对影响整个可用区的事件具有弹性
  • 经过设计,可在指定年度内实现 99.9% 的可用性
  • Amazon S3 服务等级协议提供支持,实现可用性
  • 支持传输中数据 SSL 和静态数据加密
  • 用于自动将对象迁移到其他 S3 存储类的 S3 生命周期管理

不频繁访问

Amazon S3 标准 - 不频繁访问(S3 Standard – IA)

S3 标准 - IA 适用于不常访问、但在需要时要求快速访问的数据。S3 标准 – IA 提供较高的持久性、较高的吞吐量以及较低的 S3 标准延迟,并且每 GB 的存储价格和检索费用都较低。成本较低且性能出色使得 S3 标准 - IA 成为长期存储和备份的理想选择,也非常适用于灾难恢复文件的数据存储。S3 存储类可在对象级别进行配置,单一存储桶可包含跨 S3 标准、S3 智能分层、S3 标准 - IA 和 S3 单区 - IA 存储的对象。您还可使用 S3 生命周期策略在存储类之间自动转移对象,而无需更改任何应用程序。

主要特征:

  • 和 S3 标准相同的较低延迟和较高吞吐量性能
  • 可跨多个可用区实现 99.999999999% 的对象的持久性
  • 针对影响整个可用区的事件具有弹性
  • 数据在整个可用区遭到破坏时具有弹性
  • 经过设计,可在指定年度内实现 99.9% 的可用性
  • Amazon S3 服务等级协议提供支持,实现可用性
  • 支持传输中数据 SSL 和静态数据加密
  • 用于自动将对象迁移到其他 S3 存储类的 S3 生命周期管理
Amazon S3 单区 - 不频繁访问(S3 单区 - IA)

S3 单区 - IA 适用于不常访问、但在需要时要求快速访问的数据。其他 S3 存储类将数据存储在至少三个可用区 (AZ) 中,而 S3 单区 - IA 将数据存储在单个 AZ 中并且成本较 S3 标准 - IA 低 20%。S3 单区 - IA 非常适合希望针对不频繁访问的数据使用较低费用选项且不需要 S3 标准或 S3 标准 - IA 的可用性和弹性的客户。对于存储本地数据或可轻松重新创建的数据的辅助备份副本,它是一个理想的选择。对于使用 S3 跨区域复制从另一 AWS 账户复制的数据,您还可使用它作为其经济高效的存储。

S3 单区 - IA 提供相同的持久性†、较高的吞吐量以及较低的 S3 标准延迟,并且每 GB 的存储价格和检索费用都较低。S3 存储类可在对象级别进行配置,单一存储桶可包含跨 S3 标准、S3 智能分层、S3 标准 - IA 和 S3 单区 - IA 存储的对象。您还可使用 S3 生命周期策略在存储类之间自动转移对象,而无需更改任何应用程序。

主要特征:

  • 和 S3 标准相同的较低延迟和较高吞吐量性能
  • 经过设计,可在单个可用区中实现对象的 99.999999999% 的持久性†
  • 可在指定年度内实现 99.5% 的可用性
  • Amazon S3 服务等级协议提供支持,实现可用性
  • 支持传输中数据 SSL 和静态数据加密
  • 用于自动将对象迁移到其他 S3 存储类的 S3 生命周期管理

† 由于 S3 单区 – IA 将数据存储在单个 AWS 可用区中,存储在这个存储类中的数据将在可用区销毁时丢失。

存档

Amazon S3 Glacier (S3 Glacier)

S3 Glacier 是安全、持久且成本低的存储类,可用于数据存档。您可以放心存储任意大小的数据 – 成本与本地解决方案相当,甚至更低。为了保持成本低廉,同时满足各种需求,S3 Glacier 提供三种检索选项,各自的检索时间从数分钟到数小时不等。您可直接将对象上传到 S3 Glacier,或使用 S3 生命周期策略在适用于活动数据的任何 S3 存储类(S3 标准、S3 智能分层、S3 标准 - IA 和 S3 单区 - IA)与 S3 Glacier 之间传输数据。有关更多信息,请访问 Amazon S3 Glacier 页面 »

主要特征:

  • 可跨多个可用区实现 99.999999999% 的对象的持久性
  • 数据在整个可用区遭到破坏时具有弹性
  • 支持传输中数据 SSL 和静态数据加密
  • 成本低,非常适合长期存档
  • 检索时间可配置,从数分钟到数小时不等
  • 用于直接上传到 S3 Glacier 的 S3 PUT API,以及用于对象自动迁移的 S3 生命周期管理
Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive)

S3 Glacier Deep Archive 是 Amazon S3 成本最低的存储类,支持每年可能访问一两次的数据的长期保留和数字预留。它是为客户设计的 – 特别是那些监管严格的行业,如金融服务、医疗保健和公共部门 – 为了满足监管合规要求,将数据集保留 7-10 年或更长时间。S3 Glacier Deep Archive 还可用于备份和灾难恢复使用案例,是成本效益高、易于管理的磁带系统替代,无论磁带系统是本地库还是非本地服务都是如此。S3 Glacier Deep Archive 是 Amazon S3 Glacier 的补充,后者适合存档,其中会定期检索数据并且每隔几分钟可能需要一些数据。存储在 S3 Glacier Deep Archive 中的所有对象都将接受复制并存储在至少三个地理分散的可用区中,受 99.999999999% 的持久性保护,并且可在 12 小时内恢复。

主要特征:

  • 可跨多个可用区实现 99.999999999% 的对象的持久性
  • 为长期保留的数据(保留 7-10 年)设计的成本最低的存储类
  • 磁带库的完美替代
  • 检索时间为 12 小时以内
  • 用于直接上传到 S3 Glacier Deep Archive 的 S3 PUT API,以及用于对象自动迁移的 S3 生命周期管理

跨 S3 存储类的性能

| | S3 标准 | S3 智能分层* | S3 标准 – IA | S3 单区 - IA† | S3 Glacier | S3 Glacier Deep Archive |
| --- | --- | --- | --- | --- | --- | --- |
| 具有持久性设计 | 99.999999999% (11 个 9) | 99.999999999% (11 个 9) | 99.999999999% (11 个 9) | 99.999999999% (11 个 9) | 99.999999999% (11 个 9) | 99.999999999% (11 个 9) |
| 设计有可用性 | 99.99% | 99.9% | 99.9% | 99.5% | 99.99% | 99.99% |
| 可用性 SLA | 99.9% | 99% | 99% | 99% | 99.9% | 99.9% |
| 可用区 | ≥3 | ≥3 | ≥3 | 1 | ≥3 | ≥3 |
| 每个对象的最低容量费用 | 不适用 | 不适用 | 128KB | 128KB | 40KB | 40KB |
| 最低存储持续时间费用 | 不适用 | 30 天 | 30 天 | 30 天 | 90 天 | 180 天 |
| 检索费用 | 不适用 | 不适用 | 每检索 1GB | 每检索 1GB | 每检索 1GB | 每检索 1GB |
| 首字节延迟 | 毫秒 | 毫秒 | 毫秒 | 毫秒 | 选择分钟或小时 | 选择小时 |
| 存储类型 | 对象 | 对象 | 对象 | 对象 | 对象 | 对象 |
| 生命周期转换 | 是 | 是 | 是 | 是 | 是 | 是 |

† 由于 S3 单区 – IA 将数据存储在单个 AWS 可用区中,存储在这个存储类中的数据将在可用区销毁时丢失。

* S3 智能分层收取小额分层费用,对自动分层有 128KB 的最小合格对象大小限制。您可以存储更小的对象,但始终将按频繁访问层费率计费。有关更多信息,请参阅 Amazon S3 定价

S3 传输加速(Transfer Acceleration)

Amazon S3 Transfer Acceleration 可在客户与 S3 存储桶之间实现快速、轻松、安全的远距离文件传输。Transfer Acceleration 利用 Amazon CloudFront 的全球分布式边缘站点。当数据到达某个边缘站点时,会被经过优化的网络路径路由至 Amazon S3。

一般来说,我们在上传文件到S3存储桶的时候,是直接通过Internet将数据传输到位于某一个区域的S3存储桶。但如果我们的存储桶位于一个离用户比较远的区域(比如说S3存储桶位于东京区域,而我们的用户位于中国),那么基于Internet的传输速度就会非常慢。

这个时候使用S3传输加速 (Amazon S3 Transfer Acceleration),可以利用AWS CloudFront CDN网络的**边缘节点(Edge Locations)**加速传输的过程。我们可以将数据上传到离我们最近的边缘节点(比如说香港),然后再通过AWS内部网络(更高速,更稳定)传输到东京区域的S3存储桶。

在以下情形下你可能就需要考虑使用S3传输加速(启用方式可参考本节末尾的示例):

  • 您位于全球各地的客户需要上传到集中式存储桶
  • 您定期跨大洲传输数 GB 至数 TB 数据
  • 您在上传到 Amazon S3 时未充分利用 Internet 上的可用带宽
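下面是开启并使用S3传输加速的简单示意(存储桶名、文件名均为假设的占位值;注意桶名中含“.”的存储桶不能使用传输加速):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# 1. 为存储桶开启传输加速
s3.put_bucket_accelerate_configuration(
    Bucket="example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# 2. 客户端改用加速终端节点(*.s3-accelerate.amazonaws.com)上传文件
s3_accel = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
s3_accel.upload_file("local-file.zip", "example-bucket", "uploads/local-file.zip")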
使用S3托管一个静态网站

您可以在 Amazon Simple Storage Service (Amazon S3) 上托管静态网站。这个静态网站可以包含HTML,图片和视频等静态文件,也可以包含客户端脚本。

但是这个静态网站不支持服务端处理的脚本,比如PHP、JSP或者ASP.NET。

假设S3存储桶的名字是itilv4.cn,并且位于ap-northeast-2区域,那么访问其中的index.html文件的路径是

  • 普通的S3 URL:https://s3.ap-northeast-2.amazonaws.com/itilv4.cn/index.html
  • S3托管的静态网站URL:http://itilv4.cn.s3-website-ap-northeast-2.amazonaws.com

请留意其中的区别,这个内容在SAA考试中经常会涉及。

S3版本控制
  • 启用版本控制后S3会保存一个文件的所有版本,包括所有历史写入的版本,即使删除了的文件也会保存

  • 版本控制是很好的备份工具

  • 启用了版本控制功能之后,要恢复一个文件,只需要删除**删除标记(Delete Marker)**即可

  • 版本控制默认不开启,但一旦启用,就不能关闭,只能暂停

  • 版本控制一经开启,对这个S3桶内的所有对象都会生效(启用方法见下方示例)

  • 版本控制可以和生命周期规则集成使用

  • 使用MFA (Multi-Factor Authentication,多重认证) Delete可以在删除文件的时候增加多一层安全保障,防止用户进行误删除操作
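下面是用 boto3 为存储桶开启版本控制并查看某个对象历史版本的简单示意(桶名、对象名均为假设的占位值):

import boto3

s3 = boto3.client("s3")

# 开启版本控制;一旦开启只能暂停(Suspended),不能关闭
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# 列出某个对象的所有历史版本,便于恢复到之前的版本
versions = s3.list_object_versions(Bucket="example-bucket", Prefix="report.xlsx")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"], v["LastModified"])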

S3跨区域同步
  • 启用S3跨区域复制,首先需要在源S3存储桶和目标S3存储桶上都开启S3版本控制功能
  • 源S3存储桶和目标S3存储桶不能位于同一个区域
  • 在开启跨区域复制前的已存在的文件不会被自动同步
    • 开启跨区域复制之后,新增加的文件会被自动同步
  • 跨区域复制不能叠加,意味着数据不可以从A同步到B,然后再同步到C
  • 删除文件,文件的某一个版本或者删除删除标记(Delete Marker)是不会被同步的
  • 复制时间控制将在 15 分钟内复制 99.99% 的新对象,将会收取额外的每 GB 数据传输费和 CloudWatch 指标费。
S3的安全性
  • 默认情况下,所有新创建的S3存储桶都是私有的,只有存储桶的创建者/拥有者才能访问
  • 你可以通过**桶策略(Bucket Policy)和访问控制列表(Access Control Lists)**两种方法来控制S3存储桶的安全性
  • S3存储桶的访问日志可以存到另一个S3存储桶里面,方便对日志进行查看

EBS

  • 亚马逊EBS卷提供了高可用、可靠、持续性的块存储,EBS可以依附到一个正在运行的EC2实例上
  • 如果你的EC2实例需要使用数据库或者文件系统,那么建议使用EBS作为首选的存储设备
  • EBS卷的存活可以脱离EC2实例的存活状态。也就是说在终止一个实例的时候,你可以选择保留该实例所绑定的EBS卷
  • EBS卷可以依附到**同一个可用区(AZ)**内的任何实例上
  • EBS卷可以被加密,如果进行了加密那么它存有的所有已有数据,传输的数据,以及制造的镜像都会被加密
  • EBS卷可以通过快照(Snapshot)来进行(增量)备份,这个快照会保存在S3 (Simple Storage Service)上
  • 你可以使用任何快照来创建一个基于该快照的EBS卷,并且随时将这个EBS卷应用到该区域的任何实例上
  • EBS卷创建的时候已经固定了可用区,并且只能给该可用区的实例使用。如果需要在其他可用区使用该EBS,那么可以创建快照,并且使用该快照创建一个在其他可用区的新的EBS卷
  • 快照还可以复制到其他的AWS区域


  • EBS的不同类型,需要了解不同类型的EBS主要的使用场景

    • 通用型SSD – GP2 (高达10,000 IOPS),适用于启动盘,低延迟的应用程序等
    • 预配置型SSD – IO1 (超过10,000 IOPS),适用于IO密集型的数据库
    • 吞吐量优化型HDD -ST1,适用于数据仓库,日志处理
    • HDD Cold – SC1 – 适合较少使用的冷数据
    • HDD, Magnetic
  • 不能将EBS挂载到多个EC2实例上,一个EBS只能挂载到1个EC2实例上。

    • 如果有共享数据盘的需求,请使用EFS (Elastic File System)
  • 根EBS卷默认是不能进行加密的,但可以使用第三方的加密工具(例如BitLocker)对其进行加密

    • 除了根磁盘外的其他卷是可以加密的

EFS (Elastic File System)

Amazon EFS提供了可扩展的文件系统,可以用在EC2实例上。EFS使用起来非常简单,你很容易地就可以在EFS上创建和配置一个文件系统。

它是具有弹性的,会根据文件总量的大小自动伸缩,即它能永远满足你的需求。你可以在多个EC2实例上使用同样的一个EFS文件系统,以达到共享通用数据的目的。

Amazon EFS可以简单地理解为是共享盘或NAS存储。

EFS的一些特性:

  • 支持Network File System version 4 (NFSv4)协议

  • EFS是基于网络的文件存储(File Storage),不同于对象存储(例如S3)和块存储(例如EBS)

  • 使用EFS,你只需要为你使用的存储空间付费,没有预支费用

  • 可以有高达PB级别的存储

  • 同一时间能支持上千个NFS连接

  • EFS的数据会存储在一个AWS区域的多个可用区内

  • 写后读一致性(Read After Write Consistency)

  • Amazon EFS在Windows实例上不受支持

AWS EBS, S3和EFS的区别

AWS S3对于静态页面的托管、多媒体分发、版本管理、大数据分析、数据存档来说都非常有用。S3可以和AWS CloudFront结合使用而达到更快的上传和下载速度。

AWS EBS是可以用来做数据库或托管应用程序的持续性文件系统,EBS具有很高的IO读写速度并且即插即用。

相比前面两种存储,AWS EFS是比较新的一项服务。它提供了可以在多个EC2实例之间共享的网络文件系统,功能类似于NAS设备。可以用EFS来处理大数据分析、多媒体处理和内容管理。

AMI Snapshot

Amazon Machine Image (AMI) 是亚马逊AWS提供的系统镜像,它包含:

  • 由实例的操作系统、应用程序和应用程序相关的配置组成的模板
  • 在实例启动时需要附加到实例的卷的信息(比方说定义了使用8 GB的General Purpose SSD卷)
  • AMI是区域化的,只能使用本区域的AMI来创建实例;但你可以将AMI从一个区域复制到另一个区域

AMI的生命周期,你可以创建并注册一个AMI,并且可以使用这个AMI来创建一个EC2实例。同时你也可以将这个AMI复制到同一个AWS区域或者不同的AWS区域。你同样也可以注销这个AMI镜像。

你可以通过创建一个关于EBS的快照将Amazon EBS卷上的数据备份起来,方便之后基于该快照创建新的EBS卷。快照还有如下特点:

  • 备份的快照将会保存在**亚马逊S3 (Simple Storage Service)**上
  • EBS快照属于增量备份,即第二次之后的快照只会更新变化了的那一部分数据
  • 你可以在EC2实例运行的状态下进行EBS的快照操作,但会给EC2的系统带来一定延迟(CPU,内存利用率会变高)
  • 最佳实践是将EC2实例停止,然后将EBS从EC2上卸载下来,进行快照操作
  • 你可以基于EBS快照在同一个AWS区域创建新的EBS卷,这个卷可以是任何EBS类型,任何支持的大小
  • 你也可以将快照复制到其他AWS区域
  • 加密的EBS卷在创建快照后,该快照也会被自动加密
  • 通过加密快照创建的EBS也是自动加密的
  • 在复制未加密的快照时,你可以在复制过程中对其加密
  • 你可以分享快照给其他账户或AWS市场,但仅限于这个快照是没有进行过加密的

有几个比较常见的场景会需要你使用AMI和EBS快照的功能。

如果你想将一个EC2实例从一个AWS区域迁移到另一个AWS区域,你需要:

  1. 创建基于这个EC2实例的AMI
  2. 将这个AMI进行复制,复制到另一个AWS区域
  3. 通过这个AMI重新创建一个EC2实例
  4. 充当数据盘的EBS也需要做EBS快照
  5. 将这个EBS快照进行复制,复制到另一个AWS区域
  6. 通过这个EBS快照创建EBS卷,并且依附到EC2实例上去

如果你想复制一个EBS卷到该AWS区域的不同可用区,你可以:

  1. 创建一个EBS快照
  2. 通过EBS快照创建一个新的EBS卷,并且定义大小、卷类型、是否加密等属性
快照与AMI的区别

把EBS上的数据拷贝到S3上进行保存的称为快照(Snapshot)。在快照中并不包含用于管理实例的信息。既可以使用快照生成一个EBS卷,也可以使用快照生成AMI。

AMI就是数据信息加上实例管理信息的一个文件。启动一个新实例时,必须指定一个AMI,而不能是快照。

AWS Storage Gateway

是一项混合云存储服务,可让您从本地访问几乎不受限制的云存储。客户使用 Storage Gateway 简化存储管理,降低关键混合云存储用例的成本。其中包括将备份和存档移动到云、使用云存储支持的本地文件共享,以及为本地应用程序提供对 AWS 中数据的低延迟访问。

为了支持这些用例,Storage Gateway 提供了三种不同类型的网关:文件网关磁带网关卷网关,这些网关将本地应用程序无缝连接到云存储,从而在本地缓冲数据以进行低延迟访问。您的应用程序可以使用 NFS、SMB、iSCSI 等标准存储协议通过虚拟机或网关硬件设备连接到该服务。网关会连接到 Amazon S3、Amazon S3 Glacier、Amazon S3 Glacier Deep Archive、Amazon EBS 和 AWS Backup 等 AWS 存储服务,这些服务为 AWS 中的文件、卷、快照和虚拟磁带提供存储。该服务包括一种高度优化且高效的数据传输机制,拥有带宽管理、自动网络恢复能力。

AWS Storage Gateway 使用哪种加密方式保护数据

答:在任何类型的网关设备与 AWS 存储之间传输的所有数据均已使用 SSL 进行了加密。默认情况下,AWS Storage Gateway 存储在 S3 中的所有数据均已使用 Amazon S3 托管加密密钥 (SSE-S3) 在服务器端进行了加密。此外,您还可以选择配置不同的网关类型,以使用 AWS Key Management Service (KMS) 通过 Storage Gateway API 加密存储的数据。请参阅下文,按文件网关磁带网关卷网关了解有关 KMS 支持的具体信息。

  • 文件网关(File Gateway):通过 NFS 连接直接访问存储在 Amazon S3 或者 Amazon Glacier上的文件,并且本地进行缓存

  • Volume Gateway

    :使用 iSCSI 作为本地磁盘连接到本地服务器上,让本地服务器可以访问到 Amazon S3 内的文件,其中,Volume Gateway 又分为以下两种

    • Stored Volumes:所有的数据都将保存到本地,但是会异步地将数据备份到AWS S3上
    • Cached Volumes:所有的数据都会保存到S3,但是会将最经常访问的数据缓存到本地
  • Tape Gateway:用来取代传统的磁带备份,通过 Tape Gateway 可以使用NetBackup,Backup Exec或Veeam 等备份软件将文件备份到 Amazon S3 或者 Amazon Glacier 上

  • AWS Storage Gateway 支持三种存储接口:文件、卷和磁带

AWS DataSync

使您可以轻松快捷地在本地存储与Amazon S3、Amazon Elastic File System(Amazon EFS)或适用于Windows File Server的Amazon FSx之间移动大量在线数据。与数据传输相关的手动工作可能非常耗时,并且会给IT运维带来负担。DataSync可自动处理许多此类任务,包括编写复制作业脚本、计划和监控传输、数据校验以及优化网络使用。DataSync软件代理可以连接到网络文件系统(NFS)、服务器消息块(SMB)存储以及您自行管理的对象存储,因此您无需修改应用程序。DataSync可以通过Internet或AWS Direct Connect链路传输数百TB的数据和数百万个文件,其速度可达开源工具的10倍。通过DataSync,您可以将活动数据集和归档迁移到AWS,将数据传输到云以进行及时分析和处理,以及将数据复制到AWS以实现业务连续性。DataSync入门很容易:部署DataSync代理,连接到文件系统,选择一个AWS存储资源,然后开始数据传输。您只需为传输的数据付费。

Snowball

Snowball 是一种 PB 级数据传输解决方案,旨在使用安全设备将大量数据传入和传出亚马逊 AWS。

很多公司在上云的过程中会需要把数据从传统的数据中心迁移到AWS的数据中心去,但是对于拥有海量数据的公司来说,这会是一个不小的挑战。即使使用 AWS DirectConnect (DX) 的 1Gbps 专线来传输数据,对于 PB 级别的数据来说也需要花费很长的一段时间。

在过去,AWS 提供了一种数据导入/导出服务,叫做 AWS Import/Export Disk。基本上是 AWS 会寄一些磁盘给到客户,客户手动将数据导入到磁盘,然后将磁盘寄回给 AWS。

但是磁盘的容量有限,并且不容易管理,也容易损坏数据,因此现在这种方式已经不用了。取而代之的是更安全,更高性价比的 AWS Snowball 服务。

如果您有大量需要迁移至 AWS 的数据,Snowball 通常比通过 Internet 传输数据更快并且性价比更高。如果您要定期接收或需要与客户、消费者或业务伙伴共享大量数据,请使用 Snowball 设备。Snowball 设备可以直接从 AWS 运送至客户或消费者所在位置。如果您需要更加安全快速地将数 TB 到数 PB 数据传输到 AWS,那么 Snowball 是数据传输的一个有效选择。如果您不希望对网络基础设施进行昂贵的升级、您经常遇到大量数据积压的情况、您在物理隔绝环境下工作,或者您所在的区域没有高速 Internet 连接或这种高速连接的成本过高,Snowball 同样是正确的选择。

Snowball 还有如下特性:

  • 可以在本地数据中心和 Amazon S3 之间进行数据的导入和导出
  • 支持 50TB 的容量版本以及 80TB 的容量版本,可以同时使用多个 Snowball 并行传输数据。
  • 外设使用了防篡改外壳,支持 AES-256 加密和行业标准的可信平台模块 (TPM)

使用 AWS Snowball,你需要到 AWS 管理控制台申请,AWS 会邮寄一个物理 Snowball 给你,然后你需要通过以太网和客户端软件把数据从本地传输到 Snowball上,最后将 Snowball 邮寄给 AWS 即可。AWS 会负责将 Snowball 内的数据导入到你所需要的 S3 存储桶上。

ECS

Amazon Elastic Container Service (ECS)是一个有高度扩展性的容器管理服务。它可以轻松运行、停止和管理集群上的Docker容器,你可以将容器安装在EC2实例上,或者使用Fargate来启动你的服务和任务。

Amazon ECS可以在一个区域内的多个可用区中创建高可用的应用程序容器,你可以定义集群中运行的Docker镜像和服务。而且你可以充分利用AWS内部的**Amazon ECR (Elastic Container Registry)**或者外部的Registry(比如Docker Hub或自建的Registry)来存储和提取容器镜像。

使用Amazon ECS服务,你不需要再担心如何去运营集群管理、配置管理和基础架构的扩展性。

Amazon ECS还可以带来一致的部署和构建体验、管理和扩展批处理和**提取-转换-加载(ETL)**工作负载以及在微服务模型上构建先进的应用程序架构。

ECS 任务定义(Task Definition)

要在Amazon ECS上运行应用程序,你需要创建任务定义。任务定义是一个JSON格式的文本文件,这个文件定义了构建应用程序的各种参数。这些参数包括了:要使用哪些容器镜像,使用哪种启动类型,打开什么端口,使用什么数据卷等等。

以下是一个简单的任务定义示例,这个示例可以用来创建一个运行NGINX服务器的单个容器。

{
  "family": "webserver",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": 100,
      "cpu": 99
    }
  ],
  "requiresCompatibilities": [ "FARGATE" ],
  "networkMode": "awsvpc",
  "memory": "512",
  "cpu": "256"
}

ECS任务定义有点类似AWS的CloudFormation,只是ECS任务定义是用来创建Docker容器的。

ECS调度( Scheduling)

ECS任务调度负责将任务放置到集群中,你可以定义一个**服务(Service)**来运行和管理一定数量的任务。

服务调度(Service Scheduler)

  • 保证了一定数量的任务持续地运行,如果任务失败了会自动进行重新调度
  • 保证了任务内会注册一个ELB给所有容器

自定义调度(Custom Scheduler)

  • 你可以根据自己的业务需求来创建自己的调度
  • 利用第三方的调度

ECS集群(Cluster)

当你使用Amazon ECS运行任务时,你的任务会放在到一个逻辑的资源池上,这个池叫做集群(Cluster)

如果你使用Fargate启动类型,那么ECS将会管理你的集群资源,你不需要管理容器的底层基础架构。

如果你使用EC2的启动类型,那么你的集群会是一组容器实例。

在Amazon ECS上运行的容器实例实际上是运行了ECS**容器代理(Container Agent)**的EC2实例。

特点:

  • 集群包含了多种不同类型的容器实例
  • 集群只能在同一个区域内
  • 一个容器实例只能存在于一个集群中
  • 可以创建IAM策略来限制用户访问某个集群

ECS容器代理(Container Agent)

容器代理会在Amazon ECS集群内的每个基础设施资源上运行。使用容器代理可以让容器实例和集群进行通信,它可以向ECS发送有关资源当前运行的任务和资源使用率的信息。

容器代理可以接受ECS的请求进行启动和停止任务。

  • 在某些ECS AMI上已经预安装好了
  • 可以在Amazon Linux,Ubuntu,Redhat等系统上运行
  • 不能在Windows上运行

ECS安全性

  • IAM角色

    • EC2实例可以使用IAM角色访问ECS
    • ECS任务使用IAM角色来访问服务和资源
  • 实例上需要关联一个安全组(Security Groups)

  • 可以在ECS集群上访问和配置EC2实例的操作系统层

Lambda

使用AWS Lambda,你无需配置和管理任何服务器和应用程序就能运行你的代码。只需要上传代码,Lambda就会处理运行并且根据需要自动进行横向扩展。因此Lambda也被称为**无服务(Serverless)**函数。创建您自己的按 AWS 规模、性能和安全性运行的后端服务。AWS Lambda 可以自动运行代码来响应多个事件,例如,通过 Amazon API Gateway 发送的 HTTP 请求、Amazon S3 存储桶中的对象修改、Amazon DynamoDB 中的表更新以及 AWS Step Functions 中的状态转换。

Lambda 在可用性高的计算基础设施上运行您的代码,执行计算资源的所有管理工作,其中包括服务器和操作系统维护、容量预配置和自动扩展、代码和安全补丁部署以及代码监控和记录。您只需要提供代码。
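下面是一个最小的Python版Lambda处理函数示意,假设通过API Gateway的Lambda代理集成触发(事件字段、返回格式按代理集成的约定组织,函数内容仅为演示):

import json


def lambda_handler(event, context):
    # event中携带HTTP请求内容,返回值按API Gateway代理集成的格式组织
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }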

AWS Lambda的特点

  • 没有服务器/无服务,或者说真实的服务器由AWS管理
  • 只需要为运行的代码付费,不需要管理服务器和操作系统
  • 持续性/自动的性能伸缩
  • 非常便宜
  • AWS只会在代码运行期间收取相应的费用,代码未运行时不产生任何费用
  • 代码的最长执行时间是15分钟,如果代码执行时间超过15分钟,则需要将1个代码细分为多个

AWS Elastic Beanstalk

AWS Elastic Beanstalk 是一项易于使用的服务,用于在熟悉的服务器(例如 Apache 、Nginx、Passenger 和 IIS )上部署和扩展使用 Java、.NET、PHP、Node.js、Python、Ruby、GO 和 Docker 开发的 Web 应用程序和服务。

您只需上传代码,Elastic Beanstalk 即可自动处理包括容量预配置、负载均衡、自动扩展和应用程序运行状况监控在内的部署工作。同时,您能够完全控制为应用程序提供支持的 AWS 资源,并可以随时访问底层资源。

Elastic Beanstalk 不额外收费 – 您只需为存储和运行应用程序所需的 AWS 资源付费。

采用Elastic BeanStalk的DevOps环境部署业务流程如下:

以简单web服务+ELB负载均衡的典型应用举例,需要运维和开发完成以下步骤:

  1. DevOps在Elastic Beanstalk服务中选择需要部署的服务架构后创建服务。
  2. DevOps在服务器上部署代码。

可以看出,基于Elastic Beanstalk服务的DevOps部署方式比传统部署方式方便灵活很多,摆脱了传统环境下开发和运维按部就班、泾渭分明的生产关系。Elastic Beanstalk可以做到开发运维一体化,一次部署搞定一切,且业务可以近乎无限地弹性扩展。

API Gateway

Amazon API Gateway可以让开发人员创建、发布、维护、监控和保护任何规模的API。你可以创建能够访问 AWS、其他 Web 服务以及存储在 AWS 云中的数据的API。

API Gateway没有最低使用成本,我们用多少服务内容就花费多少。

比如在最新的A Cloud Guru的serverless 会议上面提到了,他们整个网站都是基于API Gateway和Lambda的,并没有任何计算服务器(EC2,ECS等),永远不用担心性能和扩容的问题。并且他们每个月的花销只是580美金!

API Gateway和Lambda的结合可以构成如下图所示的无服务(Serverless)架构。

http://www.cloudbin.cn/wp-content/uploads/2020/02/api01.png

关于API Gateway,我们需要了解这些

  • 理解什么是API Gateway,它能用来做什么

  • API Gateway可以缓存内容,从而更快地将一些常用内容发送给用户

  • API Gateway是一种低成本的无服务(serverless)方案,而且它可以自动弹性伸缩(类似ELB,NAT网关)

  • 可以对API Gateway进行节流,以防止恶意攻击

  • 可以将API Gateway的日志放到CloudWatch中

  • 如果你使用JavaScript/AJAX来跨域访问资源,那么你需要保证在API Gateway上已经开启了CORS (Cross-Origin Resource Sharing)功能

    • 如果没有开启CORS功能,在使用API Gateway做跨域访问的时候,可能会出现错误 “Origin policy cannot be read at the remote resource”
    • 我们在S3的课程中也介绍过CORS的功能,可以参见S3的课程

Serverless即无服务器架构正在迅速崛起,AWS Lambda 和AWS API Gateway作为Serverless 架构主要的服务,正受到广泛关注,也有越来越多用户使用它们,享受其带来的便利。传统上来说,Lambda 和API Gateway主要用以实现RESTful接口,其响应输出结果是JSON数据,而实际业务场景还有需要输出二进制数据流的情况,比如输出图片内容。本文以触发式图片处理服务为例,深入挖掘Lambda 和 API Gateway的最新功能,让它们支持二进制数据,展示无服务器架构更全面的服务能力。

先看一个经典架构的案例——响应式主动图片处理服务。

Lambda配合 S3 文件上传事件触发在后台进行图片处理,比如生成缩略图,然后再上传到 S3,这是Lambda用于事件触发的一个经典场景。

在实际生产环境中这套架构还有一些局限,比如:

  • 后台运行的图片处理可能无法保证及时完成,用户上传完原图后需要立即查看缩略图时还没有生成。
  • 很多图片都是刚上传后使用频繁,一段时间以后就使用很少了,但是缩略图还不能删,因为也可能有少量使用,比如查看历史订单时。
  • 客户端设备类型繁多,一次性生成所有尺寸的缩略图,会消耗较多Lambda运算时间和 S3存储。
  • 如果增加了新的尺寸类型,旧图片要再生成新的缩略图就比较麻烦了。

我们使用用户触发的架构来实现实时图片处理服务,即当用户请求某个缩略图时实时生成该尺寸的缩略图,然后通过 CloudFront缓存在CDN上。这其实还是事件触发执行Lambda,只是由文件上传事件的主动触发,变成了用户访问的被动触发。但是只有原图存储在S3,任何尺寸的缩略图都不会生成文件存储到S3。要实现此架构方案,核心技术点就是让Lambda和API Gateway可以响应输出二进制的图片数据流。

总体架构图如下:https://s3.cn-north-1.amazonaws.com.cn/images-bjs/20170527-1.PNG

主要技术点:

  • 涉及服务都是AWS完全托管的,自动扩容,无需运维,尤其是 Lambda,按运算时间付费,省去 EC2 部署的繁琐。
  • 原图存在 S3 上,只开放给 Lambda 的读取权限,禁止其它人访问原图,保护原图数据安全。
  • Lambda 实时生成缩略图,尽管Lambda目前还不支持直接输出二进制数据,我们可以设置让它输出base64编码后的文本,并且不再使用JSON结构。配合API Gateway可以把base64编码后的文本再转换回二进制数据,最终就可以实现输出二进制数据流了。
  • 用 API Gateway 实现 图片访问的URL。我们常见的API Gateway用来做RESTful 的API接口,接口的 URL形式通常是 /resource?parameter=value,其实还可以配置成不用GET参数,而把URL中的路径部分作参数映射成后端的参数。
  • 回源 API Gateway,缓存时间可以用户自定义,建议为24小时。直接支持 HTTPS,支持享用AWS全球边缘节点。
  • CloudFront 上还可使用 Route 53 配置域名,支持用户自己的域名。

相比前述的主动生成,被动触发生成有以下便利或优势:

  • 缩略图都不存储在S3上,节省存储空间和成本。
  • 方便给旧图增加新尺寸的缩略图。

我们这样一个例子使用了Lambda和API Gateway的一些高级功能,并串联了一系列AWS全托管的服务,演示了一个无服务器架构的典型场景。虽然实现的功能比较简单,但是 Lambda函数可以继续扩展,提供更丰富功能,比如截图、增加水印、定制文本等,几乎满足任何的业务需求。相比传统的的计算能力部署,不论是用EC2还是ECS容器,都要自己管理扩容,而使用 Lambda无需管理扩容,只管运行代码。能够让我们从繁琐的重复工作中解脱,而把业务集中到业务开发上,这正是无服务器架构的真正理念和优势。

Route53

Amazon Route 53是一种高可用、高扩展性的云DNS服务。它为开发人员和企业提供一种非常可靠和经济的方法,把对用户友好的、易读的域名(比如aws.xiaopeiqing.com)转换为IP地址(例如120.79.65.207)。目前Amazon Route53已经支持IPv6。

在我们更加深入了解Route53之前,首先让我们先来看一下什么是DNS。

什么是DNS

DNS的全称是Domain Name System,它的作用就是将一个域名最终解析成一个IP地址。就像一个电话本,我们找到张三,就知道他的电话号码是1234;我们找到李四,就知道他的电话号码是2345。

比如我们在浏览互联网的时候都会记住一些比较常用的域名,例如www.baidu.com, www.qq.com, www.google.com等等,这些都是比较容易记住的名字。而如果要我们去记忆12位十进制组成的IP地址就相对困难很多了。

DNS在这里面就起到了翻译的功能,保证我们通过易读的名字能访问到IP地址和后台的真实服务器。

我们比较常访问的网站,都会使用到顶级域名,顶级域名包含了下面这些例子:

  • .com
  • .net
  • .cn
  • .edu
  • .gov

这些顶级域名都是被IANA (Internet Assigned Numbers Authority) 这个机构来进行统一的管理和统筹的。

目前,ICANN旗下的InterNIC负责管理和分发所有的互联网域名,确保互联网上的域名不会重复。

Alias记录 – 和CNAME类似,又叫做别名记录,可以将一个域名指向另一个域名。

  • 和CNAME最大的区别是,Alias可以应用在根域(Zone Apex)。即可以为xiaopeiqing.com的根域创建Alias记录,而不能创建CNAME
  • 别名记录可以节省你的时间,因为Route53会自动识别别名记录所指的记录中的更改。例如,假设example.com的一个别名记录指向位于lb1-1234.us-east-2.elb.amazonaws.com上的一个ELB负载均衡器。如果该负载均衡器的IP地址发生更改,Route53将在example.com的DNS应答中自动反映这些更改,而无需对包含example.com的记录的托管区域做出任何更改。

CNAME – CNAME (Canonical Name)可以将一个域名指向另一个域名。比如将aws.xiaopeiqing.com指向xiaopeiqing.com

CNAME 记录可以将 DNS 查询重定向到任何 DNS 记录。例如,您可以创建一条 CNAME 记录,该记录将查询从 acme.example.com 重定向到 zenith.example.com 或acme.example.org。您不需要使用 Route 53 作为您要将查询重定向到的域的 DNS 服务

您不能创建与托管区域(区域 APEX)同名的 CNAME 记录。对于域名 (example.com) 的托管区域和子域 (zenith.example.com) 的托管区域都是如此。

  • 弹性负载均衡器(ELB)没有固定的IPv4地址,在使用ELB的时候永远使用它的DNS名字。很多场景下我们需要绑定DNS记录到ELB的endpoint地址,而不绑定任何IP

  • 需要熟记Alias记录和CNAME的区别,也可以参考一下在别名和非别名记录之间做出选择

  • 考试中,如果出现选择Alias记录和CNAME记录的选择,95%的情况都要选择Alias记录

Routing Policy 路由策略

AWS Route53中有多种不同的路由策略(Routing Policy),我们可以根据自己的不同需求将我们的DNS解析到不同的目标上去。

  • 简单路由策略(Simple Routing Policy):提供单一资源的策略类型,即一个DNS域名指向一个单一目标简单路由策略(Simple Routing Policy)

    AWS Route 53,我们使用**简单路由策略(Simple Routing Policy)**来为域名创建一个标准的DNS记录,而不用复杂的例如基于延迟或者权重的方法。一般我们使用简单路由策略将我们的流量指向单一的资源,例如一台Web服务器。

    在简单路由策略配置里面,对于同一个DNS名我们只能创建一条目标,这个目标可能是一组IP地址,或者是一个Alias记录。

    使用简单路由策略将我们的域名指向到一个S3托管的静态网站。

  • 加权路由策略(Weighted Routing Policy):按照不同的权值比例将流量分配到不同的目标上去.使用AWS Route53加权路由策略(Weighted Rouing Policy),我们可以将多个资源关联到同一个域名(例如iteablue.com),并根据不同的权值比重将流量分发给不同的资源。

    我们可以使用加权路由策略来做负载均衡,或者软件测试。比如将5%的流量引导到测试应用上,观看测试应用的效果。

    我们可以为每一个记录都分配一个权值,每一条记录分配到的流量比例是该记录的权值除以所有记录的权值之和(API调用示例见本节末尾)

  • 延迟路由策略(Latency Routing Policy):根据网络延迟的不同,将与用户延迟最小的结果应答给最终用户。AWS Route53的**延迟路由策略(Latency Routing Policy)**可以让我们从延迟最低的AWS区域为用户处理请求,从而提高性能和速度。

    要使用基于延迟的路由策略,我们需要在Route53中创建多条DNS记录(延迟路由策略类型),并且将它们指向不同区域内的目标。当用户去访问这个DNS记录的时候,会先对不同目标的延迟做比较,并且选择延迟最低的一个目标进行访问。举个例子,如果我在东京区域和首尔区域都有ELB负载均衡器,我需要为2个ELB都创建延迟路由类型的DNS记录。当一个用户访问这个域名的时候,Route53会查看到东京区域以及到首尔区域之间的延迟,并且使用延迟较低的一个,把其结果反馈给用户。

    如果与首尔之间的延迟较低,那么用户最终会访问首尔的ELB负载均衡器。

  • 地理位置路由策略(Geolocation Routing Policy):根据用户所在的地理位置,将不同的目标结果应答给用户。**地理位置路由策略(Geolocation Routing Policy)**可以根据用户所在的位置来返回不同的DNS结果。

    比如可以让位于东京的用户访问东京的ELB负载均衡器,位于首尔的用户访问首尔的ELB负载均衡器,位于新加坡的用户也访问首尔的ELB负载均衡器等。

    使用基于地理位置的路由策略,我们可以对内容进行本地化(提供当地的语言和特色);也可以向某些地理位置提供内容服务,而向其他地理位置提供另外的内容服务,甚至不提供服务。

    我们可以按大陆(七大洲)、国家/地区来指定地理位置,并且地理区域范围越精细则优先级越高。

    Route53判定地理位置的依据是用户的源IP地址,有一些IP地址可能无法识别为具体的地理位置,因此我们最好设置一条默认的匹配规则。在这条默认的匹配规则里,没有被任何国家/地区所匹配的位置,还是可以访问到某个内容。

  • 故障转移路由策略(Failover Routing Policy):配置主动/被动(Active/Passive)的故障转移策略,保证DNS解析的容灾
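下面以加权路由为例,给出一个用 boto3 创建两条同名加权记录(95%生产、5%灰度)的简单示意,托管区ID、域名和IP地址均为假设的占位值:

import boto3

route53 = boto3.client("route53")

# 同名记录用SetIdentifier区分,各记录按 权值/权值总和 的比例分配流量
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "production",
                "Weight": 95,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "canary",
                "Weight": 5,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.20"}],
            },
        },
    ]},
)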

CloudFront CDN

Amazon CloudFront是一种全球**内容分发网络(CDN)**服务,可以安全地以低延迟和高传输速度向浏览者分发数据、视频、应用程序和API。

CDN的全称是Content Delivery Network,即内容分发网络。基本思路是解决传统情况下用户访问网站的时候直接访问源服务器,而利用CDN你访问的是位于全球各地的分发网络(边缘站点),从而达到更快的访问速度和减少源服务器的负载。

在没有CDN的情况下,位于全球各地的用户都需要跨越长距离的地理位置访问位于某一个地方的源服务器。一方面这样的情况下网络延迟是一个非常严重的问题,另一方面也会对源服务器的负载有很大的影响。

而在有CDN的情况下,用户不直接访问源服务器,而是访问位于全球不同地方的Edge站点。这些边缘站点都保存了源服务器的文件缓存,也离用户最近,因此能快速地提供用户所需要的信息内容。

知识点

  • 边缘站点(Edge Location):边缘站点是内容缓存的地方,它存在于多个网络服务提供商的机房,它和AWS区域和可用区是完全不一样的概念。截至2018年中,AWS目前一共有100多个边缘站点。
  • 源(Origin):这是CDN缓存的内容所使用的源,源可以是一个S3存储桶,可以是一个EC2实例,一个弹性负载均衡器(ELB)或Route53,甚至可以是AWS之外的资源。
  • 分配(Distribution):AWS CloudFront创建后的名字
  • 分配分为两种类型,分别是
    • Web Distribution:一般的网站应用
    • RTMP (Real-Time Messaging Protocol):媒体流
  • 你不只是可以从边缘站点读取数据,你还可以往边缘站点写入数据(比如上传一个文件),边缘站点会将你写入的数据同步到源上
  • 在CloudFront上的文件会被缓存在边缘节点,缓存的时间是TTL(Time To Live)。文件存在超过这个时间,缓存会被自动清除
  • 如果在到达TTL时间之前,你希望更新文件,那么你也可以手动清除缓存,但你将会被AWS收取一定的费用

Security Group安全组

在每一个EC2实例创建的过程中,你都会被要求为其指定一个安全组(Security Group)。这个安全组充当了主机的虚拟防火墙作用,能根据协议、端口、源IP地址来过滤EC2实例的入向和出向流量。

  • 如果某个流量被入方向的规则放行,那么无论它的出站规则如何,它的出方向响应流量都会被无条件放行
  • 如果从主机发出去的出站请求,无论入站规则如何,该请求的响应流量都会被无条件放行
  • 你不能使用安全组来禁止某些特定的IP地址访问主机,要达到这个效果需要使用网络访问控制列表(NACL)
  • 在安全组内只能设置允许的条目,不能设置拒绝的条目
  • 安全组的源IP地址可以选择所有IP地址(0.0.0.0/0),特定的IP地址(比如8.8.8.8/24),或者处于同一个VPC中的其他安全组
  • 一个流量只要被安全组的任何一条规则匹配,那么这个流量就会被放行
  • 安全组会关联到EC2实例的ENI(网络接口)上

举个例子,如下图所定义的安全组规则。

  • 安全组会跟踪TCP/22的入向和出向流量,因为源IP地址定义的是具体的地址(203.0.113.1/32),而不是所有IP地址(0.0.0.0/0)
  • 安全组不会跟踪TCP/80的流量,因为其入向和出向的流量都是针对所有IP地址(0.0.0.0/0)
  • 安全组会跟踪ICMP流量,因为无论规则如何,安全组都会跟踪ICMP流量
入站规则

| 协议类型 | 端口号 | 源 IP |
| --- | --- | --- |
| TCP | 22 (SSH) | 203.0.113.1/32 |
| TCP | 80 (HTTP) | 0.0.0.0/0 |
| ICMP | 全部 | 0.0.0.0/0 |

出站规则

| 协议类型 | 端口号 | 目的地 IP |
| --- | --- | --- |
| 全部 | 全部 | 0.0.0.0/0 |

安全组(Security Group)和网络访问控制列表(Network Access Control List)都扮演了类似的防火墙功能。

CloudWatch

可以让你监控AWS上运行的资源的状态,方便你收集和跟踪资源的各项指标,并且可以设置相应的警报和自动应对的更改。

CloudWatch中的几个参数:

  • 面板(Dashboards)-可创建自定义面板来方便观察你AWS环境中的不同监控对象
  • 告警(Alarms)- 当某个监控对象超过阈值时,会给你发出告警信息
  • 事件(Events)- 针对AWS环境中所发生的变化进行的反应
  • 日志(Logs)-Cloudwatch日志帮助你收集、监控和存储日志信息

CloudWatch的其他特点:

  • 基本监控免费,采样频率为5分钟,监控CPU,磁盘IO,网络流量等
  • 详细监控收费,采样频率为1分钟,监控内容和基本监控一样
  • 上面两种监控模式都不能监控内存使用率,监控内存需要使用自定义指标(Custom Metrics),上报方式见本节末尾的示例
  • 监控数据会保存15个月
  • CloudWatch还可以监控弹性伸缩组(Auto Scaling Group),弹性负载均衡器(ELB),EBS等等
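下面是通过自定义指标上报内存使用率的简单示意(命名空间、实例ID和数值均为假设的占位值;实际环境中通常由实例内的CloudWatch Agent或定时脚本采集后上报):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": 62.5,
    }],
)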

CloudTrail

AWS CloudTrail 是一项支持对您的 AWS 账户进行监管、合规性检查、操作审核和风险审核的服务。借助 CloudTrail,您可以记录日志、持续监控并保留与整个 AWS 基础设施中的操作相关的账户活动。CloudTrail 提供 AWS 账户活动的事件历史记录,这些活动包括通过 AWS 管理控制台、AWS 开发工具包、命令行工具和其他 AWS 服务执行的操作。此事件历史记录可以简化安全性分析、资源更改跟踪和问题排查工作。 此外,您可以使用 CloudTrail 来检测 AWS 账户中的异常活动。这些功能可帮助您简化分析和问题排查。

RDS

关系型数据库(SQL)

关系数据库,是建立在关系模型基础上的数据库,借助于集合代数等数学概念和方法处理数据库中的数据。

用地球的语言来讲,关系是一个由行和列组成的表格,一个关系数据库可以包含多个这样的表格。

也可以简单理解为关系数据库就是一个由多个工作表组成的Excel表格。

我们可以用列来定义一些预设参数,比如姓名,性别,地址,年龄等信息;并且用每一行来代表不同的实体,比如张三的信息,李四的信息。行和列就构成了数据的集合。

Amazon Relational Database Service (RDS) 可以为我们提供在AWS云上轻松设置、操作和扩展我们的关系数据库。AWS会为RDS提供高性能、高可用、安全和兼容性,我们只需要专注于管理数据库本身就可以了。

管理和使用AWS RDS,我们不需要管理任何操作系统层面的东西,不需要为OS打补丁和更新,而是直接管理RDS程序和版本。

Amazon RDS支持的关系数据库有:

  • SQL Server
  • Oracle
  • MySQL Server
  • PostgreSQL
  • Aurora
  • MariaDB
非关系数据库(NoSQL)

非关系数据库又叫做NoSQL,全称是Not Only SQL

NoSQL主要用于超大规模数据的存储(比如Facebook或Google每天所收集的万亿比特的数据),这些数据没有固定的模式,不需要预设置好数据库的所有参数。

举个例子,如果社交平台去收集用户的人物画像信息,这些信息可能会包括一些自然属性:例如性别,年龄,姓名;财富:收入水平,是否有固定资产,有哪些固定资产;家庭情况:是否结婚,有几个小孩和家庭成员;购物习惯:喜欢网购还是实体店购物,喜欢到哪个电商平台购物,购物的金额和频率是什么;位置信息:在哪个城市生活,常去的地理位置……

这些千奇百怪的数据,如果保存在关系数据库(RDBMS)中,我们会没有办法很好地预定义所有的属性(列),然后添加我们的记录;也没有办法在后期添加额外的属性。

很多情况下,每一个目标的属性都不一样,有一些属性A有,但B没有;又一些属性B有,但C没有。

在这种情况下,NoSQL更适合存储这些海量的、无规则的信息。NoSQL也适用于现在物联网(IoT)产生的数据。

目前,AWS所提供的NoSQL服务叫做DynamoDB

NoSQL的基本概念:

  • 数据库(Database)
    • 集合(Collection)- 相当于关系数据库中的表
    • 文档(Document)- 相当于关系数据库中的行
    • 键值(Key Value Pairs)- 相当于关系数据库中的列

NoSQL的键值会存放在类似JSON的对象中。

DynamoDB

DynamoDB是一种非关系数据库(NoSQL),可在任何规模提供可靠的性能。DynamoDB能在任何规模下实现不到10毫秒的一致响应,并且它的存储空间几乎不受限制。

DynamoDB的特点:

  • 使用SSD存储

  • 数据自动分散存储在同一区域内3个地理上相互独立的设施(可用区)中

  • 最终一致性读取(Eventual Consistent Reads)

    • 默认的设置,即如果写入数据到DynamoDB之后马上读取该数据,可能会读取到旧的信息
    • DynamoDB需要时间(通常在一秒内)把写入的数据同步到所有存储副本
  • 强一致性读取(Strongly Consistent Reads)

    • 在写入数据到DynamoDB之后马上读取该数据,会等所有写入操作以及数据同步全部完成后再回馈结果
    • 即强一致性读取一定会读到最新的数据结果
  • 如果我们需要增加DynamoDB的规格,我们可以直接在AWS管理控制台上进行更改,并且不会有任何系统downtime

  • 除非您指定其他读取方式,否则 DynamoDB 将使用最终一致性读取。读取操作(例如 GetItem、Query 和 Scan)提供了一个 ConsistentRead 参数。如果您将此参数设置为 true,DynamoDB 将在操作过程中使用强一致性读取,如下面的示例所示。
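下面是用 boto3 进行强一致性读取的简单示意(表名、主键名和键值均为假设的占位值):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")

# 默认是最终一致性读取;把ConsistentRead设为True即为强一致性读取
item = table.get_item(
    Key={"order_id": "1001"},
    ConsistentRead=True,
)
print(item.get("Item"))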

Redshift

Amazon Redshift是一个快速、功能强大、完全托管的PB级别数据仓库服务。用户可以在刚开始使用几百GB的数据,然后在后期扩容到PB级别的数据容量。

如之前的课程中所说,Redshift是一种**联机分析处理OLAP(Online Analytics Processing)**的类型,支持复杂的分析操作,侧重决策支持,并且能提供直观易懂的查询结果。

再举个之前提到的例子:

如果一个传统的电商发展到一定的规模,运营者/管理层需要做更加精细的用户群体分析,比如“20-30岁的男性在过去一年内的购买行为与电商促销活动之间的关系”,那么就要用到数据仓库了。

数据仓库在数据库层面和基础架构层面都与**联机事务处理OLTP(Online Transaction Processing)**不太一样。

Redshift的一些特点:

  • 单节点(160GB)部署模式
  • 多节点部署模式
    • 领导节点:管理连接和接收请求
    • 计算节点:存储数据,执行请求和计算任务,最多可以有128个计算节点
  • Columnar Data Storage

  • Advanced Compression

  • Massively Parallel Processing (MPP)

  • 目前Redshift只能部署在一个可用区内,不能跨可用区或者用类似RDS的高可用配置

    • Redshift是用来产生报告和做商业分析的,并不需要像生产环境一样对可用性有高保证
  • 我们可以对Redshift做快照,并且在需要的时候恢复这个快照到另一个可用区

Redshift安全

  • Redshift传输过程中使用SSL加密
  • Redshift使用AES-256进行加密
  • 默认情况下Redshift帮我们解决了密钥管理的问题
    • 我们也可以使用自己的密钥
    • 或者使用AWS Key Management Service (KMS)来管理密钥
Aurora

Amazon Aurora是一种兼容MySQL和PostgreSQL的商用级别关系数据库,它既有商用数据库的性能和可用性(比如Oracle数据库),又具有开源数据库的成本效益(比如MySQL数据库)。

Aurora的速度可以达到MySQL数据库的5倍,同时它的成本只是商用数据库的1/10

Aurora和其他RDS服务类似,AWS会负责各种管理任务,例如硬件、数据库补丁和数据库备份等。

另外,Aurora还有以下这些特点:

  • 10GB的起始存储空间,可以增加到最大64TB的容量
  • 计算资源可以提升到最多32vCPU和244GB的内存
  • Aurora会将你的数据复制2份到每一个可用区内,并且复制到最少3个可用区,因此你会有6份数据库备份
  • 2份及以下的数据备份丢失,不影响Aurora的写入功能
  • 3份及以下的数据备份丢失,不影响Aurora的读取功能
  • Aurora有自动修复的功能,AWS会自动检查磁盘错误和数据块问题并且自动进行修复
  • 有两种数据库只读副本
    • Aurora Replicas(最多支持15个)
    • MySQL Replica(最多支持5个)
    • 两者的区别是Aurora主数据库出现故障的时候,Aurora Replicas可以自动变成主数据库,而MySQL Replica不可以
OLTP/OLAP

数据处理大致可以分为两类,分别是OLTP和OLAP。

联机事务处理OLTP(Online Transaction Processing)

OLTP是传统的关系数据库的主要应用,是基本的日常事务处理,例如银行交易等。

OLTP包括了以上所说的关系数据库SQL Server,Oracle,MySQL Server,PostgreSQL,Aurora,MariaDB等。

联机分析处理OLAP(Online Analytics Processing)

OLAP是数据仓库(Data Warehousing)系统的主要应用,支持复杂的分析操作,侧重决策支持,并且能提供直观易懂的查询结果。OLAP是用来做商业智能(Business Intelligence)方面的分析的。

OLAP常用的流行工具是AWS Redshift, Greenplum, Hive等

说了这么多可能大家的理解都还是比较模糊,下面来举一个通俗一点的例子。

如果一个电商在网上卖产品,那么关于产品的信息,用户的信息,交易的信息都可以存放在OLTP类型的关系数据库上。如果用户需要查询产品有关的信息,或者运营者需要查询产品的销量,产品的库存等都可以直接通过读取数据库获取到信息。

但是当电商发展到一定的规模,运营者/管理层需要做更加精细的用户群体分析,比如“20-30岁的男性在过去一年内的购买行为与电商促销活动之间的关系”,那么就要用到数据仓库了。

数据仓库有更好地读取速度和更加便利的分析和查询方式。

Elasticache

Elasticache是AWS提供的分布式对象缓存系统,可以有效地提升现有应用程序的性能。利用Elasticache,用户可以从高吞吐和低延迟的内存数据存储中检索数据,

Elasticache通过在内存中缓存数据来减少对象读取数据库的次数,减轻了数据库的负载,以及提高了网站的访问速度(内存的访问速度比磁盘的访问速度高很多)。一般来说我们会把相对来说更新频繁的“热数据”放在Elasticache中,把“冷数据”还是放在数据库中,以支持及时的更新。

目前Elasticache支持两种业界流行的引擎,分别是:

  • Memcached
  • Redis

在实际场景中,如果我们有对数据库的读写有很高的要求,并且数据的更新没有那么频繁,我们就可以考虑使用Elasticache来减少我们的数据库负担,增加数据库读取的性能。

与Read Replicas不同的是Elasticache是缓存数据库的内容,Read Replicas会异步地同步数据库的内容。另一个不同是,Elasticache是存储在内存中的,因此比起构建在SSD的Read Replicas会快不止一个数量级。

RDS备份

AWS RDS提供了两种不同的备份方式,分别是自动备份(Automated Backups)快照(Snapshots)

自动备份(Automated Backups)

  • 你可以在创建数据库的时候定义自动备份的保留时间(Retention Period),这个时间的设置范围是1天~35天
  • 你也可以在创建数据库之后更改这个保留时间(Retention Period)
  • 如果需要,你可以将数据库恢复到保留时间内的任何时间点
  • 在你删除数据库的时候,所有的自动备份都会被删除
  • RDS的自动备份会保存在Simple Storage Service (S3)上
  • 我们可以定义自动备份的时段,在这个备份时段内数据库将会自动进行备份
  • 在自动备份的过程中,数据库存储的I/O可能会暂停(通常不到几秒),数据库性能会降低,但部署了Multi-AZ的数据库不受影响

快照(Snapshots)

  • RDS的快照需要手动进行
  • 在你删除数据库的时候,快照不会被删除,不像自动备份那样
  • 在创建快照的过程中,数据库存储的I/O可能会暂停(通常不到几秒),数据库性能会降低,但部署了Multi-AZ的数据库不受影响

数据库加密

现在AWS RDS的所有关系数据库都支持加密。一旦启用了加密的功能,所有数据的存储都将会被加密,包括数据库本身、自动备份、快照和只读副本(read replicas)。

  • 如果在创建数据库的时候没有加密,我们不能在事后对其进行加密
  • 但我们可以创建这个数据库的快照,复制该快照并且加密这个复制的版本
Multi-AZ高可用

我们可以把AWS RDS数据库部署在多个**可用区(AZ)**内,以提供高可用性和故障转移支持。

使用Multi-AZ部署模式,RDS会在不同的可用区内配置和维护一个主数据库和一个备用数据库,主数据库的数据会自动复制到备用数据库中。

使用这种部署模式,可以为我们提供数据冗余,减少在系统备份期间的I/O冻结(上面有提到)。同时,更重要的是可以防止数据库实例的故障和单个可用区的故障。

如下图所示,我们可以在两个可用区内分别部署主数据库和备用数据库。

目前Multi-AZ支持以下数据库:

  • Oracle
  • PostgreSQL
  • MySQL
  • MariaDB
  • SQL Server

值得注意的是,Aurora数据库本身就支持多可用区部署的高可用设置,因此不需要为Aurora数据库特别开启这个功能。

在上次实验中我们有讲到,创建了RDS数据库之后我们会得到一个数据库的URL Endpoint。在开启Multi-AZ的情况下,这个URL Endpoints会根据主/备数据库的健康状态自动解析到IP地址。对于应用程序来说,我们只需要连接这个URL地址即可。

高可用的设置只是用来解决灾备的问题,并不能解决读取性能的问题;要提升数据库读取性能,我们需要用到Read Replicas。

只读副本(Read Replicas)

我们可以在源数据库实例的基础上,复制一种新类型的数据库实例,称之为只读副本(Read Replicas)。我们对源数据库的任何更新,都会异步更新到只读副本中。

因此,我们可以将应用程序的数据库读取功能转移到Read Replicas上,来减轻源数据库的负载。

对于有大量读取需求的数据库,我们可以使用这种方式来进行灵活的数据库扩展,同时突破单个数据库实例的性能限制。

Read Replicas还有如下的特点:

  • Read Replicas是用来提高读取性能的,不是用来做灾备的
  • 要创建Read Replicas需要源RDS实例开启了自动备份的功能
  • 可以为数据库创建最多5个Read Replicas
  • 可以为Read Replicas创建Read Replicas(如下图所示)
  • 每一个Read Replicas都有自己的URL Endpoint
  • 可以为一个启用了Multi-AZ的数据库创建Read Replicas
  • Read Replicas可以提升成为独立的数据库
  • 可以创建位于另一个区域(Region)的Read Replicas

目前Read Replicas支持以下数据库:

  • Aurora
  • PostgreSQL
  • MySQL
  • MariaDB
  • Oracle

SQS

**Amazon Simple Queue Service (SQS)**是一种完全托管的消息队列服务,可以让你分离和扩展微服务、分布式系统和无服务应用程序。

在讲解SQS之前,首先让我们了解一下什么是消息队列。

消息队列

还是举一个电商的例子,一个用户在电商网站下单后付款后,应用服务器马上查询/更新数据库,连接支付网关并查询支付状态,通知短信/邮件网关发送相关短信/邮件,更新库存系统,更新物流系统……最后返回信息给用户,“您的下单已成功”。

但是如果网站的访问数很大,或者正值促销活动(比如淘宝双11,京东618)呢?

这个时候每一个流程都是一个瓶颈,一旦某一个地方达到了瓶颈或者出现故障,又或者用户下单的时间比程序处理订单的时间还要久的情况下,都会让用户得不到成功下单的结果,或者得到结果的时间非常长,导致用户体验不好。

这个时候,我们就要考虑到应用程序的解耦(decouple)

我们可以引入消息队列,让不同的应用程序之间打断强连接的关系,互不干扰。

应用服务器在接收到用户付款的订单之后,就把相关的信息丢到消息队列,并且返回用户“您的下单已成功,请稍后查看详细订单状态”。

而支付网关、短信/邮件网关、库存系统、物流系统等等可以到消息队列里面拉取信息,并且进行相关的数据更新和操作。

这些操作可能不需要是实时的,但是至少能保证这些队列里的信息最终都会被执行。比如下单后我不一定马上能收到短信/邮件的通知,我可能5分钟/10分钟之后才收到这些信息通知,但这个并不影响正常的业务。

这样子,消息队列就起到了连接上层业务和下层业务的作用。

Amazon SQS相当于提供了一个分布式、高可用、高性能的消息队列服务。

SQS特点

SQS有两种不同类型的队列,它们分别是:

  • 标准队列(Standard Queue)
  • FIFO队列(先进先出队列)

标准队列

标准队列拥有无限的吞吐量,所有消息都会至少传递一次,并且它会尽最大努力进行排序。

标准队列是默认的队列类型。

FIFO队列

FIFO (First-in-first-out)队列在不使用批处理的情况下,最多支持300TPS(每秒300个发送、接收或删除操作)。

在队列中的消息都只会不多不少地被处理一次

FIFO队列严格保持消息的发送和接收顺序

  • SQS是靠应用程序去拉取的,而不能主动推送给应用程序,推送服务我们使用SNS(Simple Notification Service)
  • 单条消息最大为256 KB
  • 消息会在队列中保存1分钟~14天,默认时间是4天
  • 可见性超时(Visibility Timeout)
    • 即当SQS队列收到新的消息并且被拉取走进行处理时,会触发Visibility Timeout的时间。这个消息不会被删除,而是会被设置为不可见,用来防止该消息在处理的过程中再一次被拉取
    • 当这个消息被处理完成后,这个消息会在SQS中被删除,表示这个任务已经处理完毕
    • 如果这个消息在Visibility Timeout时间结束之后还没有被处理完,则这个消息会设置为可见状态,等待另一个程序来进行处理
    • 因此同一个消息可能会被处理两次(或以上)
    • 这个超时时间最大可以设置为12小时
  • 标准SQS队列保证了每一个在队列内的消息都至少会被处理一次
  • 长轮询(Long Polling)
    • 默认情况下,Amazon SQS使用短轮询(Short Polling),即应用程序每次去查询SQS队列,SQS都会做回应(哪怕队列一直是空的)
    • 使用了长轮询,应用程序每次去查询SQS队列,SQS队列不会马上做回应,而是等到队列里有消息可处理时,或者等到设定的超时时间再做出回应
    • 长轮询可以一定程度减少SQS的花销(接收消息的调用示例见本节末尾)
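下面是使用长轮询接收并删除SQS消息的简单示意(队列URL为假设的占位值):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/orders"

# 长轮询:WaitTimeSeconds最大为20秒,队列为空时不会立即返回,从而减少空轮询
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
    VisibilityTimeout=60,   # 处理期间消息对其他消费者不可见
)

for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    # 处理成功后必须显式删除,否则可见性超时后消息会被再次投递
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])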

SNS (Simple Notification Service)简介

SNS (Simple Notification Service) 是一种完全托管的发布/订阅消息收发和移动通知服务,用于协调向订阅终端节点和客户端的消息分发。

与SQS (Simple Queue Service)一样,SNS也可以轻松分离和扩展微服务、分布式系统和无服务应用程序,对程序进行解耦(本节末尾附有一个简单的发布/订阅示例)。

我们可以使用SNS将消息推送到SQS消息队列中、AWS Lambda函数或者HTTP终端节点上。

SNS通知还可以发送推送通知到iOS,安卓,Windows和基于百度的设备,也可以通过电子邮箱或者SMS短信的形式发送到各种不同类型的设备上。

SNS的一些特点

  • SNS是实时的推送服务(Push),有别于SQS的拉取服务(Pull/Poll)
  • 拥有简单的API,可以和其他应用程序兼容
  • 可以通过多种不同的传输协议进行集成
  • 便宜、用多少付费多少的服务模型
  • 在AWS管理控制台上就可以进行简单的操作

SNS能推送的目标

  • HTTP
  • HTTPS
  • Email
  • Email-JSON
  • SQS
  • Application
  • Lambda
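下面是一个SNS发布/订阅的简单示意(主题名、SQS队列ARN均为假设的占位值;实际使用时还需要为队列配置允许SNS投递消息的队列策略):

import boto3

sns = boto3.client("sns")

# 创建主题,并让一个SQS队列订阅它
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:ap-northeast-1:123456789012:order-worker",
)

# 发布一条消息,SNS会把它推送给所有订阅者
sns.publish(
    TopicArn=topic_arn,
    Subject="OrderCreated",
    Message='{"order_id": "1001", "amount": 99.0}',
)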

SWF (Simple Workflow Service)

Amazon Simple Workflow Service (Amazon SWF) 提供了给应用程序异步、分布式处理的流程工具。

SWF可以用在媒体处理、网站应用程序后端、商业流程、数据分析和一系列定义好的任务上。

当用户在电商网站上下单后,即启动了该流程,该流程包含了4个任务(tasks):

  1. SWF验证用户订单信息
  2. 如果订单有效,则进行信用卡付款流程
  3. 如果付款完毕,则进行人工发货
  4. 如果发货完成,则保存订单信息到数据库,并结束流程

在这个流程中,每一个任务都是按顺序执行的,只有当上一个任务成功完成后才能执行下一个任务。

SWF除了支持顺序执行的流程之外,也支持并行处理的流程,即一个任务的完成可以触发多个任务同时执行。

基本的SWF概念

  • SWF发起者(Starter)
    • 可以激活一个工作流的应用程序,可能是电商网站上下单的行为,或者是在手机APP上点击某个按钮
  • SWF决策者( Decider)
    • SWF Decider决定了任务之间的协调,处理的顺序,并发性和任务的逻辑控制
  • SWF参与者(Worker)
    • SWF Worker可以在SWF中获取新的任务,处理任务,并且返回结果
  • SWF域(Domains)
    • 域包含了工作流的所有组成部分,比如工作流类型和活动类型

SWF决策者和参与者可以是运行在AWS上的EC2实例或者其他计算资源,SWF只是保存不同的任务,把这些任务分配给worker,并且监控他们的任务处理进展。

SWF和SQS的区别

  • SWF是面向任务的;SQS是面向消息的;
  • SWF保证了每一个任务都只执行一次而不会重复;标准的SQS消息可能会被处理多次
  • SWF保证了程序内所有任务都正常被处理,并且追踪工作流;而SQS只能在应用程序的层面追踪工作流
  • SWF内的任务最长可以保存1年;SQS内的消息最长只能保存14天

Kinesis简介

Amazon Kinesis可以让你轻松收集、处理和分析实时流数据。利用Amazon Kinesis,你可以在收到数据的同时对数据进行处理和分析,无需等到数据全部收集完成才进行处理。

在深入了解Kinesis之前,我们先来看看什么是数据流。

数据流

数据流是从成千上万的数据源上持续产生的数据,并且这些数据都很小(KB级别),它们可能是:

  • 电商网站上的订单信息(比如京东,淘宝)
  • 股票信息
  • 游戏信息
  • 社交网络信息(微信/微博的信息流)
  • 地理位置信息(滴滴)
  • 物联网数据
Kinesis服务

Kinesis目前有不同的功能服务,我们需要了解每一个服务有什么不同。这些服务分别是:

  • Kinesis Data Streams (Kinesis Streams):使用自定义的应用程序分析数据流
  • Kinesis Video Streams:捕获、处理并存储视频流用于分析和机器学习(Machine Learning)
  • Kinesis Data Firehose:将数据加载到AWS数据存储上
  • Kinesis Data Analytics:使用SQL分析数据流

借助Amazon Kinesis,您可以对传统上使用批处理进行分析的数据执行实时分析。常见的流处理使用案例包括在不同应用程序之间共享数据、流式提取-转换-加载(ETL)以及实时分析。例如,您可以使用Kinesis Data Firehose将流数据连续加载到S3数据湖或分析服务中。示例:使用Kinesis Data Firehose和Kinesis Data Analytics进行点击流分析。

Kinesis Data Streams

Amazon Kinesis Data Streams可以实时收集和处理大型数据流,这些数据会被处理并且发送到多种AWS服务中去,也可以生成报警、动态更改定价和广告战略等。

如图所示,**创建者(Producer)**会持续将数据推送到Kinesis Data Streams中,这些创建者包括了EC2实例、用户的PC终端、移动终端,服务器等。

Kinesis Data Streams由一组**分片(Shards)**组成,每个shards都有一系列的数据记录,每一个数据记录都有一个分配好的序列号。

数据记录在添加到流之后会保存一定的时间,这个保留周期(Retention Period)默认是24小时,但可以手动设置为最多7天

**使用者(Consumer)**会实时地对Kinesis Streams里的内容进行处理,并将最终结果推送到AWS服务,例如Amazon S3,DynamoDB,Redshift,Amazon EMR或者Kinesis Firehose。
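下面是创建者(Producer)向数据流写入一条记录的简单示意(流名称、数据内容均为假设的占位值):

import json
import boto3

kinesis = boto3.client("kinesis")

# PartitionKey决定这条记录被分配到哪个分片(Shard)
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user_id": "u-42", "page": "/cart"}).encode("utf-8"),
    PartitionKey="u-42",
)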

Kinesis Video Streams

Kinesis Video Streams主要用来进行实时的视频处理,或者批量进行视频分析。

Kinesis Video Streams可以捕获来自多种设备类型的视频流数据(比如智能手机、网络摄像头、车载摄像头、无人机等)。

其工作的流程和Data Streams类似,如下图所示。

Kinesis Data Firehose

Kinesis Data Firehose可以让我们的实时数据流传输到我们定义的目标,包括Amazon S3,Amazon Redshift,Amazon Elasticsearch Service (ES)和Splunk。

通过Kinesis Firehose,我们可以将数据流经过转换之后传输到S3存储桶上去,并且另外将源数据备份一份到另一个S3存储桶。

Kinesis Data Analytics

使用Kinesis Data Analytics,我们可以使用标准的SQL语句来处理和分析我们的数据流。这个服务可以让我们使用强大的SQL代码来做实时的数据流分析、创建实时的参数。

Organization

**AWS组织(Organization)**是一项账户管理服务,它可以将你的多个AWS账号整合到集中管理的组织中。

AWS组织包含了**整合账单(Consolidated Billing)**和账号管理功能,通过这些功能,你能够更好地满足企业的预算、安全性和合规性的要求。

如下图所示,我们可以在AWS Organization内创建一个主账户,并且创建不同的组织单元(OU)。每一个OU可以代表一个部门或者一个系统环境,如下图的开发、测试和生产环境。http://www.cloudbin.cn/wp-content/uploads/2020/02/org01.jpg

每一个OU下面可以分配若干个不同的AWS账号,每一个账号拥有不同的访问AWS的权限。

我们可以使用访问策略来控制每一个OU的权限,OU下面可以再创建其他的OU,最多支持5层嵌套。

AWS Organization内的一个最大功能是整合账单(Consolidated Billing),它的作用是将多个AWS账户的账单都合并为同一个账单进行付款。

可以简单理解为,整合账单的主账号就是财务部门的账号,财务部门负责帮所有其他OU(开发部门,运维部门,IT基础架构部门等)产生的AWS费用进行付款。

AWS整合账单有如下优势:

  • 单一的账单:你不需要为每个账号单独处理账单,所有账号的账单都被统一成一个
  • 方便追踪:你可以很容易追踪每个账号的具体花费
  • 使用量折扣:AWS的很多服务是用得越多单价越便宜,因此如果账单进行合并更容易达到便宜折扣的门槛
  • 无额外费用:整合账单不单独收费

知识点

  • 整合账单主账号最好使用多因素认证(Multi-Factor Authentication)
  • 整合账单主账号最好只用来管理账单,不拥有任何访问AWS资源的权限
  • 一个Organization默认只能管理20个账号,超过这个数字需要找AWS Support

跨账号访问权限(Cross Account Access)

很多AWS客户都会管理多个不同的AWS账号,比如之前提到的不同的开发环境、测试环境、生产环境等都各分配不同的账号。这样子他们可以对不同类型的账号赋予不同等级和类型的权限,可以在账号和权限的安全性上有更好的控制。

那么一般情况下,一个开发者使用开发环境做了一些变更后希望登录到测试环境去做一些系统的测试,那么他必须注销他的账号,然后使用另外的用户名密码登录到测试账号。这样的复杂操作有时候对开发来说简直是个噩梦。

有了跨账号访问权限(Cross Account Access),你可以在AWS管理控制台上轻松地进行账号(角色)的切换,让你在不同的开发账号(角色)、测试账号(角色)、生产账号(角色)中进行快捷的切换。

开发账号和生产账号的切换

假设一个公司里面有两种账号,生产账号开发账号。开发账号中的用户有时候需要访问生产账号中的资源,比如将开发环境的代码推送到生产环境中等。

如下图所示,我们可以让开发账号拥有一定的访问权限,让其访问生产账号中的S3资源。

Security Token Service

使用**AWS Security Token Service (STS)**服务,你可以创建和控制对你的AWS资源访问的安全凭证。

这种临时的凭证的工作方式和长期存在于AWS账户中的IAM用户的工作方式类似,但会存在以下的区别:

  • STS服务产生的凭证是临时的,它的有效期可以是几分钟到几小时,一旦过了这个时效时间,你的凭证就会失去作用,无法再访问相应的资源
  • IAM会长期保存在AWS账户中,而临时凭证只有在需要的时候才动态生成

STS的临时凭证可以由以下几种方式产生:

  • 企业联合身份验证(Federation)

    • 使用了基于Security Assertion Markup Language (SAML) 的标准
    • 可以使用微软Active Directory的用户来获取临时权限,不需要创建IAM用户
    • 支持单点登录(Single Sign On, SSO)
  • Web联合身份验证(Federation with Mobile Apps)

    • 使用已知的第三方身份供应商(Amazon, Facebook, Google或其他OpenID提供商)来登录
  • 跨账户访问

    • 让一个账号内的用户访问同一个组织(Organization)内其他账号的AWS资源
  • LDAP和AWS STS之间的通信需要通过Identity Broker (IdP),而IdP一般需要自己开发

  • IdP总是先跟LDAP认证,审核用户名密码,然后再和STS通信

  • 应用程序最后会使用临时访问权限访问AWS的资源(一个使用AssumeRole获取临时凭证的示例见下)
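下面是用 boto3 调用AssumeRole获取临时凭证并用它访问另一个账号资源的简单示意(角色ARN、账号ID均为假设的占位值):

import boto3

sts = boto3.client("sts")

# 代入另一个账号中的角色,获取有效期1小时的临时凭证
resp = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/ProdS3Access",
    RoleSessionName="dev-to-prod",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# 用临时凭证创建新的会话,访问目标账号中的S3资源
prod = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(prod.client("s3").list_buckets()["Buckets"])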

另外,STS和微软AD域集成的时候,可以做到用户使用自己企业LDAP目录的AD账号密码来登录AWS管理控制台。其中的Identity Broker位置变成了ADFS (Active Directory Federation Services)。

知识体系参考了http://www.cloudbin.cn/,感谢!

模拟401题

QUESTION 1
A solutions architect is designing a new service behind Amazon API Gateway.
The request patterns for the service will be unpredictable and can change suddenly from 0
requests to over 500 per second.
The total size of the data that needs to be persisted in a backend database is currently less than
1 GB with unpredictable future growth Data can be queried using simple key-value requests.
Which combination of AWS services would meet these requirements? (Select TWO )
一位解决方案架构师正在设计一项位于Amazon API Gateway后面的新服务。该服务的请求模式不可预测,可能突然从0增长到每秒500个以上。目前需要保留在后端数据库中的数据总大小小于1 GB,未来增长不可预测。可以使用简单的键值请求查询数据。
哪种AWS服务组合可以满足这些要求?(选择两个)
A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. MySQL-compatible Amazon Aurora
Answer: B, C
QUESTION 2
A solutions architect needs to design a managed storage solution for a company's application
that includes high-performance machine learning,
This application runs on AWS Fargate and the connected storage needs to have concurrent
access to files and deliver high performance.
Which storage option should the solutions architect recommend?
解决方案架构师需要为公司的应用程序设计一个托管存储解决方案,其中包括高性能的机器学习。该应用程序在AWS Fargate上运行,并且连接的存储需要并发访问文件并提供高性能。解决方案架构师应建议哪种存储选项?
A. Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to
communicate with Amazon S3.
B. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to
communicate with FSx for Lustre.
C. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that
allows Fargate to communicate with Amazon EFS.
D. Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an
IAM role that allows Fargate to communicate with Amazon EBS.
Answer: B
A. 为应用程序创建一个Amazon S3存储桶,并为Fargate建立一个可与Amazon S3通信的IAM角色。 B. 创建Amazon FSx for Lustre文件共享,并建立允许Fargate与FSx for Lustre通信的IAM角色。 C. 创建一个Amazon Elastic File System(Amazon EFS)文件共享,并建立一个允许Fargate与Amazon EFS通信的IAM角色。 D. 为应用程序创建Amazon Elastic Block Store(Amazon EBS)卷,并建立一个允许Fargate与Amazon EBS通信的IAM角色。

Explanation: https://aws.amazon.com/efs/

并发访问文件+交付高性能 → Amazon FSx for Lustre:为快速处理工作负载而优化的高性能文件系统。Lustre是一种流行的开源并行文件系统,支持从数千个计算实例并发访问同一文件或目录。Amazon FSx与AWS身份和访问管理(IAM)集成,这意味着您可以控制AWS IAM用户和组可以对文件系统执行哪些管理操作(例如创建和删除文件系统)。您还可以为Amazon FSx资源添加标签,并基于这些标签控制IAM用户和组可以执行的操作。

QUESTION 3
A company has a multi-tier application that runs six front-end web servers in an Amazon EC2
Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB).
A solutions architect needs to modify the infrastructure to be highly available without modifying
the application.
Which architecture should the solutions architect choose that provides high availability?
A. Create an Auto Scaling group that uses three instances across each of two Regions
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones
C. Create an Auto Scaling template that can be used to quickly create more instances in another
Region
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance
traffic to the web tier
Answer: B
一家公司拥有一个多层应用程序,该应用程序在应用程序负载均衡器(ALB)后面的单个可用区中的Amazon EC2 Auto Scaling组内运行六个前端Web服务器。解决方案架构师需要在不修改应用程序的情况下,将基础架构改造为高可用。解决方案架构师应该选择哪种架构来提供高可用性? A. 创建一个Auto Scaling组,在两个区域(Region)中各使用三个实例 B. 修改Auto Scaling组,在两个可用区中各使用三个实例 C. 创建一个Auto Scaling模板,用于在另一个区域中快速创建更多实例 D. 将Amazon EC2实例前面的ALB更改为轮询(round-robin)配置,以均衡到Web层的流量

Explanation: High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple availability zones. The ASG will automatically balance the load so you don’t actually need to specify the instances per AZ. The architecture for the web tier will look like the one below:

QUESTION 4
A company runs an internal browser-based application The application runs on Amazon EC2
instances behind an Application Load Balancer.
The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones.
The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2
instances overnight. Staff are complaining that the application is very slow when the day begins,
although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a
minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown
period
C. Implement a target tracking action triggered at a lower CPU threshold and decrease the cooldown
period
D.
Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before
the office opens
Answer: C

一家公司运行一个基于浏览器的内部应用程序。该应用程序在Application Load Balancer后面的Amazon EC2实例上运行，实例属于跨多个可用区的Amazon EC2 Auto Scaling组。Auto Scaling组在工作时间内最多扩展到20个实例，但在夜间缩减到2个实例。员工抱怨一天开始时应用程序非常缓慢，尽管到上午过半时运行良好。应如何更改扩展方式，以解决员工的投诉并将成本降至最低?

A. 实施计划操作，在办公室开放前不久将所需容量设置为20
B. 实施以较低CPU阈值触发的逐步扩展操作，并缩短冷却时间
C. 实施以较低CPU阈值触发的目标跟踪操作，并缩短冷却时间
D. 实施计划操作，在办公室开放前不久将最小和最大容量设置为20

QUESTION 5
A solutions architect is designing a solution to access a catalog of images and provide users with
the ability to submit requests to customize images.
Image customization parameters will be in any request sent to an AWS API Gateway API.
The customized image will be generated on demand, and users will receive a link they can click
to view or download their customized image.
The solution must be highly available for viewing and customizing images
What is the MOST cost-effective solution to meet these requirements?
解决方案架构师正在设计一种解决方案,以访问图像目录并为用户提供
提交自定义图像请求的能力。
图像定制参数将存在于发送到AWS API Gateway API的任何请求中。
定制图像将按需生成,用户将收到一个可以单击的链接。
查看或下载其自定义图像。
该解决方案必须高度可用以查看和自定义图像
满足这些要求的最经济有效的解决方案是什么?
A. Use Amazon EC2 instances to manipulate the original image into the requested customization.
Store the original and manipulated images in Amazon S3.
Configure an Elastic Load Balancer in front of the EC2 instances.
B.
Use AWS Lambda to manipulate the original image to the requested customization.
Store the original and manipulated images in Amazon S3.
Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customization.
Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB.
Configure an Elastic Load Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customization.
Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB.
Configure an Amazon CloudFront distribution with the S3 bucket as the origin.

A.使用Amazon EC2实例将原始图像处理为请求的自定义。
将原始图像和经过处理的图像存储在Amazon S3中。
在EC2实例之前配置Elastic Load Balancer。
B.
使用AWS Lambda将原始图像处理为请求的自定义。
将原始图像和经过处理的图像存储在Amazon S3中。
使用S3存储桶作为源配置Amazon CloudFront分配。
C.使用AWS Lambda将原始图像处理为请求的自定义。
将原始图像存储在Amazon S3中,将经过处理的图像存储在Amazon DynamoDB中。
在Amazon EC2实例之前配置Elastic Load Balancer。
D.使用Amazon EC2实例将原始图像处理为请求的自定义。
将原始图像存储在Amazon S3中,将经过处理的图像存储在Amazon DynamoDB中。
使用S3存储桶作为源配置Amazon CloudFront分配。
Answer: B

Explanation: All solutions presented are highly available. The key requirement that must be satisfied is that the solution should be cost-effective and you must choose the most cost-effective option. Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these require ongoing costs even when they’re not used. Instead, a fully serverless solution should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use for these requirements.

提出的所有解决方案都具有高可用性。必须满足的关键要求是解决方案应具有成本效益，且必须选择最具成本效益的选项。因此，最好排除Amazon EC2和ELB这类服务，因为即使不使用，它们也会产生持续成本。相反，应使用完全无服务器的解决方案。AWS Lambda、Amazon S3和CloudFront是满足这些要求的最佳服务。

QUESTION 6
A bicycle sharing company is developing a multi-tier architecture to track the location of its
bicycles during peak operating hours.
The company wants to use these data points in its existing analytics platform A solutions architect
must determine the most viable multi-tier option to support this architecture.
The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
一家自行车共享公司正在开发一种多层体系结构，以在高峰运营时段跟踪其自行车的位置。
该公司希望在现有的分析平台中使用这些数据点。解决方案架构师
必须确定最可行的多层选项以支持此体系结构。
数据点必须可通过REST API访问。
哪项操作符合存储和检索位置数据的这些要求?
A. Use Amazon Athena with Amazon S3
B. Use Amazon API Gateway with AWS Lambda
C. Use Amazon QuickSight with Amazon Redshift
D. Use Amazon API Gateway with Amazon Kinesis Data Analytics
A.将Amazon Athena与Amazon S3结合使用
B.将Amazon API Gateway与AWS Lambda一起使用
C.将Amazon QuickSight与Amazon Redshift结合使用
D.将Amazon API Gateway与Amazon Kinesis Data Analytics结合使用
Answer: D

Explanation: https://aws.amazon.com/kinesis/data-analytics/

QUESTION 7
A solutions architect is deploying a distributed database on multiple Amazon EC2 instances.
The database stores all data on multiple instances so it can withstand the loss of an instance.
The database requires block storage with latency and throughput to support several million
transactions per second per server.
Which storage solution should the solutions architect use?
解决方案架构师正在多个Amazon EC2实例上部署分布式数据库。
数据库将所有数据存储在多个实例上，因此可以承受一个实例的丢失。
数据库需要低延迟、高吞吐量的块存储，以支持每台服务器每秒数百万次事务。
解决方案架构师应使用哪种存储解决方案?

A. Amazon EBS
B. Amazon EC2 instance store
C. Amazon EFS
D. Amazon S3
A.亚马逊EBS
B.Amazon EC2实例存储
C. Amazon EFS
D.亚马逊S3
Answer: B

Explanation: An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers

实例存储为您的实例提供临时的块级存储。该存储位于物理上连接到主机的磁盘上。实例存储非常适合临时存储经常更改的信息，例如缓冲区、缓存、暂存数据和其他临时内容，或用于在一组实例间复制的数据，例如Web服务器的负载均衡池。

QUESTION 8
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2
instances in a VPC do not traverse the internet.
What should the solutions architect do to accomplish this? (Select TWO )
解决方案架构师需要确保VPC中的Amazon EC2实例对Amazon DynamoDB的API调用不经过Internet。
解决方案架构师应该怎么做才能做到这一点?(选择两个)

A. Create a route table entry for the endpoint
B. Create a gateway endpoint for DynamoDB
C. Create a new DynamoDB table that uses the endpoint
D. Create an ENI for the endpoint in each of the subnets of the VPC
E. Create a security group entry in the default security group to provide access
A.为端点创建一个路由表条目
B.为DynamoDB创建网关端点
C.创建一个使用端点的新DynamoDB表
D.在VPC的每个子网中为端点创建一个ENI
E.在默认安全组中创建一个安全组条目以提供访问权限
Answer: AB

Explanation: Amazon DynamoDB and Amazon S3 support gateway endpoints, not interface endpoints. With a gateway endpoint you create the endpoint in the VPC, attach a policy allowing access to the service, and then specify the route table to create a route table entry in.

Amazon DynamoDB和Amazon S3支持网关终端节点，不支持接口终端节点。使用网关终端节点时，您在VPC中创建终端节点，附加允许访问该服务的策略，然后指定要在其中创建路由条目的路由表。
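
下面是一个用 boto3 创建 DynamoDB 网关终端节点的最小示意示例（其中的区域、VPC ID 和路由表 ID 均为假设的占位符，需替换为实际值）：

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 为 DynamoDB 创建网关终端节点，并把路由条目写入指定的路由表
# （vpc-xxx、rtb-xxx 为占位符，需替换为实际的资源 ID）
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```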

QUESTION 9
A solutions architect is designing a web application that will run on Amazon EC2 instances behind
an Application Load Balancer (ALB).
The company strictly requires that the application be resilient against malicious internet activity
and attacks, and protect against new common vulnerabilities and exposures.
What should the solutions architect recommend?
解决方案架构师正在设计一个Web应用程序，它将运行在应用程序负载均衡器(ALB)后面的Amazon EC2实例上。
公司严格要求该应用程序能够抵御恶意互联网活动和攻击，并防范新出现的常见漏洞和风险(CVE)。
解决方案架构师应该建议什么?

A. Leverage Amazon CloudFront with the ALB endpoint as the origin
B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB
C. Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are
blocked
D. Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2
instances
A.以ALB终端节点为源利用Amazon CloudFront
B.为AWSWAF部署适当的托管规则并将其与ALB关联
C.订阅AWS Shield Advanced并确保常见漏洞和风险
受阻
D.配置网络ACL和安全组以仅允许端口80和443访问EC2
实例

Answer: C

Explanation: https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf

为了保护客户在AWS上所运行的资源，AWS在2016年re:Invent开发者大会上推出了AWS Shield服务。AWS Shield是一种托管式DDoS防护服务，提供持续检测和自动内联缓解功能，能够尽可能缩短应用程序的停机时间和延迟，因此您不需要联系AWS Support就能获得DDoS防护。AWS Shield有两个层级，分别为Standard和Advanced。所有AWS客户都可以使用AWS Shield Standard的自动防护功能，不需要额外支付费用。

AWS Shield Standard可以防护大多数以网站或应用程序为攻击对象的网络和传输层DDoS攻击。将 AWS Shield Standard与 Amazon Cloudfront和 Amazon Route53一起使用时,您将获得针对所有已知基础设施(第3层和第4层)攻击的全面可用性保护。

对于以在Amazon Elastic Compute Cloud(EC2)、Elastic Load Balancing(ELB)、Amazon CloudFront、AWS Global Accelerator和Amazon Route 53资源上运行的应用程序为目标的攻击，如果想要获得更高级别的防护，您可以使用AWS Shield Advanced。除了Standard版本提供的常见网络和传输层防护之外，AWS Shield Advanced还可以针对复杂的大型DDoS攻击提供额外的检测和缓解服务，让您能够近实时查看各种攻击。使用AWS Shield Advanced，您还可以联系随时待命的AWS DDoS响应团队(DRT)，帮您降低攻击所带来的影响。

此外，针对应用层资源的防护，您可以使用AWS WAF这一Web应用程序防火墙，它可帮助保护您的Web应用程序或API免遭常见Web漏洞的攻击。AWS WAF允许您创建防范常见攻击模式(例如SQL注入或跨站点脚本)的安全规则，以及滤除您定义的特定流量模式的规则，从而让您可以控制流量到达您的应用程序的方式。您可以通过AWS WAF的托管规则快速入门，这些托管规则由AWS或AWS Marketplace卖家预配置，可以解决OWASP十大安全风险等问题。此外，这些规则会随新问题的出现定期更新。AWS WAF包含功能全面的API，借此您可以让安全规则的创建、部署和维护实现自动化。

QUESTION 10
A company has been storing analytics data in an Amazon RDS instance for the past few years.
The company asked a solutions architect to find a solution that allows users to access this data
using an API.
The expectation is that the application will experience periods of inactivity but could receive
bursts of traffic within seconds.
Which solution should the solutions architect suggest?
过去几年，一家公司一直将分析数据存储在Amazon RDS实例中。
该公司要求解决方案架构师找到一种允许用户通过API访问此数据的解决方案。
预期该应用程序会经历一段时间的不活动状态，但可能会在几秒钟内收到突发的大量流量。
解决方案架构师应建议哪种解决方案?

A. Set up an Amazon API Gateway and use Amazon ECS.
B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.
C. Set up an Amazon API Gateway and use AWS Lambda functions
D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling
A. 设置一个Amazon API Gateway并使用Amazon ECS
B. 设置一个Amazon API Gateway并使用AWS Elastic Beanstalk
C. 设置一个Amazon API Gateway并使用AWS Lambda函数
D. 设置一个Amazon API Gateway并将Amazon EC2与Auto Scaling一起使用

Answer: C Explanation: This question is simply asking you to work out the best compute service for the stated requirements. The key requirements are that the compute service should be suitable for a workload that can range quite broadly in demand from no requests to large bursts of traffic. AWS Lambda is an ideal solution as you pay only when requests are made and it can easily scale to accommodate the large bursts in traffic. Lambda works well with both API Gateway and Amazon RDS.

这个问题只是要求您针对所述要求选出最佳的计算服务。关键要求是该计算服务应适合需求波动很大的工作负载：从没有请求到突发的大量流量。AWS Lambda是理想的解决方案，因为您仅在发出请求时才付费，并且它可以轻松扩展以应对流量的大幅突增。Lambda与API Gateway和Amazon RDS都能很好地配合使用。

QUESTION 11
A company's web application is using multiple Linux Amazon EC2 instances and storing data on
Amazon EBS volumes.
The company is looking for a solution to increase the resiliency of the application in case of a
failure and to provide storage that complies with atomicity, consistency, isolation, and durability
(ACID).
What should a solutions architect do to meet these requirements?

一家公司的Web应用程序正在使用多个Linux Amazon EC2实例，并将数据存储在Amazon EBS卷上。
该公司正在寻找一种解决方案，以便在发生故障时提高应用程序的弹性，
并提供符合原子性、一致性、隔离性和持久性(ACID)的存储。
解决方案架构师应该怎么做才能满足这些要求?

A. Launch the application on EC2 instances in each Availability Zone.
Attach EBS volumes to each EC2 instance.
B.
Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Mount an instance store on each EC2 instance.
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Store data on Amazon EFS and mount a target on each instance.
D.
Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-lA).
A. 在每个可用区中的EC2实例上启动应用程序。
将EBS卷附加到每个EC2实例。
B. 使用跨多个可用区的Auto Scaling组创建一个Application Load Balancer。
在每个EC2实例上挂载一个实例存储。
C. 使用跨多个可用区的Auto Scaling组创建一个Application Load Balancer。
将数据存储在Amazon EFS上，并在每个实例上挂载一个目标。
D. 使用跨多个可用区的Auto Scaling组创建一个Application Load Balancer。
使用Amazon S3单区不频繁访问(S3 One Zone-IA)存储数据。
Answer: C

Explanation: To increase the resiliency of the application the solutions architect can use Auto Scaling groups to launch and terminate instances across multiple availability zones based on demand. An application load balancer (ALB) can be used to direct traffic to the web application running on the EC2 instances. Lastly, the Amazon Elastic File System (EFS) can assist with increasing the resilience of the application by providing a shared file system that can be mounted by multiple EC2 instances from multiple availability zones.

为了提高应用程序的弹性，解决方案架构师可以使用Auto Scaling组根据需求跨多个可用区启动和终止实例。应用程序负载均衡器(ALB)可用于将流量定向到运行在EC2实例上的Web应用程序。最后，Amazon Elastic File System(EFS)提供了一个可由多个可用区中的多个EC2实例同时挂载的共享文件系统，从而有助于提高应用程序的弹性。

AWS S3对于静态页面的托管、多媒体分发、版本管理、大数据分析、数据存档来说都非常有用。S3可以和AWS CloudFront结合使用而达到更快的上传和下载速度。

AWS EBS是可以用来做数据库或托管应用程序的持续性文件系统,EBS具有很高的IO读写速度并且即插即用。

相比前面两种存储,AWS EFS是比较新的一项服务。它提供了可以在多个EC2实例之间共享的网络文件系统,功能类似于NAS设备。可以用EFS来处理大数据分析、多媒体处理和内容管理。

下面是三种系统的详细对比:

| 特性 | Amazon S3 | EBS | EFS |
| --- | --- | --- | --- |
| 存储类型 | 对象存储 | 块存储 | 文件存储 |
| 存储大小 | 没有限制 | 最大为16TB | 没有限制 |
| 单个文件大小限制 | 0字节~5TB | 没有限制 | 最大52TB |
| IO吞吐量 | 支持multipart上传；使用single object upload时单个文件大小限制为5GB | 可以选择HDD或SSD的磁盘类型，以提供不同的IO | 默认3GB |
| 访问 | 能通过因特网访问 | 只能被单个EC2实例访问 | 可以被上千个EC2实例同时访问 |
| 可用性 | 99.99% | 99.99% | 高度可用(官方没有公布相关数据) |
| 速度比较 | 最慢 | 最快 | 中等 |
| 价格 | 最便宜 | 中等 | 最贵 |

在真正采用某一种AWS存储类型的时候,需要考虑到上面的这些参数,以及真实的使用场景。每一种存储类型都有自己最适用的使用场景,都能最大化地发挥自己优势。

QUESTION 12
A company has an application that calls AWS Lambda functions.
A recent code review found database credentials stored in the source code.
The database credentials need to be removed from the Lambda source code.
The credentials must then be securely stored and rotated on an ongoing basis to meet security
policy requirements.
What should a solutions architect recommend to meet these requirements?

一家公司拥有一个调用AWS Lambda函数的应用程序。
最近的代码审查发现源代码中存储了数据库凭据。
需要从Lambda源代码中删除这些数据库凭据，
然后必须安全地存储凭据并持续进行轮换，以符合安全策略要求。
解决方案架构师应建议什么以满足这些要求?

A. Store the password in AWS CloudHSM.
Associate the Lambda function with a role that can retrieve the password from CloudHSM given
its key ID.
B. Store the password in AWS Secrets Manager.
Associate the Lambda function with a role that can retrieve the password from Secrets Manager
given its secret ID.
C. Move the database password to an environment variable associated with the Lambda function.
Retrieve the password from the environment variable upon execution.
D. Store the password in AWS Key Management Service (AWS KMS).
Associate the Lambda function with a role that can retrieve the password from AWS KMS given
its key ID.
A.将密码存储在AWS CloudHSM中.
将Lambda函数与可以从给定的CloudHSM检索密码的角色相关联
它的密钥ID.
B.将密码存储在AWS Secrets Manager中.
将Lambda函数与可以从SecretsManager检索密码的角色相关联
给出其秘密ID.
C.将数据库密码移至与Lambda函数关联的环境变量.
执行时从环境变量中检索密码.
D.将密码存储在AWS Key Management Service ( AWS KMS)中.
将Lambda函数与可以从给定的AWSKMS检索密码的角色相关联
它的密钥ID.
Answer: B

AWS Secrets Manager 有什么用途?

您可以使用 AWS Secrets Manager 集中存储、检索、控制对密钥的访问、轮换、审核和监控。

Secrets Manager 适用于寻求安全且可扩展的方法来存储和管理密钥的 IT 管理员。负责满足法规和合规性要求的安全管理员可以使用 Secrets Manager 来监控和轮换密钥,而不会影响应用程序。希望在其应用程序中替换硬编码密钥的开发人员可以通过编程方式从 Secrets Manager 检索密钥。
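
下面是一个在 Lambda 函数中通过 Secrets Manager 读取数据库凭据的示意代码（密钥名称 prod/app/db 以及凭据的 JSON 结构均为假设，仅用于说明如何避免在源代码中硬编码凭据）：

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # 运行时从 Secrets Manager 取回凭据，而不是硬编码在源代码中
    resp = secrets.get_secret_value(SecretId="prod/app/db")  # 密钥名称为假设值
    return json.loads(resp["SecretString"])

def lambda_handler(event, context):
    creds = get_db_credentials()
    # 此处使用 creds["username"] / creds["password"] 建立数据库连接（省略）
    return {"statusCode": 200}
```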

QUESTION 13
A static website is hosted in an Amazon S3 bucket. A solutions architect
needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?
A. Enable Amazon S3 versioning
B. Enable Amazon S3 Intelligent-Tiering.
C. Enable an Amazon S3 lifecycle policy
D. Enable Amazon S3 cross-Region replication.
Answer: A

问题13
静态网站托管在Amazon S3存储桶中，解决方案架构师需要确保在意外删除的情况下可以恢复数据。
哪个动作可以完成此任务?
A.启用AmazonS3版本控制
B.启用AmazonS3智能分层.
C.启用AmazonS3生命周期策略
D.启用Amazon S3跨区域复制

Explanation: Object versioning is a means of keeping multiple variants of an object in the same Amazon S3 bucket. Versioning provides the ability to recover from both unintended user actions and application failures. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket.
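
下面用 boto3 演示如何为存储桶开启版本控制（桶名为假设的占位符）：

```python
import boto3

s3 = boto3.client("s3")

# 为托管静态网站的存储桶开启版本控制（桶名为假设值）
s3.put_bucket_versioning(
    Bucket="my-static-site-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```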

QUESTION 14
A company is managing health records on-premises.
The company must keep these records indefinitely, disable any modifications to the records once
they are stored, and granularly audit access at all levels.
The chief technology officer (CTO) is concerned because there are already millions of records not being used by any application, and the current infrastructure is running out of space.
The CTO has requested a solutions architect design a solution to move existing data and support
future records.
Which services can the solutions architect recommend to meet these requirements?
一家公司正在本地管理健康记录。公司必须无限期地保留这些记录，记录一经存储即禁止任何修改，并在所有级别对访问进行细粒度的审计。首席技术官(CTO)担心已有数百万条记录未被任何应用程序使用，并且当前的基础架构空间不足。CTO已要求解决方案架构师设计一个解决方案，以迁移现有数据并支持将来的记录。解决方案架构师可以推荐哪些服务来满足这些要求?
A. Use AWS DataSync to move existing data to AWS.
Use Amazon S3 to store existing and new data.
Enable Amazon S3 object lock and enable AWS CloudTrail with data events.
B. Use AWS Storage Gateway to move existing data to AWS.
Use Amazon S3 to store existing and new data.
Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
C. Use AWS DataSync to move existing data to AWS.
Use Amazon S3 to store existing and new data.
Enable Amazon S3 object lock and enable AWS CloudTrail with management events.
D. Use AWS Storage Gateway to move existing data to AWS.
Use Amazon Elastic Block Store (Amazon EBS) to store existing and new data.
Enable Amazon S3 object lock and enable Amazon S3 server access logging,
A. 使用AWS DataSync将现有数据移至AWS。使用Amazon S3存储现有数据和新数据。启用Amazon S3对象锁定，并为AWS CloudTrail启用数据事件。
B. 使用AWS Storage Gateway将现有数据移至AWS。使用Amazon S3存储现有数据和新数据。启用Amazon S3对象锁定，并为AWS CloudTrail启用管理事件。
C. 使用AWS DataSync将现有数据移至AWS。使用Amazon S3存储现有数据和新数据。启用Amazon S3对象锁定，并为AWS CloudTrail启用管理事件。
D. 使用AWS Storage Gateway将现有数据移至AWS。使用Amazon Elastic Block Store(Amazon EBS)存储现有数据和新数据。启用Amazon S3对象锁定并启用Amazon S3服务器访问日志记录。
Answer: A

Explanation: Keyword: move existing data and support future records + granular audit access at all levels. Use AWS DataSync to migrate existing data to Amazon S3, and then use the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. Need a solution to move existing data and support future records = AWS DataSync should be used for migration. Need granular audit access at all levels = data events should be enabled in CloudTrail; management events are enabled by default.
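
作为参考，下面的示意代码展示了如何在一个已有的 CloudTrail trail 上启用 S3 数据事件（trail 名称与桶 ARN 均为假设值）：

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# 在已有的 trail 上开启 S3 数据事件（对象级 API 调用）的记录
cloudtrail.put_event_selectors(
    TrailName="records-trail",  # 假设的 trail 名称
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # ARN 以 "/" 结尾表示记录该桶内所有对象的数据事件
                    "Values": ["arn:aws:s3:::health-records-bucket/"],
                }
            ],
        }
    ],
)
```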

Storage Gateway简介 AWS Storage Gateway 是一种具有无缝本地集成和优化数据传输的混合云存储方案。

你本地数据中心内的服务器可以通过 AWS Storage Gateway 连接访问 Amazon S3、Amazon Glacier、Amazon EBS 等 AWS 存储服务来进行备份、存档、灾难恢复、数据迁移等等。

QUESTION 15
A company currently operates a web application backed by an Amazon RDS MySQL database.
It has automated backups that are run daily and are not encrypted.
A security audit requires future backups to be encrypted and the unencrypted backups to be
destroyed.
The company will make at least one encrypted backup before destroying the old backups.
What should be done to enable encryption for future backups?
A. Enable default encryption for the Amazon S3 bucket where backups are stored
B. Modify the backup section of the database configuration to toggle the Enable encryption check
box.
C. Create a snapshot of the database.
Copy it to an encrypted snapshot.
Restore the database from the encrypted snapshot.
D. Enable an encrypted read replica on RDS for MySQL.
Promote the encrypted read replica to primary.
Remove the original database instance.
Answer: C
一家公司当前正在运行由Amazon RDS MySQL数据库支持的Web应用程序。它具有每天运行且未加密的自动备份。安全审核要求对将来的备份进行加密，并销毁未加密的备份。在销毁旧备份之前，公司将至少进行一次加密备份。应如何做才能为以后的备份启用加密?
A. 为存储备份的Amazon S3存储桶启用默认加密
B. 修改数据库配置的“备份”部分以勾选“启用加密”复选框
C. 创建数据库快照。将其复制为加密的快照。从加密的快照还原数据库。
D. 在RDS for MySQL上启用加密的只读副本。将加密的只读副本提升为主数据库。删除原始数据库实例。
答案: C
说明: Amazon RDS使用快照进行备份。仅当数据库已加密时，快照才会在创建时加密，而只能在首次创建数据库时为其选择加密。在这种情况下，数据库以及快照均未加密。但是，您可以创建快照的加密副本，并使用该快照进行还原，这会创建一个启用了加密的新数据库实例。从那时起，所有快照都将启用加密。

Explanation: Amazon RDS uses snapshots for backup. Snapshots are encrypted when created only if the database is encrypted and you can only select encryption for the database when you first create it. In this case the database, and hence the snapshots, ad unencrypted. However, you can create an encrypted copy of a snapshot. You can restore using that snapshot which creates a new DB instance that has encryption enabled. From that point on encryption will be enabled for all snapshots.
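
下面是按上述思路（快照 → 加密复制 → 从加密快照还原）的 boto3 示意代码，其中的实例/快照标识符和 KMS 密钥别名均为假设值：

```python
import boto3

rds = boto3.client("rds")

# 1. 为现有的未加密数据库实例创建快照
rds.create_db_snapshot(
    DBInstanceIdentifier="mydb",
    DBSnapshotIdentifier="mydb-unencrypted-snap",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-unencrypted-snap"
)

# 2. 复制快照，并在复制时启用加密（指定 KMS 密钥）
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
    TargetDBSnapshotIdentifier="mydb-encrypted-snap",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-encrypted-snap"
)

# 3. 从加密快照还原出一个新的（加密的）数据库实例
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-snap",
)
```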

QUESTION 16
A client reports that they want to see an audit log of any changes made to AWS resources in their
account.
What can the client do to achieve this?
A. Set up Amazon CloudWatch monitors on services they own
B. Enable AWS CloudTrail logs to be delivered to an Amazon S3 bucket
C. Use Amazon CloudWatch Events to parse logs
D. Use AWS OpsWorks to manage their resources
Answer: B
Explanation:
A CloudTrail trail can be created which delivers log files to an Amazon S3 bucket.

客户报告说，他们希望查看其账户中对AWS资源所做的任何更改的审核日志。
客户可以做些什么来实现这一目标?
A.在他们拥有的服务上设置Amazon CloudWatch监视器
B.启用将AWS CloudTrail日志传递到Amazon S3存储桶
C.使用Amazon CloudWatch Events解析日志
D.使用AWS OpsWorks来管理其资源
答案:B

说明: 可以创建CloudTrail跟踪,该跟踪将日志文件传递到Amazon S3存储桶。

AWS CloudTrail 是一项支持对您的 AWS 账户进行监管、合规性检查、操作审核和风险审核的服务。借助 CloudTrail,您可以记录日志、持续监控并保留与整个 AWS 基础设施中的操作相关的账户活动。CloudTrail 提供 AWS 账户活动的事件历史记录,这些活动包括通过 AWS 管理控制台、AWS 开发工具包、命令行工具和其他 AWS 服务执行的操作。此事件历史记录可以简化安全性分析、资源更改跟踪和问题排查工作。 此外,您可以使用 CloudTrail 来检测 AWS 账户中的异常活动。这些功能可帮助您简化分析和问题排查。
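
下面用 boto3 演示创建一个把日志投递到 S3 存储桶的 trail 并开始记录（trail 名称与桶名为假设值，且桶需预先配置允许 CloudTrail 写入的桶策略）：

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# 创建一个把日志投递到 S3 存储桶的 trail，并开始记录
cloudtrail.create_trail(
    Name="audit-trail",                 # 假设的 trail 名称
    S3BucketName="my-cloudtrail-logs",  # 假设的桶名，需预先配置桶策略
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="audit-trail")
```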

QUESTION 17
An application running in a private subnet accesses an Amazon DynamoDB table. There is a
security requirement that the data never leave the AWS network.
How should this requirement be met?
A. Configure a network ACL on DynamoDB to limit traffic to the private subnet
B. Enable DynamoDB encryption at rest using an AWS KMS key
C. Add a NAT gateway and configure the route table on the private subnet
D. Create a VPC endpoint for DynamoDB and configure the endpoint policy
Answer: D
在专用子网中运行的应用程序访问Amazon DynamoDB表。
安全要求规定数据绝不能离开AWS网络。
应如何满足此要求?
A.在DynamoDB上配置网络ACL,以限制到专用子网的流量
B.使用AWS KMS密钥启用静态DynamoDB加密
C.添加一个NAT网关并在专用子网上配置路由表
D.为DynamoDB创建VPC端点并配置端点策略
答案:D

Explanation: Hint: Private Subnet = VPC Endpoint

说明: 提示:专用子网= VPC端点

QUESTION 18
A three-tier application is being created to host small news articles. The application is expected to
serve millions of users. When breaking news occurs, the site must handle very large spikes in
traffic without significantly impacting database performance.
Which design meets these requirements while minimizing costs?
A. Use Auto Scaling groups to increase the number of Amazon EC2 instances delivering the web
application
B. Use Auto Scaling groups to increase the size of the Amazon RDS instances delivering the
database
C. Use Amazon DynamoDB strongly consistent reads to adjust for the increase in traffic
D. Use Amazon DynamoDB Accelerator (DAX) to cache read operations to the database
Answer: D
正在创建一个三层应用程序来托管简短的新闻报道。该应用程序预计
为数百万用户提供服务。当发生突发新闻时，该网站必须能够处理非常大的流量高峰，
而不会显著影响数据库性能。
哪种设计可以在最小化成本的同时满足这些要求?
A.使用Auto Scaling组来增加交付Web的Amazon EC2实例的数量
应用
B.使用Auto Scaling组来增加交付的Amazon RDS实例的大小
数据库
C.使用Amazon DynamoDB高度一致的读取来调整流量的增长
D.使用Amazon DynamoDB Accelerator(DAX)将读取操作缓存到数据库
答案:D

Explanation: DAX has an in-memory cache. If breaking news happens, the majority of users searching will look for exactly the same thing. That being said, requests will query the memory cache first and will not need to fetch the data from the DB directly.

说明: DAX具有内存缓存。如果发生突发新闻，大多数搜索用户都会查找完全相同的内容。也就是说，请求将首先查询内存缓存，而不需要直接从数据库中获取数据。

QUESTION 19
During a review of business applications, a Solutions Architect identifies a critical application with
a relational database that was built by a business user and is running on the user's desktop. To
reduce the risk of a business interruption, the Solutions Architect wants to migrate the application
to a highly available, multi-tiered solution in AWS.
What should the Solutions Architect do to accomplish this with the LEAST amount of disruption to
the business?
A. Create an import package of the application code for upload to AWS Lambda, and include a
function to create another Lambda function to migrate data into an Amazon RDS database
B. Create an image of the user's desktop, migrate it to Amazon EC2 using VM Import, and place the
EC2 instance in an Auto Scaling group
C. Pre-stage new Amazon EC2 instances running the application code on AWS behind an
Application Load Balancer and an Amazon RDS Multi-AZ DB instance
D. Use AWS DMS to migrate the backend database to an Amazon RDS Multi-AZ DB instance.
Migrate the application code to AWS Elastic Beanstalk
Answer: D
在审查业务应用程序时，解决方案架构师发现了一个关键应用程序，
它使用由业务用户构建、并在该用户桌面上运行的关系数据库。
为降低业务中断的风险，解决方案架构师希望将该应用程序
迁移到AWS中高度可用的多层解决方案。
解决方案架构师应该怎么做，才能在对业务干扰最小的情况下实现这一目标?
A. 创建应用程序代码的导入包以上传到AWS Lambda，并包含一个函数来创建另一个Lambda函数，以将数据迁移到Amazon RDS数据库
B. 创建用户桌面的映像，使用VM Import将其迁移到Amazon EC2，然后将该EC2实例放入Auto Scaling组
C. 预先准备新的Amazon EC2实例，在应用程序负载均衡器和Amazon RDS Multi-AZ数据库实例后面运行应用程序代码
D. 使用AWS DMS将后端数据库迁移到Amazon RDS Multi-AZ数据库实例，并将应用程序代码迁移到AWS Elastic Beanstalk
答案: D

AWS Database Migration Service 可帮助您快速并安全地将数据库迁移至 AWS。源数据库在迁移过程中可继续正常运行,从而最大程度地减少依赖该数据库的应用程序的停机时间。AWS Database Migration Service 可以在广泛使用的开源商业数据库之间迁移您的数据。

Multi-AZ参照RDS部分,高可用。

https://www.cloudcared.cn/1892.html

QUESTION 20
A company has thousands of files stored in an Amazon S3 bucket that has a well-defined access
pattern. The files are accessed by an application multiple times a day for the first 30 days. Files
are rarely accessed within the next 90 days. After that, the files are never accessed again. During
the first 120 days, accessing these files should never take more than a few seconds.
Which lifecycle policy should be used for the S3 objects to minimize costs based on the access
pattern?
A. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage for the first 30 days. Then
move the files to the GLACIER storage class for the next 90 days. Allow the data to expire after
that.
B. Use Amazon S3 Standard storage for the first 30 days. Then move the files to Amazon S3
Standard-Infrequent Access (S3 Standard-IA) for the next 90 days. Allow the data to expire after
that.
C. Use Amazon S3 Standard storage for the first 30 days. Then move the files to the GLACIER storage
class for the next 90 days. Allow the data to expire after that.
D. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the first 30 days. After that,
move the data to the GLACIER storage class, where it will be deleted automatically.

Answer: B

一家公司在一个Amazon S3存储桶中存储了数千个文件，这些文件有明确的访问模式：在前30天内，应用程序每天多次访问这些文件；
在接下来的90天内很少被访问；之后则不再被访问。
在最初的120天内，访问这些文件的时间绝不应超过几秒钟。
应该为这些S3对象使用哪种生命周期策略，才能基于该访问模式将成本降到最低?
A. 前30天使用Amazon S3标准不频繁访问(S3 Standard-IA)存储，然后在接下来的90天内将文件转入GLACIER存储类，之后让数据过期。
B. 前30天使用Amazon S3 Standard存储，然后在接下来的90天内将文件转入S3标准不频繁访问(S3 Standard-IA)，之后让数据过期。
C. 前30天使用Amazon S3 Standard存储，然后在接下来的90天内将文件转入GLACIER存储类，之后让数据过期。
D. 前30天使用Amazon S3标准不频繁访问(S3 Standard-IA)，之后将数据转入GLACIER存储类并自动删除。

Explanation: It is mentioned that the files must be accessible within a few seconds during the first 120 days, which rules out moving them to the GLACIER storage class in that period.
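
按照选项 B 的思路，对应的生命周期配置大致如下（boto3 示意代码，桶名为假设值）：第30天转入 Standard-IA，第120天过期删除。

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-file-bucket",  # 假设的桶名
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-ia-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # 对整个桶生效
                # 第30天从 Standard 转入 Standard-IA
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # 第120天过期删除
                "Expiration": {"Days": 120},
            }
        ]
    },
)
```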

QUESTION 21
A company creates business-critical 3D images every night. The images are batch-processed
every Friday and require an uninterrupted 48 hours to complete.
What is the MOST cost-effective Amazon EC2 pricing model for this scenario?
A. On-Demand Instances
B. Scheduled Reserved Instances
C. Reserved Instances
D. Spot Instances
Answer: B

一家公司每天晚上都会创建业务关键的3D图像。这些图像在每个星期五进行批处理，
需要不间断的48小时才能完成。
在这种情况下,最具成本效益的Amazon EC2定价模型是什么?
A.按需实例
B.预定的预留实例
C.预留实例
D.竞价型实例
答案:B

亚马逊网络服务(AWS)推出“定期预留实例”(Scheduled Reserved Instances)，使得EC2计算容量能够以优惠的价格为定期使用而预留。例如，某个EC2实例类型可以为世界时01:00到05:00之间的日常运行而预留，从而执行整夜的数据分析，或者每周或每月执行计算密集型计算。

QUESTION 22
An application generates audit logs of operational activities. Compliance requirements mandate
that the application retain the logs for 5 years. How can these requirements be met?
A. Save the logs in an Amazon S3 bucket and enable Multi-Factor Authentication Delete (MFA
Delete) on the bucket.
B. Save the logs in an Amazon EFS volume and use Network File System version 4 (NFSv4) locking
with the volume.
C. Save the logs in an Amazon Glacier vault and use the Vault Lock feature.
D. Save the logs in an Amazon EBS volume and take monthly snapshots.
Answer: C
应用程序生成操作活动的审核日志。合规要求规定
该应用程序必须将日志保留5年。
如何满足这些要求?
A. 将日志保存在Amazon S3存储桶中，并对该存储桶启用多因素身份验证删除(MFA Delete)。
B. 将日志保存在Amazon EFS卷中，并对该卷使用网络文件系统版本4(NFSv4)锁定。
C. 将日志保存在Amazon Glacier保管库中，并使用保管库锁定(Vault Lock)功能。
D. 将日志保存在Amazon EBS卷中，并每月拍摄一次快照。

Explanation: Amazon Glacier, which enables long-term storage of mission-critical data, has added Vault Lock. This new feature allows you to lock your vault with a variety of compliance controls that are designed to support such long-term records retention.

说明: Amazon Glacier可以长期存储关键任务数据，它新增了Vault Lock功能。该功能允许您使用各种合规性控制来锁定保管库，这些控制旨在支持此类长期记录保留。

QUESTION 23
A Solutions Architect is creating an application running in an Amazon VPC that needs to access
AWS Systems Manager Parameter Store. Network security rules prohibit any route table entry
with a 0.0.0.0/0 destination.
What infrastructure addition will allow access to the AWS service while meeting the
requirements?
A. VPC peering
B. NAT instance
C. NAT gateway
D. AWS PrivateLink

解决方案架构师正在创建一个运行在Amazon VPC中、需要访问
AWS Systems Manager Parameter Store的应用程序。网络安全规则禁止任何
目的地为0.0.0.0/0的路由表条目。
添加哪种基础设施可以在满足这些要求的同时访问该AWS服务?
A.VPC对等
B. NAT实例
C.NAT网关
D.AWS PrivateLink

Answer: D

Explanation: To access AWS services such as Systems Manager Parameter Store from within an Amazon VPC without routing traffic over the internet, create an interface VPC endpoint (AWS PrivateLink). You can then reach the service while keeping the traffic within the network that you manage with the VPC. This is the most secure option as traffic does not need to traverse the Internet.

使用PrivateLink支持的服务,流量不经过公网,客户也可管理大量实例,创建并管理IT服务分类,存储并处理数据。
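
下面是为 Systems Manager 创建接口终端节点（AWS PrivateLink）的 boto3 示意代码（所有资源 ID 均为假设的占位符）：

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 为 Systems Manager 创建接口终端节点（AWS PrivateLink），
# 使私有子网内的实例无需 0.0.0.0/0 路由即可访问 Parameter Store
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # 占位符
    ServiceName="com.amazonaws.us-east-1.ssm",
    SubnetIds=["subnet-0123456789abcdef0"],     # 占位符
    SecurityGroupIds=["sg-0123456789abcdef0"],  # 占位符
    PrivateDnsEnabled=True,
)
```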

QUESTION 24
A photo-sharing website running on AWS allows users to generate thumbnail images of photos
stored in Amazon S3. **An Amazon DynamoDB table** maintains the locations of photos, and
thumbnails are easily re-created from the originals if they are accidentally deleted.
How should the thumbnail images be stored to ensure the LOWEST cost?
A. Amazon S3 Standard-lnfrequent Access (S3 Standard-lA) with cross-region replication
B. Amazon S3
C. Amazon Glacier
D. Amazon S3 with cross-region replication
Answer: B
在AWS上运行的照片共享网站允许用户生成存储在Amazon S3中的照片的缩略图。Amazon DynamoDB表维护照片的位置，
如果不小心删除了缩略图，可以轻松地从原始照片重新创建。缩略图应如何存储以确保最低成本?
A. 具有跨区域复制的Amazon S3标准不频繁访问(S3 Standard-IA)
B. Amazon S3
C. Amazon Glacier
D. 具有跨区域复制的Amazon S3

QUESTION 25
A company is implementing **a data lake solution** on Amazon S3. Its security policy mandates that
the data stored in Amazon S3 should be **encrypted** at rest.
Which options can achieve this? (Select TWO.)
A. Use S3 server-side encryption with an Amazon EC2 key pair.
B. Use S3 server-side encryption with customer-provided keys (**SSE**-C).
C. Use S3 bucket policies to restrict access to the data at rest,
D. Use **client-side encryption** before ingesting the data to Amazon S3 using encryption keys.
E. Use SSL to encrypt the data while in transit to Amazon S3.

一家公司正在Amazon S3上实施数据湖解决方案。 安全政策规定
存储在Amazon S3中的数据应在静止状态下进行加密。
哪些选项可以实现这一目标? (选择两个。)
A.将S3服务器端加密与Amazon EC2密钥对一起使用。
B.使用带有客户提供的密钥(SSE-C)的S3服务器端加密。
C.使用S3存储桶策略来限制对静态数据的访问,
D.在使用加密密钥将数据提取到Amazon S3之前,使用客户端加密。
E.在传输到Amazon S3时,使用SSL加密数据。

Answer: BD

Data lakes built on AWS primarily use two types of encryption: server-side encryption (SSE) and client-side encryption. SSE provides data-at-rest encryption.

在AWS上构建的数据湖主要使用两种加密类型：服务器端加密(SSE)和客户端加密。SSE提供静态数据加密。

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
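
下面用 boto3 简单演示 SSE-C（客户提供密钥的服务器端加密）的上传与读取方式（桶名、对象键与密钥均为演示用的假设值）：

```python
import os
import boto3

s3 = boto3.client("s3")

# SSE-C：S3 在服务端用客户提供的密钥加密数据，但不保存密钥本身
customer_key = os.urandom(32)  # 256 位密钥，仅作演示，实际应妥善保管

s3.put_object(
    Bucket="my-data-lake",  # 假设的桶名
    Key="raw/sample.csv",
    Body=b"col1,col2\n1,2\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# 读取时必须再次提供同一把密钥
obj = s3.get_object(
    Bucket="my-data-lake",
    Key="raw/sample.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```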

QUESTION 26
A solutions architect has created a new AWS account and must **secure AWS account root user access**.
解决方案架构师创建了一个新的AWS账户，必须保护AWS账户根用户的访问。
Which combination of actions will accomplish this? (Select TWO.)
A. Ensure the root user uses a **strong password**
B. Enable **multi-factor authentication** to the root user
C. Store root user access keys in an encrypted Amazon S3 bucket
D. Add the root user to a group containing administrative permissions.
E. Apply the required permissions to the root user with an inline policy document
Answer: AB

哪种动作组合可以达到目的? (选择两个。)
A.确保root用户使用强密码
B.对根用户启用多因素身份验证
C.将root用户访问密钥存储在加密的Amazon S3存储桶中
D.将root用户添加到包含管理权限的组中。
E. 使用内联策略文档将所需权限应用于根用户

Explanation:

“Enable MFA” - The AWS Account Root User: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
“Choose a strong password” - Changing the AWS Account Root User Password: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_change-root.html

QUESTION 27
A company's application runs on Amazon EC2 instances behind an Application Load Balancer
(ALB).
The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones.
**On the first day of every month at midnight, the application becomes much slower when the**
**month-end financial calculation batch executes**.
This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the
workload and avoid downtime?
公司的应用程序在Application Load Balancer(ALB)后面的Amazon EC2实例上运行。
实例在跨多个可用区的Amazon EC2 Auto Scaling组中运行。每个月第一天的午夜，当月末财务计算批处理执行时，应用程序会变得非常缓慢。
这会导致EC2实例的CPU使用率立即达到100%，从而中断应用程序。解决方案架构师应建议什么，以确保应用程序能够处理该工作负载并避免停机?

A. Configure an Amazon CloudFront distribution in front of the ALB
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization
C. Configure an **EC2 Auto Scaling scheduled scaling policy** based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances
A.在ALB之前配置Amazon CloudFront分配
B.根据CPU使用率配置EC2自动扩展简单扩展策略
C.根据月度计划配置EC2自动扩展计划的扩展策略。
D.配置Amazon ElastiCache以从EC2实例中删除一些工作负载
Answer: C

Explanation: Scheduled scaling allows you to set your own scaling schedule. In this case the scaling action can be scheduled to occur just prior to the time that the reports will be run each month. Scaling actions are performed automatically as a function of time and date. This will ensure that there are enough EC2 instances to serve the demand and prevent the application from slowing down. 计划扩展允许您设置自己的扩展时间表。在这种情况下，可以将扩展操作安排在每月运行批处理之前进行。扩展操作会根据时间和日期自动执行。这将确保有足够的EC2实例来满足需求，并防止应用程序变慢。
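
按照选项 C 的思路，可以用类似下面的计划扩展操作（boto3 示意代码，组名、容量与时间均为假设值）：

```python
import boto3

autoscaling = boto3.client("autoscaling")

# 扩容：每月1日 00:00 (UTC) 把容量提到 20，应对月末财务批处理
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="month-end-batch-scale-out",
    Recurrence="0 0 1 * *",   # cron：分 时 日 月 周
    MinSize=10,
    MaxSize=20,
    DesiredCapacity=20,
)

# 缩容：假设批处理在 04:00 前完成，之后恢复日常容量
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="month-end-batch-scale-in",
    Recurrence="0 4 1 * *",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
)
```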

QUESTION 28
A company is migrating from an on-premises infrastructure to the AWS Cloud.
One of the company's applications stores files on a Windows file server farm that uses Distributed
File System Replication (DFSR) to keep data in sync.
A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS
B. Amazon FSx
C. Amazon S3
D. AWS Storage Gateway
Answer: B
一家公司正在从内部部署基础架构迁移到AWS云。该公司的应用程序之一将文件存储在Windows文件服务器场中,
该服务器场使用分布式文件系统复制(DFSR)保持数据同步。解决方案架构师需要替换文件服务器场。解决方案架构师应使用哪种服务? 
A.Amazon EFS B.Amazon FSx C.Amazon S3 D.AWS Storage Gateway

Explanation: Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. Amazon FSx is built on Windows Server and provides a rich set of administrative features that include end-user file restore, user quotas, and Access Control Lists (ACLs). Additionally, Amazon FSX for Windows File Server supports Distributed File System Replication (DFSR) in both Single-AZ and Multi-AZ deployments as can be seen in the feature comparison table below.

QUESTION 29
A company's website is used to sell products to the public.
The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer (ALB).
There is also an Amazon CloudFront distribution and AWS WAF is being used to protect against
SQL injection attacks.
The ALB is the origin for the CloudFront distribution.
A recent review of security logs revealed an external malicious IP that needs to be blocked from
accessing the website.
What should a solutions architect do to protect the application?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP
address
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address
Answer: B
公司的网站用于向公众销售产品。该站点在应用程序负载平衡器(ALB)后面的Auto Scaling组中的Amazon EC2实例上运行。还有一个Amazon CloudFront发行版,AWS WAF被用来防御SQL注入攻击。 ALB是CloudFront分发的来源。 最近对安全日志的审查显示,需要阻止外部恶意IP访问该网站。解决方案架构师应该怎么做才能保护应用程序? A.修改CloudFront分发上的网络ACL以添加针对恶意IP地址的拒绝规则B.修改AWS WAF的配置以添加IP匹配条件以阻止恶意IP地址C.修改EC2实例的网络ACL在ALB后面的目标组中拒绝恶意IP地址D。在ALB后面的目标组中修改EC2实例的安全组以拒绝恶意IP地址

Explanation: 2019年11月发布了新版本的AWS Web应用程序防火墙。使用AWS WAF Classic，您可以创建“IP匹配条件”，而使用新版AWS WAF，您可以创建“IP集匹配语句”。注意考试的措辞。IP匹配条件 / IP集匹配语句会根据一组IP地址和地址范围检查Web请求来源的IP地址，使用该选项可基于请求源IP地址来允许或阻止Web请求。
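
下面是用 boto3（WAFv2）创建 IP 集的示意代码，实际还需要在 Web ACL 中添加引用该 IP 集并执行 Block 动作的规则（名称、IP 与作用域均为假设值；若 Web ACL 关联的是 CloudFront，Scope 应为 CLOUDFRONT 且区域为 us-east-1）：

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# 创建一个包含恶意 IP 的 IP 集；随后需在 Web ACL 中添加引用它的 Block 规则
wafv2.create_ip_set(
    Name="blocked-ips",             # 假设的名称
    Scope="REGIONAL",               # 关联 ALB 用 REGIONAL；关联 CloudFront 用 CLOUDFRONT
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],  # 示例 IP（文档保留地址段）
)
```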

QUESTION 30
A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis.
An application on an Amazon EC2 instance needs permission to efficiently process the CSV data
stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
A. Attach a resource- based policy to the S3 bucket
B. Create an IAM user for the application with specific permissions to the S3 bucket
C. Associate an IAM role with least privilege permissions to the EC2 instance profile
D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API
calls
Answer: C
一家营销公司将CSV文件存储在Amazon S3存储桶中，以进行统计分析。
Amazon EC2实例上的应用程序需要权限才能高效处理S3存储桶中存储的CSV数据。哪种操作能最安全地授予EC2实例对S3存储桶的访问权限?
A.将基于资源的策略附加到S3存储桶
B.为具有S3存储桶特定权限的应用程序创建IAM用户
C.将IAM角色与对EC2实例配置文件的最小特权权限相关联
D.将AWS凭证直接存储在EC2实例,该实例上的应用程序可用于API调用

Explanation: Keyword: least privilege permissions + IAM role. AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.

关键词：最小特权权限 + IAM角色。AWS身份和访问管理(IAM)使您能够安全地管理对AWS服务和资源的访问。使用IAM，您可以创建和管理AWS用户和组，并使用权限来允许和拒绝他们对AWS资源的访问。IAM是您的AWS账户的一项功能，无需额外付费，您只需为用户使用的其他AWS服务付费。

IAM roles for Amazon EC2

Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials. We designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows:

- Create an IAM role.
- Define which accounts or AWS services can assume the role.
- Define which API actions and resources the application can use after assuming the role.
- Specify the role when you launch your instance, or attach the role to an existing instance.
- Have the application retrieve a set of temporary credentials and use them.

For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3. You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you change a role, the change is propagated to all instances. When creating IAM roles, associate least privilege IAM policies that restrict access to the specific API calls the application requires.
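
下面的 boto3 示意代码按上述步骤创建角色、附加最小权限策略并将实例配置文件关联到 EC2 实例（角色名、桶名和实例 ID 均为假设的占位符）：

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# 1. 创建允许 EC2 代入的角色（信任策略）
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="csv-app-role",  # 假设的角色名
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# 2. 附加最小权限的内联策略：只允许读取指定桶中的对象
read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-csv-bucket",    # 假设的桶名
            "arn:aws:s3:::my-csv-bucket/*",
        ],
    }],
}
iam.put_role_policy(
    RoleName="csv-app-role",
    PolicyName="read-csv-bucket",
    PolicyDocument=json.dumps(read_policy),
)

# 3. 创建实例配置文件，并关联到正在运行的 EC2 实例
iam.create_instance_profile(InstanceProfileName="csv-app-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="csv-app-profile", RoleName="csv-app-role"
)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "csv-app-profile"},
    InstanceId="i-0123456789abcdef0",  # 占位符
)
```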

QUESTION 31
A solutions architect is designing a solution where users will be directed to a backup static error
page if the primary website is unavailable.
The primary website's DNS records are hosted in Amazon Route 53 where their domain is
pointing to an Application Load Balancer (ALB).
Which configuration should the solutions architect use to meet the company's needs while
minimizing changes and infrastructure overhead?
A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its
origins.
Then, create custom error pages for the distribution.
B. Set up a Route 53 active-passive failover configuration.
Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health
checks determine that the ALB endpoint is unhealthy.
C. Update the Route 53 record to use a latency-based routing policy.
Add the backup static error page hosted within an Amazon S3 bucket to the record So the traffic is
sent to the most responsive endpoints.
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting
a static error page as endpoints.
Route 53 will only send requests to the instance if the health checks fail for the ALB.
Answer: B 
解决方案架构师正在设计一种解决方案,在该解决方案中,如果主网站不可用,用户将被定向到备份静态错误页面。
主要网站的DNS记录托管在Amazon Route 53中,其域指向一个应用程序负载平衡器(ALB),
解决方案架构师应使用哪种配置来满足公司的需求,同时最大程度地减少更改和基础架构开销? 
A.将Route 53别名记录指向以ALB作为其起源之一的Amazon CloudFront分配。然后,为分发创建自定义错误页面。 
B.设置Route 53主动-被动故障转移配置。当Route 53运行状况检查确定ALB终端节点不健康时,将流量定向到Amazon S3存储桶中托管的静态错误页面。 
C.更新Route 53记录以使用基于延迟的路由策略。将托管在Amazon S3存储桶中的备份静态错误页面添加到记录中,以便将流量发送到响应最快的终端节点。
D.使用ALB和托管静态错误页面的Amazon EC2实例,设置Route 53主动-主动配置。仅当ALB的运行状况检查失败时,路由53才会将请求发送到实例。

Explanation: Using Amazon CloudFront as the front-end provides the option to specify a custom message instead of the default message. To specify the specific file that you want to return and the errors for which the file should be returned, you update your CloudFront distribution to specify those values. For example, the following is a customized error message:

The CloudFront distribution can use the ALB as the origin, which will cause the website content to be cached on the CloudFront edge caches. This solution represents the most operationally efficient choice as no action is required in the event of an issue, other than troubleshooting the root cause.

QUESTION 32
A solutions architect is designing the cloud architecture for a new application being deployed on
AWS
The process should run in parallel while adding and removing application nodes as needed based
on the number of jobs to be processed.
The processor application is stateless.
The solutions architect must ensure that the application is loosely coupled and the job items are
durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed.
Create an Amazon Machine Image (AMI) that consists of the processor application.
Create a launch configuration that uses the AMI.
Create an Auto Scaling group using the launch configuration.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
B. Create an Amazon SQS queue to hold the jobs that need to be processed.
Create an Amazon Machine Image (AMI) that consists of the processor application.
Create a launch configuration that uses the AMI.
Create an Auto Scaling group using the launch configuration.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on network
usage
C. Create an Amazon SQS queue to hold the jobs that needs to be processed.
Create an Amazon Machine Image (AMI) that consists of the processor application.
Create a launch template that uses the AMI.
Create an Auto Scaling group using the launch template.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number
of items in the SQS queue
D. Create an Amazon SNS topic to send the jobs that need to be processed.
Create an Amazon Machine Image (AMI) that consists of the processor application.
Create a launch template that uses the AMI.
Create an Auto Scaling group using the launch template.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number
of messages published to the SNS topic.
Answer: C
解决方案架构师正在为将部署在AWS上的新应用程序设计云架构。该处理过程应并行运行，同时根据要处理的作业数量按需添加和删除应用程序节点。
处理器应用程序是无状态的。解决方案架构师必须确保应用程序松散耦合，并且作业项被持久存储。解决方案架构师应使用哪种设计?
A. 创建一个Amazon SNS主题以发送需要处理的作业。创建一个包含处理器应用程序的Amazon Machine Image(AMI)。创建使用该AMI的启动配置。使用启动配置创建一个Auto Scaling组。将Auto Scaling组的扩展策略设置为根据CPU使用情况添加和删除节点。
B. 创建一个Amazon SQS队列以容纳需要处理的作业。创建一个包含处理器应用程序的Amazon Machine Image(AMI)。创建使用该AMI的启动配置。使用启动配置创建一个Auto Scaling组。将Auto Scaling组的扩展策略设置为根据网络使用情况添加和删除节点。
C. 创建一个Amazon SQS队列以容纳需要处理的作业。创建一个包含处理器应用程序的Amazon Machine Image(AMI)。创建使用该AMI的启动模板。使用启动模板创建一个Auto Scaling组。将Auto Scaling组的扩展策略设置为根据SQS队列中的项目数添加和删除节点。
D. 创建一个Amazon SNS主题以发送需要处理的作业。创建一个包含处理器应用程序的Amazon Machine Image(AMI)。创建使用该AMI的启动模板。使用启动模板创建一个Auto Scaling组。将Auto Scaling组的扩展策略设置为根据发布到SNS主题的消息数添加和删除节点。

Explanation: In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue. To configure this scaling you can use the backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows: Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet’s running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance. Acceptable backlog per instance: To calculate your target value, first determine what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message. This solution will scale EC2 ·instances using Auto Scaling based on the number of jobs waiting in the SQS queue.
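
下面是计算“每实例积压量（backlog per instance）”并发布为 CloudWatch 自定义指标的示意代码（队列 URL、组名与命名空间均为假设值）：

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue"  # 占位符
ASG_NAME = "processor-asg"  # 占位符

# 队列中待处理消息数
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Auto Scaling 组中处于 InService 状态的实例数
groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
in_service = sum(
    1
    for inst in groups["AutoScalingGroups"][0]["Instances"]
    if inst["LifecycleState"] == "InService"
)

backlog_per_instance = backlog / max(in_service, 1)

# 作为自定义指标发布，供目标跟踪扩展策略使用
cloudwatch.put_metric_data(
    Namespace="Custom/Processor",
    MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog_per_instance}],
)
```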

QUESTION 33
A company has a legacy application that processes data in two parts.
The second part of the process takes longer than the first, so the company has decided to rewrite
the application as two microservices running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
A. Implement code in microservice 1to send data to an Amazon S3 bucket.
Use S3 event notifications to invoke microservice 2.
B. Implement code in microservice 1 to publish data to an Amazon SNS topic.
Implement code in microservice 2 to subscribe to this topic.
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose.
Implement code in microservice 2 to read from Kinesis Data Firehose.
D. Implement code in microservice 1 to send data to an Amazon SQS queue.
Implement code in microservice 2 to process messages from the queue.
Answer: D
公司有一个遗留应用程序,该应用程序分两部分处理数据。该过程的第二部分需要比第一部分更长的时间,
因此该公司决定将应用程序重写为在Amazon ECS上运行的两个可独立扩展的微服务。解决方案架构师应如何集成微服务?
A.在微服务1中实施代码以将数据发送到Amazon S3存储桶。使用S3事件通知调用微服务2。
B.在微服务1中实现代码以将数据发布到Amazon SNS主题。在微服务2中实现代码以订阅该主题。
C.在微服务1中实施代码以将数据发送到Amazon Kinesis Data Firehose。在微服务2中实现代码以从Kinesis Data Firehose读取。 
D.在微服务1中实施代码以将数据发送到Amazon SQS队列。在微服务2中实现代码以处理来自队列的消息

Explanation: This is a good use case for Amazon SQS. The microservices must be decoupled so they can scale independently. An Amazon SQS queue will enable microservice 1 to add messages to the queue. Microservice 2 can then pick up the messages and process them. This ensures that if there’s a spike in traffic on the frontend, messages do not get lost due to the backend process not being ready to process them.

Amazon Simple Queue Service (SQS) 是一项快速可靠、可扩展且完全托管的消息队列服务。SQS 使得云应用程序的组件解藕大大简化,并且具有较高的成本效益。您可以使用 SQS 在任意吞吐量级别传输任何规模的数据,而不会丢失消息,并且无需其他服务即可保持可用。

使用 SQS,您不必承担运行和扩展高度可用消息集群的管理工作,只需以较低的价格仅为您使用的部分付费。

QUESTION 34
A solutions architect at an ecommerce company wants to back up application log data to Amazon
S3.
The solutions architect is unsure how frequently the logs will be accessed or which logs will be
accessed the most.
The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
A. S3 Glacier
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B
一家电子商务公司的解决方案架构师希望将应用程序日志数据备份到Amazon S3。解决方案架构师无法确定日志的访问频率或访问最多的日志。
该公司希望通过使用适当的S3存储类别来尽可能降低成本。应该实现哪种S3存储类别以满足这些要求?
A. S3冰川(Glacier)
B. S3智能分层
C. S3标准不频繁访问(S3 Standard-IA)
D. S3单区不频繁访问(S3 One Zone-IA)

Explanation: The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. This is an ideal use case for Intelligent-Tiering as the access patterns for the log files are not known.

说明: S3智能分层存储类旨在通过自动将数据移动到最具成本效益的访问层来优化成本，而不会影响性能或带来运营开销。它通过将对象存储在两个访问层中来工作：一层针对频繁访问进行优化，另一层成本更低、针对不频繁访问进行优化。由于日志文件的访问模式未知，这是智能分层的理想用例。

以下是六个AmazonS3存储类,按成本和访问频率的降序列出,以及它们的显着特征:

  • **Standard:**StandardS3是一种通用对象存储平台,专为必须立即持续可用的应用程序数据而设计。
  • **Intelligent-Tiering:**许多应用程序都有大量数据集,具有一系列访问模式。这些模式取决于数据类型,季节性变化和内部业务需求等因素。Intelligent-Tiering可自动识别并将不常访问的数据(30天内未访问的数据)移动到成本较低的基础架构中。当访问不频繁层中的对象时,它会自动移回更高性能层,并且30天时钟重新启动。
  • **StandardInfrequentAccess(IA):**一些数据很少被访问,但在用户需要时需要快速性能。Standard-IA以此方案为目标,提供与标准S3类似的性能,但可用性较低。
  • **OneZone-IA:**与Standard-IA不同,此类别不会自动在至少三个AZ上存储数据。但是,OneZone-IA都提供与StandardS3相同的毫秒级数据延迟。
  • Glacier:虽然它使用对象存储,但Glacier与其他S3版本不同,因为它是专为数据存档而设计的。AWS从未透露过Glacier的基础技术。无论Glacier使用低性能硬盘驱动器,磁带,光盘还是其他产品,其性能和可用性参数都与企业磁带库类似。但是,与磁带库不同,Glacier用户可以指定数据检索的最长时间,范围从几分钟到几小时不等。
  • **GlacierDeepArchive:**DeepArchive专为长期存档而设计,考虑到常年存储,并且在12小时内不经常访问数据
QUESTION 35
A security team wants to limit access to specific services or actions in all of the team's AWS
accounts.
All accounts belong to a large organization in AWS Organizations.
The solution must be scalable and there must be a single point where permissions can be
maintained.
What should a solutions architect do to accomplish this?
A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or
actions
Answer: D
安全团队希望限制对该团队所有AWS账户中特定服务或操作的访问。所有账户均属于AWS Organizations中的大型组织。
该解决方案必须是可扩展的,并且必须在单个点上可以维护权限。
解决方案架构师应该怎么做才能做到这一点? 
A.创建一个ACL以提供对服务或操作的访问。 B.创建一个安全组以允许帐户并将其附加到用户组
C.在每个帐户中创建跨帐户角色以拒绝对服务或操作的访问。 D.在根组织单位中创建服务控制策略以拒绝对服务或操作的访问

Explanation: Service control policies (SCPs) offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines,

SCPs alone are not sufficient for allowing access in the accounts in your organization. Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform. You still need to attach identity-based or resource-based policies to principals or resources in your organization’s accounts to actually grant permissions to them.

服务控制策略(SCP)提供对组织中所有账户的最大可用权限的集中控制，可确保您的账户始终遵循组织的访问控制准则。

仅SCP本身并不足以允许访问您组织中的账户。将SCP附加到AWS Organizations实体(根、OU或账户)只是为委托人可以执行的操作定义了护栏。您仍然需要将基于身份或基于资源的策略附加到组织账户中的委托人或资源，才能实际向他们授予权限。

参照Organization部分。
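
下面用 boto3 演示创建并附加一条拒绝型 SCP 的大致流程（被拒绝的操作、策略名称与目标 ID 均为假设的示例）：

```python
import json
import boto3

org = boto3.client("organizations")

# 拒绝示例中的服务/操作（实际内容应按安全团队的要求编写）
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:*", "rds:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-selected-services",
    Description="Deny access to selected services in all accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# 附加到根（或某个 OU）；TargetId 为占位符
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```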

QUESTION 36
You are trying to launch an EC2 instance, however the instance seems to go into a terminated
status immediately. What would probably not be a reason that this is happening?
A. The AMI is missing a required part.
B. The snapshot is corrupt.
C. You need to create storage in EBS first.
D. You've reached your volume limit.
Answer: C
您正在尝试启动EC2实例，但是该实例似乎立即进入终止状态。以下哪项可能不是发生这种情况的原因?
A. AMI缺少必需的部分。 B. 快照已损坏。 C. 您需要先在EBS中创建存储。 D. 您已达到卷数量限制。

Explanation: Amazon EC2 provides virtual computing environments, known as instances. After you launch an instance, AWS recommends that you check its status to confirm that it goes from the pending status to the running status, and not to the terminated status. The following are a few reasons why an Amazon EBS-backed instance might immediately terminate: you've reached your volume limit, the AMI is missing a required part, or the snapshot is corrupt. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html

QUESTION 37
You have set up an Auto Scaling group. The cool down period for the Auto Scaling group is 7
minutes. The first instance is launched after 3 minutes, while the second instance is launched
after 4 minutes. How many minutes after the first instance is launched will Auto Scaling accept
another scaling activity request?
A.11 minutes
B.7 minutes
C.10 minutes
D.14 minutes
Answer: A 
您已设置一个Auto Scaling组。 Auto Scaling组的冷却时间为7分钟。
第一个实例在3分钟后启动,而第二个实例在4分钟后启动。第一个实例启动后多少分钟,Auto Scaling会接受另一个扩展活动请求? 
A.11分钟B.7分钟C.10分钟D.14分钟

Explanation: If an Auto Scaling group is launching more than one instance, the cooldown period for each instance starts after that instance is launched. The group remains locked until the last instance that was launched has completed its cooldown period. In this case the cooldown period for the first instance starts after 3 minutes and finishes at the 10th minute (3+7 cooldown), while for the second instance it starts at the 4th minute and finishes at the 11th minute (4+7 cooldown). Thus, the Auto Scaling group will accept another scaling activity request only after 11 minutes.

如果一个Auto Scaling组正在启动多个实例，则每个实例的冷却期将在该实例启动后开始。该组将保持锁定状态，直到最后启动的实例完成其冷却期为止。在这种情况下，第一个实例的冷却期在第3分钟开始、第10分钟结束(3+7冷却)，而第二个实例的冷却期在第4分钟开始、第11分钟结束(4+7冷却)。因此，Auto Scaling组只会在11分钟后才接受另一个扩展请求。

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html

QUESTION 38
In Amazon EC2 Container Service components, what is the name of a logical grouping of
container instances on which you can place tasks?
A. A cluster
B. A container instance
C. A container
D. A task definition
Answer: A
在Amazon EC2容器服务组件中,可以放置任务的容器实例的逻辑分组的名称是什么? 
A.集群B.容器实例C.容器D.任务定义

Explanation: Amazon ECS contains the following components: A Cluster is a logical grouping of container instances that you can place tasks on. A Container instance is an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster. A Task definition is a description of an application that contains one or more container definitions. A Scheduler is the method used for placing tasks on container instances. A Service is an Amazon ECS service that allows you to run and maintain a specified number of instances of a task definition simultaneously. A Task is an instantiation of a task definition that is running on a container instance. A Container is a Linux container that was created as part of a task. Reference: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Amazon ECS集群是任务或服务的逻辑分组。如果您正在运行使用EC2启动类型的任务或服务，则集群也是容器实例的分组。如果使用容量提供程序，则集群也是容量提供程序的逻辑分组。首次使用Amazon ECS时，系统会为您创建一个默认集群，但您可以在一个账户中创建多个集群以保持资源相互独立。

QUESTION 39
In the context of AWS support, why must an EC2 instance be unreachable for 20 minutes rather
than allowing customers to open tickets immediately?
A. Because most reachability issues are resolved by automated processes in less than 20 minutes
B. Because all EC2 instances are unreachable for 20 minutes every day when AWS does routine
maintenance
C. Because all EC2 instances are unreachable for 20 minutes when first launched
D. Because of all the reasons listed here
Answer: A
在AWS支持的情况下,为什么必须在20分钟内无法访问EC2实例,而不是允许客户立即打开票证? 
A.因为大多数可及性问题都可以在不到20分钟的时间内由自动化流程解决B.因为当AWS执行例行维护时,每天所有EC2实例在20分钟内都无法访问
C.因为首次启动时,所有EC2实例在20分钟内都无法访问D.由于这里列出的所有原因

Explanation: An EC2 instance must be unreachable for 20 minutes before opening a ticket, because most reachability issues are resolved by automated processes in less than 20 minutes and will not require any action on the part of the customer. If the instance is still unreachable after this time frame has passed, then you should open a case with support. Reference: https://aws.amazon.com/premiumsupport/faqs/

QUESTION 40
Can a user get a notification of each instance start / terminate configured with Auto Scaling?
A. Yes, if configured with the Launch Config
B. Yes, always
C. Yes, if configured with the Auto Scaling group
D. No
Answer: C
用户是否可以收到通过Auto Scaling配置的每个实例启动/终止的通知？
A.是，如果在启动配置（Launch Config）中配置 B.是，始终可以 C.是，如果在Auto Scaling组中配置 D.否

Explanation: The user can get notifications using SNS if he has configured the notifications while creating the Auto Scaling group. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
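
下面给出一个使用 boto3 的最小示例草图（其中组名 my-asg、SNS 主题 ARN 均为假设的示例值），演示如何为已有的 Auto Scaling 组配置实例启动/终止事件的 SNS 通知：

```python
import boto3

autoscaling = boto3.client("autoscaling")

# 为 Auto Scaling 组配置 SNS 通知（组名与主题 ARN 均为示例值）
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-asg",
    TopicARN="arn:aws:sns:us-east-1:123456789012:asg-events",
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)
```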

QUESTION 41
Amazon EBS provides the ability to create backups of any Amazon EC2 volume into what is
known as___.
A. snapshots
B. images
C. instance backups
D. mirrors
Answer: A
Amazon EBS提供了将任何Amazon EC2卷备份为___的功能。 A.快照 B.映像 C.实例备份 D.镜像

Explanation: Amazon allows you to make backups of the data stored in your EBS volumes through snapshots that can later be used to create a new EBS volume. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
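
作为补充，下面是一个使用 boto3 为 EBS 卷创建快照的最小示例（卷 ID 为假设的示例值）：

```python
import boto3

ec2 = boto3.client("ec2")

# 为指定的 EBS 卷创建快照（vol-0123456789abcdef0 为示例卷 ID）
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Backup before maintenance",
)
print(snapshot["SnapshotId"], snapshot["State"])
```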

QUESTION 42
To specify a resource in a policy statement, in Amazon EC2, can you use its Amazon Resource
Name (ARN)?
A. Yes, you can.
B. No, you can't because EC2 is not related to ARN.
C. No, you can't because you can't specify a particular Amazon EC2 resource in an IAM policy.
D. Yes, you can but only for the resources that are not affected by the action.
Answer: A
要在策略声明中指定资源，您可以在Amazon EC2中使用其Amazon Resource Name（ARN）吗？
A.可以。 B.不可以，因为EC2与ARN无关。
C.不可以，因为您无法在IAM策略中指定特定的Amazon EC2资源。 D.可以，但仅限于不受该操作影响的资源

Explanation: Some Amazon EC2 API actions allow you to include specific resources in your policy that can be created or modified by the action. To specify a resource in the statement, you need to use its Amazon Resource Name (ARN). Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-ug.pdf
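
下面用 boto3 给出一个带 ARN 的 IAM 策略示例草图（账户号、实例 ID、策略名均为假设值），演示如何在策略声明的 Resource 中指定具体的 EC2 实例：

```python
import json
import boto3

iam = boto3.client("iam")

# 该策略只允许对某一台特定实例执行启动/停止操作（账户号与实例 ID 均为示例值）
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
    }],
}

iam.create_policy(
    PolicyName="AllowStartStopSingleInstance",   # 策略名为示例值
    PolicyDocument=json.dumps(policy_document),
)
```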

QUESTION 43
After you recommend Amazon Redshift to a client as an alternative solution to paying data
warehouses to analyze his data, your client asks you to explain why you are recommending
Redshift. Which of the following would be a reasonable response to his request?
A. It has high performance at scale as data and query complexity grows.
B. It prevents reporting and analytic processing from interfering with the performance of OLTP
workloads.
C. You don't have the administrative burden of running your own data warehouse and dealing with
setup, durability, monitoring, scaling, and patching.
D. All answers listed are a reasonable response to his question
Answer: D
在向客户推荐Amazon Redshift作为支付数据仓库分析其数据的替代解决方案之后,您的客户会要求您解释为什么推荐Redshift。
以下哪项是对他的要求的合理回应?
答:随着数据和查询复杂性的增长,它具有大规模的高性能。 B.它可以防止报告和分析处理干扰OLTP工作负载的性能。
C.您没有运行自己的数据仓库以及处理设置,持久性,监视,扩展和修补的管理负担。 D.列出的所有答案都是对他问题的合理回答

Explanation: Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host. AWS recommends Amazon Redshift for customers who have a combination of needs, such as: high performance at scale as data and query complexity grows; desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads; large volumes of structured data to persist and query using standard SQL and existing BI tools; desire to avoid the administrative burden of running one's own data warehouse and dealing with setup, durability, monitoring, scaling and patching. Reference: https://aws.amazon.com/running-databases/#redshift_anchor

Redshift,就一个字,好!

QUESTION 44
One of the criteria for a new deployment is that the customer wants to use AWS Storage
Gateway. However you are not sure whether you should use gateway-cached volumes or
gateway-stored volumes or even what the differences are. Which statement below best describes
those differences?
A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and
retain a copy of frequently accessed data subsets locally.Gateway-stored enables you to configure your on-premises gateway to store all your data locally
and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
B. Gateway-cached is free whilst gateway-stored is not.
C. Gateway-cached is up to 10 times faster than gateway-stored.
D. Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and
retain a copy of frequently accessed data subsets locally.
Gateway-cached enables you to configure your on-premises gateway to store all your data locally
and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
Answer: A
新部署的标准之一是客户要使用AWS Storage Gateway。但是,您不确定应该使用网关缓存的卷还是网关存储的卷,
甚至不确定它们之间的区别。以下哪个陈述最能说明这些差异?
A.网关缓存使您可以将数据存储在Amazon Simple Storage Service(Amazon S3)中并在本地保留经常访问的数据子集的副本。
网关存储使您可以配置本地网关以在本地存储所有数据,然后将此数据的时间点快照异步备份到Amazon S3。 
B.网关缓存是免费的,而网关存储不是免费的。
C.网关缓存的速度比网关存储快10倍。 
D.网关存储使您可以将数据存储在Amazon Simple Storage Service(Amazon S3)中,并在本地保留经常访问的数据子集的副本。
网关缓存使您可以将本地网关配置为在本地存储所有数据,然后将该数据的时间点快照异步备份到Amazon S3。

Explanation: Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations: Gateway-cached volumes: You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. Gateway-stored volumes: If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. Reference: http://docs.aws.amazon.com/storagegateway/latest/userguide/volume-gateway.html

  • 文件网关(File Gateway):通过 NFS 连接直接访问存储在 Amazon S3 或者 Amazon Glacier上的文件,并且本地进行缓存

  • Volume Gateway

    :使用 iSCSI 作为本地磁盘连接到本地服务器上,让本地服务器可以访问到 Amazon S3 内的文件,其中,Volume Gateway 又分为以下两种

    • Stored Volumes:所有的数据都将保存到本地,但是会异步地将数据备份到AWS S3上
    • Cached Volumes:所有的数据都会保存到S3,但是会将最经常访问的数据缓存到本地
  • Tape Gateway:用来取代传统的磁带备份,通过 Tape Gateway 可以使用NetBackup,Backup Exec或Veeam 等备份软件将文件备份到 Amazon S3 或者 Amazon Glacier 上

QUESTION 45
A user is launching an EC2 instance in the US East region. Which of the below mentioned
options is recommended by AWS with respect to the selection of the availability zone?
A. Always select the AZ while launching an instance
B. Always select the US-East-1-a zone for HA
C. Do not select the AZ; instead let AWS select the AZ
D. The user can never select the availability zone while launching an instance
Answer: C
用户正在美国东部地区启动EC2实例。关于可用性区域的选择,AWS建议使用以下哪个选项?
A.启动实例时始终自行选择AZ B.始终选择US-East-1a区域以实现高可用
C.不要自行选择AZ，而是让AWS选择AZ D.用户在启动实例时永远无法选择可用区

Explanation: When launching an instance with EC2, AWS recommends not to select the availability zone (AZ). AWS specifies that the default Availability Zone should be accepted. This is because it enables AWS to select the best Availability Zone based on the system health and available capacity. If the user launches additional instances, only then should an Availability Zone be specified, in order to place them in the same or a different AZ from the running instances. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

使用EC2启动实例时,AWS建议不要选择可用区(AZ)。 AWS指定应接受默认可用区。这是因为它使AWS能够根据系统运行状况和可用容量选择最佳的可用区。如果用户启动其他实例,则仅应指定一个可用区。这是为了指定与正在运行的实例相同或不同的可用区。

QUESTION 46
A company's website runs on Amazon EC2 instances behind an Application Load Balancer
(ALB).
The website has a mix of dynamic and static content Users around the globe are reporting that
the website is slow.
Which set of actions will improve website performance for users worldwide?
A. Create an Amazon CloudFront distribution and configure the ALB as an origin.
Then update the Amazon Route 53 record to point to the CloudFront distribution.
B. Create a latency-based Amazon Route 53 record for the ALB.
Then launch new EC2 instances with larger instance sizes and register the instances with the
ALB.
C. Launch new EC2 instances hosting the same web application in different Regions closer to the
users.
Then register the instances with the same ALB using cross-Region VPC peering.
D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB
and EC2 instances.
Then update an Amazon Route 53 record to point to the S3 buckets.
Answer: A
公司的网站在Application Load Balancer(ALB)后面的Amazon EC2实例上运行。
该网站混合了动态和静态内容,全球各地的用户都在报告该网站运行缓慢。哪些措施可以改善全球用户的网站性能? 
A.创建一个Amazon CloudFront发行版并将ALB配置为来源。 然后更新Amazon Route 53记录以指向CloudFront发行版。
B.为ALB创建基于延迟的Amazon Route 53记录。然后启动具有更大实例大小的新EC2实例,并在ALB中注册实例。
C.在距离用户更近的不同区域中启动托管同一Web应用程序的新EC2实例。然后使用跨区域VPC对等将这些实例注册到同一个ALB。
D.将网站托管在离用户最近的区域中的Amazon S3存储桶中,并删除ALB和EC2实例。然后更新Amazon Route 53记录以指向S3存储桶。

Explanation: Amazon CloudFront is a content delivery network (CDN) that improves website performance by caching content at edge locations around the world. It can serve both dynamic and static content. This is the best solution for improving the performance of the website.
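
下面是一个使用 boto3 更新 Route 53 记录、使其指向 CloudFront 分配的示例草图（托管区 ID、域名均为假设值；CloudFront 别名记录使用固定的 HostedZoneId Z2FDTNDATAQYW2）：

```python
import boto3

route53 = boto3.client("route53")

# 将 www.example.com 的别名记录指向 CloudFront 分配（Hosted Zone ID、域名均为示例值）
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",   # CloudFront 别名记录的固定 Zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```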

QUESTION 47
A company wants to migrate a high performance computing (HPC) application and data from on-
premises to the AWS Cloud.
The company uses tiered storage on premises with hot high-performance parallel storage to
support the application during periodic runs of the application and more economical cold storage
to hold the data when the application is not actively running.
Which combination of solutions should a solutions architect recommend to support the storage
needs of the application? (Select TWO ),
A. Amazon S3 for cold data storage
B. Amazon EFS for cold data storage
C. Amazon S3 for high-performance parallel storage
D. Amazon FSx for Lustre for high-performance parallel storage
E. Amazon FSx for Windows for high-performance parallel storage
Answer: AD
一家公司希望将高性能计算(HPC)应用程序和数据从本地迁移到AWS云。
该公司在本地使用分层存储：在应用程序的定期运行期间，使用热的高性能并行存储来支撑应用程序；
在应用程序未运行时，使用更经济的冷存储来保存数据。
解决方案架构师应建议哪种解决方案组合来支持应用程序的存储需求？（选择两个）
A. Amazon S3用于冷数据存储 B. Amazon EFS用于冷数据存储
C. Amazon S3用于高性能并行存储 D. Amazon FSx for Lustre用于高性能并行存储
E. Amazon FSx for Windows用于高性能并行存储

Explanation: Amazon FSx for Lustre提供了一种高性能文件系统，该文件系统经过优化，可快速处理工作负载，例如机器学习、高性能计算（HPC）、视频处理、财务建模和电子设计自动化（EDA）。这些工作负载通常需要通过快速且可扩展的文件系统接口来访问数据，并且通常将数据集存储在诸如Amazon S3之类的长期数据存储中。
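
下面用 boto3 给出一个创建 FSx for Lustre 文件系统并挂载 S3 数据仓库的示例草图（子网 ID、S3 桶名均为假设值，使用默认的 SCRATCH 部署类型）：

```python
import boto3

fsx = boto3.client("fsx")

# 创建挂载到 S3 数据仓库的 FSx for Lustre 文件系统（子网 ID、桶名为示例值）
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # 以 GiB 为单位
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "ImportPath": "s3://hpc-cold-data",            # 冷数据保存在 S3
        "ExportPath": "s3://hpc-cold-data/results",    # 计算结果写回 S3
    },
)
```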

QUESTION 48
A company has on-premises servers running a relational database. 
The current database serves high read traffic for users in different locations.
The company wants to migrate to AWS with the least amount of effort.
The database solution should support disaster recovery and not affect the company's current
traffic flow.
Which solution meets these requirements?
A. Use a database in Amazon RDS with Multi-AZ and at least one read replica
B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica
C. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions
D. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in
different Availability Zones
Answer: A
公司有运行关系数据库的本地服务器。当前数据库为不同位置的用户提供高读取流量。
该公司希望以最少的工作量迁移到AWS。
数据库解决方案应支持灾难恢复,并且不影响公司当前的流量。 哪种解决方案满足这些要求?
A.在具有Multi-AZ和至少一个只读副本的Amazon RDS中使用数据库
B.在具有Multi-AZ和至少一个备用副本的Amazon RDS中使用数据库
C.在不同AWS区域中的多个Amazon EC2实例上托管数据库
D.在不同可用区中的应用程序负载均衡器后面使用Amazon EC2实例上托管的数据库

Explanation: https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon- rds/

QUESTION 49
A media streaming company collects real-time data and stores it in a disk-optimized database
system.
The company is not getting the expected throughput and wants an in-memory database storage
solution that performs faster and provides high availability using data replication.
Which database should a solutions architect recommend'?
A. Amazon RDS for MySQL
B. Amazon RDS for PostgreSQL
C. Amazon ElastiCache for Redis
D. Amazon ElastiCache for Memcached
Answer: C
一家媒体流传输公司收集实时数据,并将其存储在磁盘优化的数据库系统中。
该公司没有达到预期的吞吐量,而是需要一种内存数据库存储解决方案,该解决方案执行速度更快,并使用数据复制提供高可用性。
解决方案架构师应该建议哪个数据库?
A.MySQL的Amazon RDS B.PostgreSQL的Amazon RDS 
C.Redis的Amazon ElastiCache D.Memcached的Amazon ElastiCache

Explanation: Amazon ElastiCache is an in-memory data store. With ElastiCache for Memcached there is no data replication or high availability; each node is a separate partition of data. ElastiCache for Redis, by contrast, supports replication and automatic failover, which is why it is the correct choice here.

Amazon ElastiCache for Redis是一种内存数据库。它是一种快速的内存中数据存储，可提供亚毫秒级的延迟，以支持Internet规模的实时应用程序。开发人员可以将ElastiCache for Redis用作内存中的非关系数据库。ElastiCache for Redis集群配置最多支持15个分片，使客户能够在单个集群中运行内存高达6.1 TB的Redis工作负载。ElastiCache for Redis还提供了向正在运行的集群添加和删除分片的功能。您可以动态地横向扩展甚至缩减Redis集群工作负载，以适应需求的变化。
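
下面是一个使用 boto3 创建启用了复制和自动故障转移的 Redis 复制组的示例草图（复制组 ID、节点类型均为假设值）：

```python
import boto3

elasticache = boto3.client("elasticache")

# 创建启用多可用区自动故障转移的 Redis 复制组（ID、节点类型为示例值）
elasticache.create_replication_group(
    ReplicationGroupId="media-realtime",
    ReplicationGroupDescription="Real-time data store with replication",
    Engine="redis",
    CacheNodeType="cache.r5.large",
    NumCacheClusters=2,            # 1 个主节点 + 1 个副本
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```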

QUESTION 50
A company's application is running on Amazon EC2 instances within an Auto Scaling group
behind an Elastic Load Balancer.
Based on the application's history, the company anticipates a spike in traffic during a holiday
each year.
A solutions architect must design a strategy to ensure that the Auto Scaling group proactively
increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization
exceeds 90%
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected
period of peak demand
C.Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during
the peak demand period
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when
there are auto scaling EC2_ INSTANCE_ LAUNCH events
Answer: B 
公司的应用程序正在Elastic Load Balancer后面的Auto Scaling组内的Amazon EC2实例上运行。
根据该应用程序的历史记录,该公司预计每年假期期间的流量会激增。解决方案架构师必须设计一种策略,以确保Auto Scaling组能够主动增加容量,以最大程度地降低对应用程序用户的性能影响。哪种解决方案可以满足这些要求? 
A.创建Amazon CloudWatch警报，在CPU使用率超过90%时扩展EC2实例
B.在预期的峰值需求期之前创建一个定期计划操作（scheduled action）以扩展Auto Scaling组
C.在高峰需求期间增大Auto Scaling组中EC2实例的最小和最大数量
D.配置Amazon Simple Notification Service（Amazon SNS）通知，在发生Auto Scaling的EC2_INSTANCE_LAUNCH事件时发送警报

Explanation:

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. AWS Auto Scaling refers to a collection of Auto Scaling capabilities across several AWS services, including Amazon EC2 (Amazon EC2 Auto Scaling), Amazon ECS, Amazon DynamoDB, and Amazon Aurora. The scaling options define the triggers and when instances should be provisioned or de-provisioned. There are four scaling options: Maintain (keep a specific or minimum number of instances running), Manual (use a maximum, minimum, or specific number of instances), Scheduled (increase or decrease the number of instances based on a schedule), and Dynamic (scale based on real-time system metrics such as CloudWatch metrics). A scheduled action is the right fit here because the traffic spike is predictable.
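
下面用 boto3 给出一个创建定期计划操作（scheduled action）的最小示例（组名、cron 表达式均为假设值，时间为 UTC），在预期的高峰期之前主动扩容：

```python
import boto3

autoscaling = boto3.client("autoscaling")

# 在每年节假日高峰前定时扩容（组名与 cron 表达式为示例，时间为 UTC）
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",
    ScheduledActionName="holiday-scale-up",
    Recurrence="0 0 20 12 *",   # 每年 12 月 20 日 00:00 执行
    MinSize=10,
    MaxSize=50,
    DesiredCapacity=20,
)
```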

QUESTION 51
A company has a two-tier application architecture that runs in public and private subnets Amazon
EC2 instances running the web application are in the public subnet and a database runs on the
private subnet.
The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this
architecture? (Select TWO.)
A. Create new public and private subnets in the same AZ for high availability
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs
C. Add the existing web application instances to an Auto Scaling group behind an Application Load
Balancer
D. Create new public and private subnets in a new AZ Create a database using Amazon EC2 in one
AZ
E. Create new public and private subnets in the same VPC each in a new AZ Migrate the database
to an Amazon RDS multi-AZ deployment
Answer: BE
公司具有在公共子网和私有子网中运行的两层应用程序架构,运行Web应用程序的Amazon EC2实例位于公共子网中,
而数据库在私有子网中运行。 Web应用程序实例和数据库在单个可用区(AZ)中运行。解决方案架构师应采取哪些步骤组合才能为该架构提供高可用性?
(选择两个。)A.在同一可用区中创建新的公共和私有子网以实现高可用性
B.创建跨越多个可用区的Amazon EC2 Auto Scaling组和Application Load Balancer 
C.将现有Web应用程序实例添加到Application Load Balancer后面的Auto Scaling组中
D.在新的可用区中创建新的公共和私有子网，并在一个可用区中使用Amazon EC2创建数据库
E.在同一VPC中新的可用区内分别创建新的公共和私有子网，并将数据库迁移到Amazon RDS多可用区部署
Explanation:
You would want the EC2 instances to have high availability by placing them in multiple AZs.
不是C,因为您无法将现有实例添加到自动伸缩组。您需要创建启动模板/配置,ASG将从该模板创建实例。 但是您可以做的是从现有EC2实例创建AMI,然后从中创建启动模板/配置。
QUESTION 52
A financial services company has a web application that serves users in the United States and
Europe.
The application consists of a database tier and a web server tier.
The database tier consists of a MySQL database hosted in us-east-1 Amazon Route 53
geoproximity routing is used to direct traffic to instances in the closest Region.
A performance review of the system reveals that European users are not receiving the same level
of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?
A. Migrate the database to Amazon RDS for MySQL.
Configure Multi-AZ in one of the European Regions.
B. Migrate the database to Amazon DynamoDB.
Use DynamoDB global tables to enable replication to additional Regions.
C. Deploy MySQL instances in each Region.
Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary
instance.
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode.
Configure read replicas in one of the European Regions.
Answer: D
一家金融服务公司拥有一个网络应用程序,可为美国和欧洲的用户提供服务。
该应用程序由数据库层和Web服务器层组成。数据库层由us-east-1中托管的MySQL数据库组成。
Amazon Route 53地理邻近路由用于将流量定向到最近的Region中的实例。对该系统的性能检查发现,
欧洲用户所获得的查询性能与美国用户不同。应该对数据库层进行哪些更改以提高性能? 
A.将数据库迁移到Amazon RDS for MySQL。在欧洲地区之一中配置多可用区。 
B.将数据库迁移到Amazon DynamoDB。使用DynamoDB全局表来启用复制到其他区域的功能。 
C.在每个区域中部署MySQL实例。在MySQL前面部署应用程序负载平衡器,以减少主实例上的负载。
D.以MySQL兼容模式将数据库迁移到Amazon Aurora全局数据库。在欧洲地区之一中配置只读副本

Explanation: 这里的问题是欧洲用户对us-east-1中数据库的读取查询延迟较高，两地物理距离很远，需要一种提高欧洲区域读取性能的解决方案。一个Aurora全局数据库由一个主要的AWS区域（用于写入和管理数据）和最多五个只读的次要AWS区域组成。Aurora以通常不到一秒的延迟将数据复制到次要AWS区域。您直接向主要AWS区域中的主数据库实例发出写入操作。
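
下面是一个使用 boto3 创建 Aurora 全局数据库并在欧洲区域添加次要集群的示例草图（集群标识符、区域均为假设值）：

```python
import boto3

rds = boto3.client("rds")

# 基于现有主集群创建 Aurora 全局数据库（标识符、ARN 均为示例值）
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:app-primary",
)

# 在欧洲区域添加只读的次要集群
eu_rds = boto3.client("rds", region_name="eu-west-1")
eu_rds.create_db_cluster(
    DBClusterIdentifier="app-eu-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)
```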

QUESTION 53
A solutions architect is tasked with transferring 750 TB of data from a network-attached file
system located at a branch office to Amazon S3 Glacier.
The solution must avoid saturating the branch office's low-bandwidth internet connection.
What is the MOST cost-effective solution?
A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly.
Create a bucket policy to enforce a VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination.
Create a bucket policy to enforce a VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly.
Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination.
Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
Answer: D
解决方案架构师的任务是将750 TB的数据从分支机构的网络连接文件系统传输到Amazon S3 Glacier。
解决方案必须避免使分支机构的低带宽Internet连接饱和。什么是最具成本效益的解决方案？
A.创建到Amazon S3存储桶的站点到站点VPN隧道,然后直接传输文件。创建存储桶策略以强制执行VPC端点。 
B.订购10台AWS Snowball设备,然后选择一个S3 Glacier保管库作为目的地。创建存储桶策略以强制执行VPC端点。 
C.将网络连接的文件系统安装到Amazon S3并直接复制文件。创建生命周期策略以将S3对象过渡到Amazon S3 Glacier。
D.订购10台AWS Snowball设备,然后选择一个Amazon S3存储桶作为目的地。创建生命周期策略以将S3对象过渡到Amazon S3 Glacier。

Explanation: As the company's internet link is low-bandwidth, uploading directly to Amazon S3 (ready for transition to Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. The Snowball Edge appliance can hold up to 75 TB of data, so 10 devices would be required to migrate 750 TB of data. Snowball moves data into AWS using a hardware device and the data is then copied into an Amazon S3 bucket of your choice. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier.
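
下面用 boto3 给出一个为 S3 存储桶配置生命周期规则、把对象转换到 Glacier 存储类的最小示例（桶名为假设值）：

```python
import boto3

s3 = boto3.client("s3")

# 为存储桶配置生命周期规则：对象落地后立即转入 Glacier（桶名为示例值）
s3.put_bucket_lifecycle_configuration(
    Bucket="my-branch-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # 作用于全部对象
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)
```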

QUESTION 54
A company's production application runs online transaction processing (OLTP) transactions on an
Amazon RDS MySQL DB instance.
The company is launching a new reporting tool that will access the same data.
The reporting tool must be highly available and not impact the performance of the production
application
How can this be achieved?
A. Create hourly snapshots of the production RDS DB instance.
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
C. Create multiple RDS Read Replicas of the production RDS DB instance.
Place the Read Replicas in an Auto Scaling group.
D. Create a Single-AZ RDS Read Replica of the production RDS DB instance.
Create a second Single-AZ RDS Read Replica from the replica.
Answer: B
公司的生产应用程序在Amazon RDS MySQL数据库实例上运行在线事务处理（OLTP）事务。该公司正在上线一个新的报告工具，该工具将访问相同的数据。
报告工具必须具有高可用性，并且不能影响生产应用程序的性能。如何实现？
A.创建生产RDS数据库实例的每小时快照。 B.创建生产RDS数据库实例的多可用区RDS只读副本。
C.创建生产RDS数据库实例的多个RDS只读副本，并将只读副本放置在Auto Scaling组中。
D.创建生产RDS数据库实例的单可用区RDS只读副本，再从该副本创建第二个单可用区RDS只读副本。

Explanation: You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.
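
下面是一个使用 boto3 创建多可用区只读副本的最小示例（实例标识符、实例类型均为假设值）：

```python
import boto3

rds = boto3.client("rds")

# 从生产实例（示例标识符 prod-mysql）创建一个多可用区只读副本供报表工具使用
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-mysql-report-replica",
    SourceDBInstanceIdentifier="prod-mysql",
    DBInstanceClass="db.r5.large",
    MultiAZ=True,
)
```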

QUESTION 55
A company allows its developers to attach existing IAM policies to existing IAM roles to enable
faster experimentation and agility.
However the security operations team is concerned that the developers could attach the existing
administrator policy, which would allow the developers to circumvent any other security policies.
How should a solutions architect address this issue?
A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy
B. Use service control policies to disable IAM activity across all accounts in the organizational unit
C. Prevent the developers from attaching any policies and assign all IAM duties to the security
operations team
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the
administrator policy
Answer: D
公司允许其开发人员将现有IAM策略附加到现有IAM角色,以实现更快的实验和敏捷性。
但是,安全运营团队担心开发人员可以附加现有的管理员策略,这将使开发人员可以规避其他任何安全策略。
解决方案架构师应如何解决此问题? 
A.创建一个Amazon SNS主题,以在开发人员每次创建新策略时发送警报
B.使用服务控制策略来禁用组织单位中所有帐户的IAM活动
C.防止开发人员附加任何策略并分配所有IAM职责给安全操作团队
D。在开发人员IAM角色上设置一个IAM权限边界,该边界明确拒绝附加管理员策略

Explanation: The permissions boundary for an IAM entity (user or role) sets the maximum permissions that the entity can have. This can change the effective permissions for that user or role. The effective permissions for an entity are the permissions that are granted by all the policies that affect the user or role. Within an account, the permissions for an entity can be affected by identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, or session policies.Therefore, the solutions architect can set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

IAM实体(用户或角色)的权限边界设置该实体可以具有的最大权限。这可以更改该用户或角色的有效权限。实体的有效权限是影响用户或角色的所有策略所授予的权限。在帐户中,实体的权限可能会受到基于身份的策略,基于资源的策略,权限边界,组织SCP或会话策略的影响。因此,解决方案架构师可以在开发人员IAM角色上设置IAM权限边界,明确拒绝附加管理员策略
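
下面用 boto3 给出一个权限边界的示例草图（策略名、角色名、账户号均为假设值）：边界策略允许一般操作，但显式拒绝把 AdministratorAccess 附加到任何角色或用户，然后将该边界应用到开发人员角色：

```python
import json
import boto3

iam = boto3.client("iam")

# 权限边界策略：显式拒绝附加 AdministratorAccess（策略名为示例值）
boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": ["iam:AttachRolePolicy", "iam:AttachUserPolicy"],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }
            },
        },
    ],
}

iam.create_policy(PolicyName="DeveloperBoundary", PolicyDocument=json.dumps(boundary))

# 将权限边界应用到开发人员角色（角色名、账户号为示例值）
iam.put_role_permissions_boundary(
    RoleName="developer-role",
    PermissionsBoundary="arn:aws:iam::123456789012:policy/DeveloperBoundary",
)
```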

QUESTION 56
A user is storing a large number of objects on AWS S3. The user wants to implement the search
functionality among the objects. How can the user achieve this?
A. Use the indexing feature of S3.
B. Tag the objects with the metadata to search on that.
C. Use the query functionality of S3.
D. Make your own DB system which stores the S3 metadata for the search functionality.
Answer: D
用户正在AWS S3上存储大量对象。用户想要在对象之间实现搜索功能。
用户如何实现呢？ A.使用S3的索引功能。 B.用元数据标记对象以进行搜索。
C.使用S3的查询功能。 D.构建自己的数据库系统来存储S3元数据，以实现搜索功能

Explanation: 在Amazon Web Services中，AWS S3不提供任何查询功能。要检索特定对象，用户需要知道确切的存储桶/对象键。在这种情况下，建议使用自己的数据库系统来管理S3元数据和键的映射。 Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf

QUESTION 57
After setting up a Virtual Private Cloud (VPC) network, a more experienced cloud engineer
suggests that to achieve low network latency and high network throughput you should look into
setting up a placement group. You know nothing about this, but begin to do some research about
it and are especially curious about its limitations. Which of the below statements is wrong in
describing the limitations of a placement group?
A. Although launching multiple instance types into a placement group is possible, this reduces the
likelihood that the required capacity will be available for your launch to succeed.
B. A placement group can span multiple Availability Zones.
C. You can't move an existing instance into a placement group.
D. A placement group can span peered VPCs
Answer: B
设置了虚拟私有云（VPC）网络后，一位经验更为丰富的云工程师建议，要实现低网络延迟和高网络吞吐量，
您应该考虑设置一个置放群组（placement group）。您对此一无所知，于是开始做一些研究，
并对它的局限性特别好奇。在描述置放群组的局限性时，以下哪种说法是错误的？
A.尽管可以将多种实例类型启动到置放群组中，但这会降低启动所需容量成功的可能性。
B.置放群组可以跨越多个可用区。 C.您不能将现有实例移动到置放群组中。
D.置放群组可以跨越对等的VPC

参考Placement GroupEC2置放群组部分

Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. 置放群组是单个可用区内实例的逻辑分组。使用置放群组可使应用程序参与低延迟的10 Gbps网络。建议将置放群组用于受益于低网络延迟、高网络吞吐量或两者兼有的应用程序。要为您的置放群组提供最低的延迟和最高的每秒数据包网络性能，请选择一个支持增强联网的实例类型。置放群组具有以下限制：您为置放群组指定的名称在您的AWS账户内必须唯一。置放群组不能跨越多个可用区。尽管可以将多种实例类型启动到置放群组中，但这会降低启动所需容量成功的可能性，我们建议为置放群组中的所有实例使用相同的实例类型。您无法合并置放群组，而必须先终止一个置放群组中的实例，然后将这些实例重新启动到另一个置放群组中。置放群组可以跨越对等的VPC，但您无法在对等VPC中的实例之间获得完整的对分带宽（full-bisection bandwidth）。有关VPC对等连接的更多信息，请参阅Amazon VPC用户指南中的VPC对等连接。您不能将现有实例移动到置放群组中，但可以从现有实例创建AMI，然后从该AMI将新实例启动到置放群组中。 Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

QUESTION 58
What is a placement group in Amazon EC2?
A. It is a group of EC2 instances within a single Availability Zone.
B. It is the edge location of your web content.
C. It is the AWS region where you run the EC2 instance of your web content.
D. It is a group used to span multiple Availability Zones.
Answer: A
Amazon EC2中的置放群组是什么？ A.它是单个可用区内的一组EC2实例。
B.它是您的Web内容的边缘位置。 C.它是您运行Web内容EC2实例的AWS区域。 D.它是一个用于跨越多个可用区的组

参考Placement GroupEC2置放群组部分

Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

QUESTION 59
You are migrating an internal server on your DC to an EC2 instance with EBS volume. Your
server disk usage is around 500GB so you just copied all your data to a 2TB disk to be used with
AWS Import/Export. Where will the data be imported once it arrives at Amazon?
A. to a 2TB EBS volume
B. to an S3 bucket with 2 objects of 1TB
C. to an 500GB EBS volume
D. to an S3 bucket as a 2TB snapshot
Answer: B
您正在将DC上的内部服务器迁移到具有EBS卷的EC2实例。服务器磁盘使用量约为500GB,
因此您仅将所有数据复制到2TB磁盘上即可与AWS Import / Export结合使用。数据到达亚马逊后将被导入哪里?
A.导入到一个2TB的EBS卷 B.导入到S3存储桶中，存为2个1TB的对象 C.导入到一个500GB的EBS卷 D.作为一个2TB快照导入到S3存储桶

Explanation: 取决于存储设备的容量是小于等于1 TB还是大于1 TB，导入Amazon EBS的结果将有所不同。Amazon EBS快照的最大大小为1 TB，因此，如果设备映像大于1 TB，则会对映像进行分块并将其存储在Amazon S3上。目标位置是根据设备的总容量而不是设备上的数据量确定的。 Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html

QUESTION 60
A client needs you to import some existing infrastructure from a dedicated hosting provider to
AWS to try and save on the cost of running his current website. He also needs an automated
process that manages backups, software patching, automatic failure detection, and recovery. You
are aware that his existing set up currently uses an Oracle database. Which of the following AWS
databases would be best for accomplishing this task?
A. Amazon RDS
B. Amazon Redshift
C. Amazon SimpleDB
D. Amazon ElastiCache
Answer: A
客户需要您将一些现有的基础架构从专用托管提供商迁移到AWS，以尝试节省运行其当前网站的成本。
他还需要一个自动过程来管理备份,软件修补,自动故障检测和恢复。
您知道他的现有设置当前使用Oracle数据库。以下哪个AWS数据库最适合完成此任务? 
A.Amazon RDS B.Amazon Redshift C.Amazon SimpleDB D.Amazon ElastiCache

Explanation: Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user- defined retention period and enabling point-in-time recovery. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

QUESTION 61
True or false: A VPC contains multiple subnets, where each subnet can span multiple Availability
Zones.
A. This is true only if requested during the set-up of VPC.
B. This is true.
C. This is false.
D. This is true only for US regions.
Answer: C
是非题:VPC包含多个子网,其中每个子网可以跨越多个可用区。
答:只有在设置VPC期间提出要求时,这才是正确的。 B.是真的。 
C.这是错误的。 D.这仅适用于美国地区

Explanation: A VPC can span several Availability Zones. In contrast, a subnet must reside within a single Availability Zone. Reference: https://aws.amazon.com/vpc/faqs/

QUESTION 62
An edge location refers to which, Amazon Web Service?
A. An edge location refers to the network configured within a Zone or Region
B. An edge location is an AWS Region
C. An edge location is the location of the data center used for Amazon CloudFront.
D. An edge location is a Zone within an AWS Region
Answer: C
边缘位置（edge location）指的是哪个Amazon Web Service？
A.边缘位置是指在可用区或区域内配置的网络 B.边缘位置是一个AWS区域
C.边缘位置是用于Amazon CloudFront的数据中心的位置。 D.边缘位置是AWS区域内的一个可用区

参考 CloudFront CDN部分

Explanation: Amazon CloudFront is a content distribution network. A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the world. The location of the data center used for CDN is called an edge location. Amazon CloudFront can cache static content at each edge location. This means that your popular static content (e.g., your site's logo, navigational images, cascading style sheets, JavaScript code, etc.) will be available at a nearby edge location for the browsers to download with low latency and improved performance for viewers. Caching popular static content with Amazon CloudFront also helps you offload requests for such files from your origin server. CloudFront serves the cached copy when available and only makes a request to your origin server if the edge location receiving the browser's request does not have a copy of the file. Reference: http://aws.amazon.com/cloudfront/

QUESTION 63
You are looking at ways to improve some existing infrastructure as it seems a lot of engineering
resources are being taken up with basic management and monitoring tasks and the costs seem
to be excessive. You are thinking of deploying Amazon ElastiCache to help. Which of the
following statements is true in regards to ElastiCache?
A. You can improve load and response times to user actions and queries however the cost
associated with scaling web applications will be more.
B. You can't improve load and response times to user actions and queries but you can reduce the
cost associated with scaling web applications.
C. You can improve load and response times to user actions and queries however the cost
associated with scaling web applications will remain the same.
D. You can improve load and response times to user actions and queries and also reduce the cost
associated with scaling web applications.
Answer: D
您正在寻找改善现有基础架构的方法,因为基本的管理和监视任务似乎占用了大量工程资源,而且成本似乎过高。
您正在考虑部署Amazon ElastiCache来提供帮助。关于ElastiCache，以下哪个陈述是正确的？
答:您可以改善对用户操作和查询的负载和响应时间,但是与扩展Web应用程序相关的成本会更高。 
B.您无法改善对用户操作和查询的负载和响应时间,但可以减少与扩展Web应用程序相关的成本。
C.您可以改善对用户操作和查询的负载和响应时间，但是与扩展Web应用程序相关的成本将保持不变。
D.您可以改善对用户操作和查询的负载和响应时间,还可以减少与扩展Web应用程序相关的成本。

ElastiCache是AWS提供的分布式对象缓存系统，可以有效地提升现有应用程序的性能。利用ElastiCache，用户可以从高吞吐、低延迟的内存数据存储中检索数据。

Explanation: Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications. Reference: https://aws .amazon.com/elasticache/faqs/

QUESTION 64
Do Amazon EBS volumes persist independently from the running life of an Amazon EC2
instance?
A. Yes, they do but only if they are detached from the instance.
B. No, you cannot attach EBS volumes to an instance.
C. No, they are dependent.
D. Yes, they do.
Answer: D
Amazon EBS卷是否独立于Amazon EC2实例的运行寿命而持久存在?
A.是的，但只有在与实例分离时才会持久存在。 B.不能，您不能将EBS卷附加到实例。 C.不，它们是相互依赖的。 D.是的，它们会持久存在

Explanation: Amazon EBS卷的行为类似于可以附加到单个实例的原始、未格式化的外部块设备。该卷的持久性独立于Amazon EC2实例的运行寿命。 Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html

QUESTION 65
Your supervisor has asked you to build a simple file synchronization service for your department.
He doesn't want to spend too much money and he wants to be notified of any changes to files by
email. What do you think would be the best Amazon service to use for the email solution?
A. Amazon SES
B. Amazon CloudSearch
C. Amazon SWF
D. Amazon AppStream
Answer: A
您的主管要求您为部门建立一个简单的文件同步服务。他不想花太多钱,而且想通过电子邮件将文件更改通知他。
您认为将哪种最佳的Amazon服务用于电子邮件解决方案?

Explanation: File change notifications can be sent via email to users following the resource with Amazon Simple Email Service (Amazon SES), an easy-to-use, cost-effective email solution. Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_filesync_08.pdf

QUESTION 66
A product team is creating a new application that will store a large amount of data.
The data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances.
The application team believes the amount of space needed will continue to grow for the next 6
months.
Which set of actions should a solutions architect take to support these needs'?

A. Store the data in an Amazon EBS volume.
Mount the EBS volume on the application instances
B. Store the data in an Amazon EFS file system.
Mount the file system on the application instances.
C. Store the data in Amazon S3 Glacier.
Update the vault policy to allow access to the application instances.
D. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-lA).
Update the bucket policy to allow access to the application instances.
Answer: B
产品团队正在创建一个新应用程序,该应用程序将存储大量数据。数据将每小时进行分析,
并由多个Amazon EC2 Linux实例进行修改。应用程序团队认为,所需的空间量将在未来6个月内继续增长。
解决方案架构师应采取哪些行动来满足这些需求? 
A.将数据存储在Amazon EBS卷中，并将EBS卷挂载到应用程序实例上。
B.将数据存储在Amazon EFS文件系统中，并在应用程序实例上挂载该文件系统。
C.将数据存储在Amazon S3 Glacier中，更新保管库策略以允许应用程序实例访问。
D.将数据存储在Amazon S3 Standard-Infrequent Access（S3 Standard-IA）中，更新存储桶策略以允许应用程序实例访问

Explanation: Amazon Elastic File System（Amazon EFS）提供了一个简单、可扩展、完全托管的弹性NFS文件系统，可与AWS云服务和本地资源一起使用。它可以按需扩展到PB级而不会中断应用程序，并在添加和删除文件时自动增长和收缩，从而无需预先配置和管理容量来适应增长。EFS支持同时被多个Amazon EC2 Linux实例挂载，符合本题需求。

QUESTION 67
A gaming company has multiple Amazon EC2 instances in a single Availability Zone for its
multiplayer game that communicates with users on Layer 4.
The chief technology officer (CTO) wants to make the architecture highly available and cost-
effective.
What should a solutions architect do to meet these requirements? (Select TWO.)
A. Increase the number of EC2 instances.
B. Decrease the number of EC2 instances
C. Configure a Network Load Balancer in front of the EC2 instances.
D. Configure an Application Load Balancer in front of the EC2 instances
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically.
Answer: CE
一家游戏公司的单个多人游戏在一个可用区中具有多个Amazon EC2实例,该实例与第4层上的用户进行通信。
首席技术官（CTO）希望使该架构高度可用且具有成本效益。解决方案架构师应该怎么做才能满足这些要求？
(选择两个。)A.增加EC2实例的数量。 B.减少EC2实例的数量
C.在EC2实例的前面配置网络负载平衡器。 
D.在EC2实例之前配置应用程序负载平衡器
E.配置一个Auto Scaling组以自动在多个可用区中添加或删除实例。

Explanation: 解决方案架构师必须为架构提供高可用性，并确保其具有成本效益。要实现高可用性，应创建一个Amazon EC2 Auto Scaling组，以跨多个可用区添加和删除实例。为了将流量分配给实例，该架构应使用工作在第4层的网络负载均衡器（Network Load Balancer）。该架构还将具有成本效益，因为Auto Scaling组将确保根据需求运行正确数量的实例。 CORRECT: "Configure a Network Load Balancer in front of the EC2 instances" is a correct answer. CORRECT: "Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically" is also a correct answer. INCORRECT: "Increase the number of EC2 instances" is incorrect as this is not the most cost-effective option; Auto Scaling should be used to maintain the right number of active instances. INCORRECT: "Decrease the number of EC2 instances" is incorrect as this does not improve availability. INCORRECT: "Configure an Application Load Balancer in front of the EC2 instances" is incorrect as an ALB operates at Layer 7 rather than Layer 4. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
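
下面是一个使用 boto3 创建网络负载均衡器（第 4 层）的最小示例（名称、子网 ID 均为假设值）：

```python
import boto3

elbv2 = boto3.client("elbv2")

# 创建面向 Internet 的网络负载均衡器（Layer 4），子网 ID 为示例值
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
print(nlb["LoadBalancers"][0]["DNSName"])
```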

QUESTION 68
A company hosts an application on multiple Amazon EC2 instances.
The application processes messages from an Amazon SQS queue, writes to an Amazon RDS
table, and deletes the message from the queue. Occasional duplicate records are found in the
RDS table.
The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue
B. Use the AddPermission API call to add appropriate permissions
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout
Answer: D
一家公司在多个Amazon EC2实例上托管应用程序。该应用程序处理来自Amazon SQS队列的消息，将结果写入Amazon RDS表，
然后从队列中删除该消息。偶尔会在RDS表中发现重复记录。SQS队列本身不包含任何重复的消息。
解决方案架构师应该做什么以确保每条消息仅被处理一次？
A.使用CreateQueue API调用创建新队列B.使用AddPermission API调用添加适当的权限
C.使用ReceiveMessage API调用设置适当的等待时间。 D.使用ChangeMessageVisibility API调用来增加可见性超时

Explanation: Keyword: the SQS queue writes to an Amazon RDS table. From this, Option D is the best fit and the other options are ruled out [Option A - you can't just introduce one more queue alongside the existing one; Option B - only manages permissions; Option C - only retrieves messages]. FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be affected adversely when processing the same message more than once). CreateQueue - You can't change the queue type after you create it and you can't convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue. AddPermission - When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue. ReceiveMessage - Retrieves one or more messages (up to 10) from the specified queue. FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it.

关键词：SQS队列的消息写入Amazon RDS。由此可知选项D最合适，其他选项可以排除[选项A：您不能在现有队列之外再引入一个队列；选项B：仅涉及权限；选项C：仅检索消息]。FIFO队列设计为从不引入重复的消息。但是，您的消息生产者可能会在某些情况下引入重复项：例如，如果生产者发送了一条消息，没有收到响应，然后重新发送了同一条消息。Amazon SQS API提供重复数据删除功能，可防止消息生产者发送重复数据。消息生产者引入的所有重复项将在5分钟的重复数据删除间隔内删除。对于标准队列，您可能偶尔会收到消息的重复副本（至少一次传递）。如果使用标准队列，则必须将应用程序设计为幂等的（也就是说，在多次处理同一条消息时，它们不会受到不利影响）。CreateQueue：创建队列后无法更改队列类型，也无法将现有标准队列转换为FIFO队列。您必须为应用程序创建新的FIFO队列，或者删除现有的标准队列并将其重新创建为FIFO队列。AddPermission：创建队列后，您对该队列具有完全控制访问权限，只有您（队列的所有者）才能授予或拒绝该队列的权限。ReceiveMessage：从指定的队列中检索一条或多条消息（最多10条）。FIFO队列提供精确一次的处理，这意味着每条消息仅传递一次并保持可用状态，直到使用者处理并删除它。
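
下面用 boto3 给出一个调整可见性超时的示例草图（队列 URL 为假设值）：既可以修改队列级别的默认可见性超时，也可以针对正在处理的单条消息调用 ChangeMessageVisibility：

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # 示例队列 URL

# 方式一：提高队列级别的默认可见性超时，给消费者足够时间写入 RDS 并删除消息
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"VisibilityTimeout": "120"})

# 方式二：针对正在处理的单条消息临时延长可见性超时
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=120,
    )
    # ...处理消息并写入 RDS...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```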

QUESTION 69
A solutions architect is designing an application for a two-step order process.
The first step is synchronous and must return to the user with little latency.
The second step takes longer, so it will be implemented in a separate component Orders must be
processed exactly once and in the order in which they are received.
How should the solutions architect integrate these components?
A. Use Amazon SQS FIFO queues.
B. Use an AWS Lambda function along with Amazon SQS standard queues
C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic
D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.
Answer: A
解决方案架构师正在为一个两步下单流程设计应用程序。第一步是同步的，必须以极低的延迟返回给用户。
第二步需要花费更长的时间，因此将在单独的组件中实现。订单必须仅被处理一次，并且按照接收到的顺序处理。
解决方案架构师应如何集成这些组件?
A.使用Amazon SQS FIFO队列。 B.将AWS Lambda函数与Amazon SQS标准队列一起使用
C.创建SNS主题并将Amazon SQS FIFO队列订阅该主题D.创建SNS主题并将Amazon SQS Standard队列订阅该主题

Explanation: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

标准队列提供至少一次传递，这意味着每条消息至少会被传递一次。FIFO队列提供精确一次的处理，这意味着每条消息仅传递一次并保持可用状态，直到使用者处理并删除它，也不会将重复项引入队列。
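
下面是一个使用 boto3 创建并使用 FIFO 队列的最小示例（队列名、消息内容均为假设值）：

```python
import boto3

sqs = boto3.client("sqs")

# 创建 FIFO 队列（名称必须以 .fifo 结尾），开启基于内容的去重
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# 同一 MessageGroupId 内的消息严格按顺序传递，且仅被处理一次
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "1001"}',
    MessageGroupId="orders",
)
```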

QUESTION 70
A solutions architect is designing a high performance computing (HPC) workload on Amazon
EC2.
The EC2 instances need to communicate to each other frequently and require network
performance with low latency and high throughput.
Which EC2 configuration meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone
B. Launch the EC2 instances in a spread placement group in one Availability Zone
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones
Answer: A
解决方案架构师正在Amazon EC2上设计高性能计算（HPC）工作负载。EC2实例需要频繁地相互通信，并且需要低延迟和高吞吐量的网络性能。
哪种EC2配置符合这些要求？
A.在一个可用区内的集群置放群组中启动EC2实例
B.在一个可用区内的分布置放群组中启动EC2实例
C.在两个区域中的Auto Scaling组中启动EC2实例并将VPC对等连接
D.在跨越多个可用区的Auto Scaling组中启动EC2实例

Explanation: 当启动新的EC2实例时,EC2服务将尝试以所有实例都分布在基础硬件中的方式放置实例,以最大程度地减少相关故障。您可以使用放置组来影响一组相互依赖的实例的放置,以满足工作负载的需求。根据工作负载的类型,可以使用以下放置策略之一创建放置组:集群。在可用区中将实例打包在一起。该策略使工作负载能够实现HPC应用程序中典型的紧密耦合的节点到节点通信所必需的低延迟网络性能。划分 。将您的实例分布在多个逻辑分区上,这样一个分区中的实例组就不会与不同分区中的实例组共享底层硬件。大型分布式和复制工作负载(例如Hadoop,Cassandra和Kafka)通常使用此策略。传播 。严格将一小组实例放置在不同的基础硬件上,以减少相关的故障。对于这种情况,应使用群集放置组,因为这是为HPC应用程序提供低延迟网络性能的最佳选择。
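
下面用 boto3 给出一个创建 cluster 置放群组并在其中启动实例的最小示例（AMI ID、实例类型均为假设值）：

```python
import boto3

ec2 = boto3.client("ec2")

# 创建 cluster 置放群组，并把 HPC 实例启动到该群组中（AMI ID、实例类型为示例值）
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```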

QUESTION 71
A company is planning to use Amazon S3 to store images uploaded by its users.
The images must be encrypted at rest in Amazon S3.
The company does not want to spend time managing and rotating the keys, but it does want to
control who can access those keys.
What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
Answer: D
一家公司计划使用Amazon S3来存储其用户上传的图像。图像必须在Amazon S3中静态加密。
该公司不想花费时间来管理和轮换密钥，但是它希望控制谁可以访问这些密钥。解决方案架构师应使用什么来完成此任务？
A.使用存储在S3存储桶中的密钥进行服务器端加密 B.使用客户提供的密钥进行服务器端加密（SSE-C）
C.使用Amazon S3托管密钥进行服务器端加密（SSE-S3） D.使用AWS KMS托管密钥进行服务器端加密（SSE-KMS）

Explanation: SSE-KMS要求AWS管理数据密钥，但您需要管理AWS KMS中的客户主密钥（CMK）。您可以在账户中选择客户托管的CMK或适用于Amazon S3的AWS托管的CMK。客户托管的CMK是您在AWS账户中创建、拥有和管理的CMK。您可以完全控制这些CMK，包括建立和维护其密钥策略、IAM策略和授权，启用和禁用它们，轮换其加密材料，添加标签，创建引用CMK的别名以及安排删除CMK。对于这种情况，解决方案架构师应将SSE-KMS与客户托管的CMK结合使用。这样，KMS将管理数据密钥，而公司可以通过配置密钥策略来定义谁可以访问密钥。 CORRECT: "Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)" is the correct answer. INCORRECT: "Server-Side Encryption with keys stored in an S3 bucket" is incorrect as you cannot store your keys in a bucket with server-side encryption. INCORRECT: "Server-Side Encryption with Customer-Provided Keys (SSE-C)" is incorrect as the company does not want to manage the keys. INCORRECT: "Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)" is incorrect as the company needs to manage access control for the keys, which is not possible when they're managed by Amazon. References: https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
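
下面是一个使用 boto3 配置 SSE-KMS 的示例草图（桶名、KMS Key ARN 均为假设值）：既可以设置存储桶默认加密，也可以在上传对象时显式指定：

```python
import boto3

s3 = boto3.client("s3")

# 为存储桶设置默认加密为 SSE-KMS，使用客户托管 CMK（桶名与 Key ARN 均为示例值）
s3.put_bucket_encryption(
    Bucket="user-images-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            }
        }]
    },
)

# 上传对象时也可以显式指定 SSE-KMS
s3.put_object(
    Bucket="user-images-bucket",
    Key="photos/cat.jpg",
    Body=b"...",
    ServerSideEncryption="aws:kms",
)
```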

QUESTION 72
An Amazon EC2 administrator created the following policy associated with an IAM group
containing several users.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "10.100.100.0/24"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}
What is the effect of this policy?
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
Answer: C
这项策略有什么作用？ A.用户可以终止除us-east-1之外的任何AWS区域中的EC2实例。
B.用户可以在us-east-1区域终止IP地址为10.100.100.1的EC2实例。
C.当用户的源IP为10.100.100.254时，用户可以在us-east-1区域终止EC2实例。
D.当用户的源IP为10.100.100.254时，用户无法在us-east-1区域终止EC2实例。

Explanation: What the policy means:

  1. Allow termination of any instance if the user's source IP address is 10.100.100.254.
  2. Deny termination of instances that are not in the us-east-1 region. Combining these two, you get "Allow instance termination in the us-east-1 region if the user's source IP address is 10.100.100.254. Deny termination operations in other regions."

如果用户的源IP地址为10.100.100.254，则允许终止任何实例。

拒绝终止不在us-east-1区域的实例。结合这两者，您将获得"如果用户的源IP地址为10.100.100.254，则允许在us-east-1区域中终止实例，并拒绝在其他区域上的终止操作"。

QUESTION 73
A company is running an ecommerce application on Amazon EC2.
The application consists of a stateless web tier that requires a minimum of 10 instances, and a
peak of 250 instances to support the application's usage.
The application requires 50 instances 80% of the time.
Which solution should be used to minimize costs?
A. Purchase Reserved Instances to cover 250 instances
B. Purchase Reserved Instances to cover 80 instances.
Use Spot Instances to cover the remaining instances
C. Purchase On-Demand Instances to cover 40 instances.
Use Spot Instances to cover the remaining instances
D. Purchase Reserved Instances to cover 50 instances.
Use On-Demand and Spot Instances to cover the remaining instances
Answer: D
一家公司正在Amazon EC2上运行电子商务应用程序。该应用程序由一个无状态Web层组成,
该层至少需要10个实例,并且最多需要250个实例来支持该应用程序的使用。该应用程序在80%的时间内需要50个实例。
应该使用哪种解决方案以最小化成本?
A.购买保留实例以覆盖250个实例
B.购买保留实例以覆盖80个实例。使用竞价型实例覆盖其余实例
C.按需购买实例来覆盖40个实例。使用竞价型实例覆盖其余实例
D.购买预留实例以覆盖50个实例。使用按需实例和竞价型实例覆盖其余实例

Reserved Instances（预留实例）- EC2 RI可以提供折扣的每小时费率，并为EC2实例提供可选的容量预留。当EC2实例使用情况的属性与活动RI的属性匹配时，AWS Billing会自动应用RI的折扣费率。如果指定了可用区，则EC2会保留与RI属性匹配的容量，通过运行与这些属性匹配的实例即可自动利用RI的容量预留。您也可以选择放弃容量预留，购买作用范围为区域（Region）的RI。范围为区域的RI会自动将折扣应用于该区域中各可用区和各实例大小的实例使用情况，使您更容易利用RI的折扣费率。在本题中，50个RI可以覆盖80%时间内都需要的50个实例。

On-Demand Instance -按需实例使您可以按小时或秒(最少60秒)来支付计算能力,而无需长期承诺。这使您摆脱了计划,购买和维护硬件的成本和复杂性,并将通常较大的固定成本转换为较小的可变成本。 以下价格包括在指定操作系统上运行私有和公共AMI的成本(“ Windows用法”价格适用于Windows Server 2003 R2, 2008、2008 R2、2012、2012 R2、2016和2019)。 Amazon还为您提供了运行带有SQL的Microsoft Windows的Amazon EC2的其他实例 服务器,运行SUSE Linux Enterprise Server的Amazon EC2,运行Red Hat Enterprise Linux的Amazon EC2和运行IBM的Amazon EC2的价格不同。

Spot Instances 竞价型实例- 竞价型实例是未使用的EC2实例,其价格低于按需价格。由于竞价型实例使您能够以折扣价请求未使用的EC2实例,因此可以显着降低Amazon EC2成本。竞价型实例的每小时价格称为竞价价格。每个可用区中每种实例类型的竞价价格由Amazon EC2设置,并根据竞价实例的长期供需情况逐步调整。你的 竞价型实例在容量可用时运行,并且您的请求的每小时最高价格超过竞价价格。

QUESTION 74
Does DynamoDB support in-place atomic updates?
A. Yes
B. No
C. It does support in-place non-atomic updates
D. It is not defined
Answer: A
DynamoDB是否支持就地原子更新? A.是B.否C.它确实支持就地非原子更新D.未定义

Explanation: DynamoDB supports in-place atomic updates. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters

QUESTION 75
Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company's offices. She needs you to make sure that the
communication between the VPNs is secure. Which of the following services would be best for
providing a low-cost hub-and-spoke model for primary or backup connectivity between these
remote offices?
A. Amazon CloudFront
B. AWS Direct Connect
C. AWS CloudHSM
D. AWS VPN CloudHub
Answer: D
您的经理刚刚授予您访问其他人最近在公司所有办公室之间建立的多个VPN连接的权限。
她需要您确保VPN之间的通信是安全的。
以下哪项服务最适合为这些远程办公室之间的主要备份连接提供低成本的中心辐射型模型?
A.Amazon CloudFront B.AWS Direct Connect C.AWS CloudHSM D.AWS VPN CloudHub

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html

QUESTION 76
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
Amazon EC2 provides a ___. It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.
A. web database
B. .NET framework
C. Query API
D. C library
Answer: C
Explanation:
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the
HTTP verbs GET or POST and a Query parameter named Action.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html

QUESTION 77
In Amazon AWS, which of the following statements is true of key pairs?
A. Key pairs are used only for Amazon SDKs.
B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
D. Key pairs are used for all Amazon services.
Answer: B
在Amazon AWS中，以下关于密钥对的哪个说法是正确的？
A.密钥对仅用于Amazon SDK。 B.密钥对仅用于Amazon EC2和Amazon CloudFront。
C.密钥对仅用于Elastic Load Balancing和AWS IAM。 D.密钥对用于所有Amazon服务

Explanation: Key pairs consist of a public and private key, where you use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront. Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

QUESTION 78
Does Amazon DynamoDB support both increment and decrement atomic operations?
A. Only increment, since decrements are inherently impossible with DynamoDB's data model.
B. No, neither increment nor decrement operations.
C. Yes, both increment and decrement operations.
D. Only decrement, since increment are inherently impossible with DynamoDB's data model.
Answer: C
Amazon DynamoDB是否同时支持递增和递减原子操作?
A.仅增量,因为DynamoDB的数据模型本来就不可能减小。 B.不,既不递增也不递减操作。
C.是的,增量和减量操作都可以。 D.仅减少,因为DynamoDB的数据模型本来就不可能增加。

Explanation: Amazon DynamoDB supports increment and decrement atomic operations. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html

QUESTION 79
An organization has three separate AWS accounts, one each for development, testing, and
production. The organization wants the testing team to have access to certain AWS resources in
the production account. How can the organization achieve this?
A. It is not possible to access resources of one account with another account.
B. Create the IAM roles with cross account access.
C. Create the IAM user in a test account, and allow it access to the production environment with the
IAM policy.
D. Create the IAM users with cross account access.
Answer: B
一个组织拥有三个独立的AWS账户,每个账户分别用于开发,测试和生产。
组织希望测试团队可以访问生产帐户中的某些AWS资源。组织如何实现这一目标?
A.无法使用一个账户访问另一个账户的资源。 B.创建具有跨账户访问权限的IAM角色。
C.在测试账户中创建IAM用户，并通过IAM策略允许其访问生产环境。 D.创建具有跨账户访问权限的IAM用户

Explanation: 一个组织拥有多个AWS账户，以将开发环境与测试或生产环境隔离开。有时一个账户中的用户需要访问另一个账户中的资源，例如将更新从开发环境推广到生产环境。在这种情况下，具有跨账户访问权限的IAM角色将提供解决方案。跨账户访问使一个账户可以与另一个AWS账户中的用户共享对其资源的访问。 Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

QUESTION 80
You need to import several hundred megabytes of data from a local Oracle database to an
Amazon RDS DB instance. What does AWS recommend you use to accomplish this?
A. Oracle export/import utilities
B. Oracle SQL Developer
C. Oracle Data Pump
D. DBMS_FILE_TRANSFER
Answer: C
您需要将数百兆字节的数据从本地Oracle数据库导入到Amazon RDS数据库实例。 AWS建议您使用什么来完成此任务? 
A. Oracle导出/导入实用程序 B. Oracle SQL Developer C. Oracle Data Pump D. DBMS_FILE_TRANSFER

Explanation: 如何将数据导入Amazon RDS数据库实例取决于您拥有的数据量以及数据库中数据库对象的数量和种类。例如，您可以使用Oracle SQL Developer导入一个20 MB的简单数据库；而对于复杂的数据库或大小为几百兆字节乃至几TB的数据库，则应使用Oracle Data Pump导入。 Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

QUESTION 81
A user has created an EBS volume with 1000 IOPS. What is the average IOPS that the user will
get for most of the year as per the EC2 SLA if the volume is attached to an EBS-optimized
instance?
A. 950
B. 990
C. 1000
D. 900
Answer: D
用户创建了一个具有1000 IOPS的EBS卷。如果将该卷附加到EBS优化实例,则根据EC2 SLA,用户一年中大部分时间将获得的平均IOPS是多少?

Explanation: As per the AWS SLA, if the volume is attached to an EBS-optimized instance, then Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year. Thus, if the user has created a volume of 1000 IOPS, the user will get a minimum of 900 IOPS for 99.9% of the year.

根据AWS SLA,如果将卷附加到EBS优化的实例,则预配置IOPS卷旨在在一年中99.9%的时间内提供与预配置IOPS相差不超过10%的性能。因此,如果用户创建了1000 IOPS的卷,则在一年中99.9%的时间内至少可以获得900 IOPS。

Reference: http://aws.amazon.com/ec2/faqs/

QUESTION 82
You need to migrate a large amount of data into the cloud that you have stored on a hard disk
and you decide that the best way to accomplish this is with AWS Import/Export and you mail the
hard disk to AWS. Which of the following statements is incorrect in regards to AWS
Import/Export?
A. It can export from Amazon S3
B. It can Import to Amazon Glacier
C. It can export from Amazon Glacier.
D. It can Import to Amazon EBS
Answer: C
您需要将存储在硬盘上的大量数据迁移到云中,然后确定实现此目标的最佳方法是使用AWS Import / Export,
然后将硬盘邮寄到AWS。关于AWS Import / Export,以下哪个陈述不正确? 
A.可以从Amazon S3导出B。可以导入到Amazon Glacier
C。可以从Amazon Glacier导出。 D.它可以导入到Amazon EBS

Explanation: AWS Import/Export supports: Import to Amazon S3, Export from Amazon S3, Import to Amazon EBS, Import to Amazon Glacier. AWS Import/Export does not currently support export from Amazon EBS or Amazon Glacier. Reference: https://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html

QUESTION 83
You are in the process of creating a Route 53 DNS failover to direct traffic to two EC2 zones.
Obviously, if one fails, you would like Route 53 to direct traffic to the other region. Each region
has an ELB with some instances being distributed. What is the best way for you to configure the
Route 53 health check?
A. Route 53 doesn't support ELB with an internal health check. You need to create your own Route
53 health check of the ELB
B. Route 53 natively supports ELB with an internal health check. Turn "Evaluate target health" off
and "Associate with Health Check" on and R53 will use the ELB's internal health check.
C. Route 53 doesn't support ELB with an internal health check. You need to associate your resource
record set for the ELB with your own health check
D. Route 53 natively supports ELB with an internal health check. Turn "Evaluate target health" on
and "Associate with Health Check" off and R53 will use the ELB's internal health check.
Answer: D
您正在创建Route 53 DNS故障转移,以将流量定向到两个EC2区域。
显然,如果一个失败,您希望Route 53将流量引导到另一个区域,每个区域都有一个ELB,
其中分布了一些实例。什么是配置Route 53健康检查的最佳方法?
A. Route 53不支持带有内部运行状况检查的ELB。您需要创建自己的针对ELB的Route 53运行状况检查。
B. Route 53本机支持带有内部运行状况检查的ELB。关闭“评估目标运行状况”并打开“与运行状况检查关联”,Route 53将使用ELB的内部运行状况检查。
C. Route 53不支持带有内部运行状况检查的ELB。您需要将ELB的资源记录集与您自己的运行状况检查相关联。
D. Route 53本机支持带有内部运行状况检查的ELB。打开“评估目标运行状况”并关闭“与运行状况检查关联”,Route 53将使用ELB的内部运行状况检查。

Explanation: With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. When you enable this feature, Route 53 uses health checks (regularly making Internet requests to your application's endpoints from multiple locations around the world) to determine whether each endpoint of your application is up or down. To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set the “Evaluate Target Health” parameter to true. Route 53 creates and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB.

借助DNS故障转移,Amazon Route 53可以帮助检测您的网站故障,并将最终用户重定向到应用程序正常运行的其他位置。启用此功能后,Route 53会使用运行状况检查(定期从世界各地的多个位置向应用程序的端点发出Internet请求)来确定应用程序的每个端点是正常还是故障。要为ELB端点启用DNS故障转移,请创建一个指向ELB的别名记录,并将“评估目标运行状况”参数设置为true。Route 53会自动为ELB创建和管理运行状况检查。您不需要创建自己的针对ELB的Route 53运行状况检查,也不需要将ELB的资源记录集与您自己的运行状况检查相关联,因为Route 53会自动将其与代您管理的运行状况检查相关联。ELB运行状况检查还将继承该ELB后端实例的运行状况。

Reference: http://aws.amazon.com/about-aws/whats-new/2013/05/30/amazon-route-53-adds-elb-integration-for-dns-failover/
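
A hedged boto3 sketch of the alias record described above, with `EvaluateTargetHealth` turned on; the hosted zone ID, domain name, and ELB DNS name/zone ID are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# All identifiers below are placeholders for illustration.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's canonical hosted zone ID
                    "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,      # let Route 53 use the ELB's health status
                },
            },
        }]
    },
)
```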

QUESTION 84
A user wants to use an EBS-backed Amazon EC2 instance for a temporary job. Based on the
input data, the job is most likely to finish within a week. Which of the following steps should be
followed to terminate the instance automatically once the job is finished?
A. Configure the EC2 instance with a stop instance to terminate it.
B. Configure the EC2 instance with ELB to terminate the instance when it remains idle.
C. Configure the CloudWatch alarm on the instance that should perform the termination action once
the instance is idle.
D. Configure the Auto Scaling schedule activity that terminates the instance after 7 days.

Answer: C
用户希望将EBS支持的Amazon EC2实例用于临时作业。 根据输入的数据,该工作最有可能在一周之内完成。 
作业完成后,应遵循以下哪些步骤自动终止实例? 
A. 使用停止实例配置EC2实例以终止它。 B. 用ELB配置EC2实例,使其在空闲时终止该实例。 C. 在实例上配置CloudWatch警报,当实例闲置时执行终止操作。
D.配置Auto Scaling计划活动,该活动将在7天后终止实例。

Explanation: Auto Scaling can start and stop instances at a pre-defined time. Here, the total running time is unknown, so the user has to use a CloudWatch alarm that monitors CPU utilization. The user can create an alarm that is triggered when the average CPU utilization has been lower than 10 percent for 24 hours, signaling that the instance is idle and no longer in use. When the utilization is below the threshold, the alarm terminates the instance as part of its alarm action.

Auto Scaling可以在预定义的时间启动和停止实例。在这里,总运行时间是未知的,因此,用户必须使用CloudWatch警报,该警报监视CPU利用率。用户可以创建一个警报,该警报在24小时的平均CPU利用率百分比低于10%时触发,表明它处于空闲状态并且不再使用。当利用率低于阈值限制时,它将终止实例作为实例动作的一部分

Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
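
A minimal sketch of such an alarm in boto3, assuming a placeholder instance ID and the `arn:aws:automate:<region>:ec2:terminate` action ARN format used for CloudWatch EC2 alarm actions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Instance ID is a placeholder; thresholds are the example values from the explanation.
cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,                 # 1-hour evaluation periods...
    EvaluationPeriods=24,        # ...for 24 consecutive hours
    Threshold=10.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],
)
```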

QUESTION 85
Which of the following is true of Amazon EC2 security group?
A. You can modify the outbound rules for EC2-Classic.
B. You can modify the rules for a security group only if the security group controls the traffic for just
one instance.
C. You can modify the rules for a security group only when a new instance is created.
D. You can modify the rules for a security group at any time.
Answer: D
Amazon EC2安全组符合以下哪个条件?
A. 您可以修改EC2-Classic的出站规则。 B. 仅当安全组只控制一个实例的流量时,才可以修改安全组的规则。
C. 您只能在创建新实例时修改安全组的规则。 D. 您可以随时修改安全组的规则。

Explanation: A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.

安全组充当虚拟防火墙,可控制一个或多个实例的流量。启动实例时,将一个或多个安全组与该实例相关联。您将规则添加到每个安全组,以允许往返于其关联实例的流量。您可以随时修改安全组的规则。新规则将自动应用于与安全组关联的所有实例

Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html

QUESTION 86
An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an
EIP, you can mask the failure of an instance or software by rapidly remapping the address to
another instance in your account. Your EIP is associated with your AWS account, not a particular
EC2 instance, and it remains associated with your account until you choose to explicitly release it.
By default, how many EIPs is each AWS account limited to on a per-Region basis?

A. 1
B. 5
C. Unlimited
D. 10
Answer: B
弹性IP地址(EIP)是为动态云计算设计的静态IP地址。使用EIP,
您可以通过将地址快速重新映射到账户中的另一个实例来掩盖实例或软件的故障。您的EIP与您的AWS账户(而不是特定的EC2实例)关联,
并且在您选择明确释放它之前,它将一直与您的账户关联。默认情况下,每个AWS账户在每个区域中限制为多少个EIP?

Explanation: By default, all AWS accounts are limited to 5 Elastic IP addresses per region for each AWS account, because public (IPv4) Internet addresses are a scarce public resource. AWS strongly encourages you to use an EIP primarily for load balancing use cases, and use DNS hostnames for all other inter-node communication. If you feel your architecture warrants additional EIPs, you would need to complete the Amazon EC2 Elastic IP Address Request Form and give reasons as to your need for additional addresses. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-limit

QUESTION 87
An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database.
When evaluating performance metrics, a solutions architect discovered that the database reads
are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Amazon Aurora database
B. Update the application to read from the Multi-AZ standby instance
C. Create a read replica and modify the application to use the appropriate endpoint
D. Create a second Amazon Aurora database and link it to the primary database as a read replica.
Answer: C
在AWS上运行的应用程序对其数据库使用Amazon Aurora多可用区部署。
在评估性能指标时,解决方案架构师发现数据库读取了
导致高I / O,并增加了针对数据库的写入请求的延迟,
解决方案架构师应该怎么做才能将读取请求与写入请求分开?
A.在Amazon Aurora数据库上启用通读缓存
B.更新应用程序以从多可用区备用实例读取
C.创建一个只读副本并修改应用程序以使用适当的端点
D.创建第二个Amazon Aurora数据库并将其作为只读副本链接到主数据库。
答案:C

Explanation: Aurora副本是Aurora数据库群集中的独立端点,最适合用于扩展读取操作并提高可用性。最多15个Aurora副本可以分布在AWS区域内数据库集群所跨越的可用区中。数据库集群卷由数据库集群的多个数据副本组成。但是,群集卷中的数据表示为单个实例的逻辑卷,该逻辑卷指向数据库群集中的主实例和Aurora副本。

As well as providing scaling for reads, Aurora Replicas are also targets for Multi-AZ failover. In this case the solutions architect should create an Aurora Replica and modify the application to send read traffic to the reader endpoint, keeping the writer endpoint for write requests. References: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-aurora/

QUESTION 88
An application runs on Amazon EC2 instances across multiple Availability Zones.
The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer.
The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in
the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group
B. Use a target tracking policy to dynamically scale the Auto Scaling group
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group
Answer: B
一个应用程序在Amazon EC2实例上运行,并具有多个可用区。
实例在Application Load Balancer后面的Amazon EC2 Auto Scaling组中运行。当EC2实例的CPU利用率达到或接近40%时,
该应用程序的性能最佳。解决方案架构师应如何在小组中的所有实例上保持期望的性能?
A.使用简单的扩展策略动态扩展Auto Scaling组B.使用目标跟踪策略动态扩展Auto Scaling组
C.使用AWS Lambda函数更新所需的Auto Scaling组容量D.使用计划的扩展操作来放大和缩小Auto Scaling组

Explanation: With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the changes in the metric due to a changing load pattern.

使用目标跟踪缩放策略,您可以选择缩放指标并设置目标值。 Amazon EC2 Auto Scaling创建和管理CloudWatch警报,这些警报触发扩展策略并根据指标和目标值计算扩展调整。缩放策略可根据需要添加或删除容量,以将指标保持在指定的目标值或接近指定的目标值。除了使度量保持接近目标值之外,目标跟踪缩放策略还根据由于负载模式变化而导致的度量变化进行调整。

CORRECT: “Use a target tracking policy to dynamically scale the Auto Scaling group” is the correct answer. INCORRECT: “Use a simple scaling policy to dynamically scale the Auto Scaling group” is incorrect as target tracking is a better way to keep the aggregate CPU usage at around 40%. INCORRECT: “Use an AWS Lambda function to update the desired Auto Scaling group capacity” is incorrect as this can be done automatically. INCORRECT: “Use scheduled scaling actions to scale up and scale down the Auto Scaling group” is incorrect as dynamic scaling is required to respond to changes in utilization. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
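
A hedged boto3 sketch of the target tracking policy described above; the Auto Scaling group name is a placeholder, and the predefined metric keeps average CPU at the 40% target from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Group and policy names are placeholders for illustration.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,  # keep average CPU at or near 40%
    },
)
```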

QUESTION 89
A company runs a multi-tier web application that hosts news content.
The application runs on Amazon EC2 instances behind an Application Load Balancer.
The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an
Amazon Aurora database.
A solutions architect needs to make the application more resilient to periodic increases in request
rates.
Which architecture should the solutions architect implement? (Select TWO )
A. Add AWS Shield.
B. Add Aurora Replicas
C. Add AWS Direct Connect
D. Add AWS Global Accelerator.
E. Add an Amazon CloudFront distribution in front of the Application Load Balancer
Answer: BE
公司运行承载新闻内容的多层Web应用程序。该应用程序在Application Load Balancer后面的Amazon EC2实例上运行。
实例在多个可用区中的EC2 Auto Scaling组中运行,并使用Amazon Aurora数据库。解决方案架构师需要使应用程序更具弹性,
以应对请求率的定期增加。解决方案架构师应采用哪种架构? (选择两个)
A.添加AWS Shield。 B.添加Aurora副本C.添加AWS Direct Connect 
D.添加AWS Global Accelerator。 E.在Application Load Balancer前面添加Amazon CloudFront分配

Explanation: The architecture is already highly resilient but it may be subject to performance degradation if there are sudden increases in request rates. To resolve this situation Amazon Aurora Read Replicas can be used to serve read traffic which offloads requests from the main database. On the frontend an Amazon CloudFront distribution can be placed in front of the ALB and this will cache content for better performance and also offloads requests from the backend. 该体系结构已经具有很高的弹性,但是如果请求速率突然增加,则性能可能会下降。为了解决这种情况,Amazon Aurora只读副本可用于提供读取流量,以减轻主数据库的请求。在前端,可以将Amazon CloudFront发行版放置在ALB的前面,这将缓存内容以获得更好的性能,还可以卸载来自后端的请求。 INCORRECT: “Add an Amazon Global Accelerator endpoint” is incorrect as this service is used for directing users to different instances of the application in different regions based on latency. References: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-aurora/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

QUESTION 90
A solutions architect is optimizing a website for an upcoming musical event. Videos of the
performances will be streamed in real time and then will be available on demand.
The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration
Answer: A
解决方案架构师正在针对即将举行的音乐活动优化网站,将实时播放表演视频,然后按需提供。该活动有望吸引全球在线观众。
哪种服务将同时改善实时流和点播流的性能? 
A.Amazon CloudFront B.AWS Global Accelerator C.Amazon Route 53 D.Amazon S3传输加速

Explanation: Amazon CloudFront可用于通过各种基于HTTP的协议向全球用户流式传输视频,这包括点播视频和实时流式视频。CORRECT:“Amazon CloudFront”是正确答案。INCORRECT:“AWS Global Accelerator”不正确,因为与使用CloudFront相比,这是一种让内容更接近用户的更昂贵的方法;由于这正是CloudFront的用例,并且CloudFront拥有更多的边缘站点,因此CloudFront是更好的选择。 INCORRECT: “Amazon Route 53” is incorrect as you still need a solution for getting the content closer to users. INCORRECT: “Amazon S3 Transfer Acceleration” is incorrect as this is used to accelerate uploads of data to Amazon S3 buckets. References: https://aws.amazon.com/cloudfront/streaming/ https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-streaming-video.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront

QUESTION 91
A company serves content to its subscribers across the world using an application running on
AWS.
The application has several Amazon EC2 instances in a private subnet behind an Application
Load Balancer (ALB).
Due to a recent change in copyright restrictions the chief information officer (CIO) wants to block
access for certain countries.
Which action will meet these requirements?
A. Modify the ALB security group to deny incoming traffic from blocked countries
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries
C. Use Amazon CloudFront to serve the application and deny access to blocked countries
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked
countries
Answer: C
一家公司使用在AWS上运行的应用程序向其全球订户提供内容。该应用程序在Application Load Balancer(ALB)后面的私有子网中具有多个Amazon EC2实例。由于版权限制的最新变化,首席信息官(CIO)希望阻止某些国家/地区的访问。哪项操作可以满足这些要求?
A. 修改ALB安全组以拒绝来自被阻止国家/地区的传入流量
B. 修改EC2实例的安全组以拒绝来自被阻止国家/地区的传入流量
C. 使用Amazon CloudFront提供该应用程序,并拒绝被阻止国家/地区的访问
D. 使用ALB侦听器规则对来自被阻止国家/地区的传入流量返回拒绝访问响应

Explanation: 当用户请求您的内容时,CloudFront通常会提供请求的内容,而不管用户位于何处。如果需要阻止特定国家/地区的用户访问您的内容,则可以使用CloudFront地理限制功能执行以下操作之一:Ä仅当用户位于以下国家/地区的白名单中的一个国家/地区时,才允许他们访问您的内容批准的国家。如果用户位于被禁止的国家/地区黑名单中的国家/地区之一,则可以阻止他们访问您的内容。例如,如果某个请求来自出于版权原因未获授权分发内容的国家/地区,则可以使用CloudFront地理限制来阻止该请求。这是对内容交付实施地理限制的最简单,最有效的方法。 CORRECT: “Use Amazon CloudFront to serve the application and deny access to blocked countries” is the correct answer. INCORRECT: “Use a Network ACL to block the IP address ranges associated with the specific countries” is incorrect as this would be extremely difficult to manage. INCORRECT: “Modify the ALB security group to deny incoming traffic from blocked countries” is incorrect as security groups cannot block traffic by country, INCORRECT: “Modify the security group for EC2 instances to deny incoming traffic from blocked countries” is incorrect as security groups cannot block traffic by country. . References: https://docs. aws.amazon.com/AmazonCloudFrontlatest/DeveloperGuide/georestrictions.html Save time with our exam-specific cheat sheets: https://digitalcloud. .training/certification-training/aws-solutions-architect associate/networking-and- content-deliverylamazon-cloudfront

QUESTION 92
A manufacturing company wants to implement predictive maintenance on its machinery
equipment.
The company will install thousands of IoT sensors that will send data to AWS in real time.
A solutions architect is tasked with implementing a solution that will receive events in an ordered
manner for each machinery asset and ensure that data is saved for further processing at a later
time.
Which solution would be MOST efficient?
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset.
Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset.
Use Amazon Kinesis Data Firehose to save data to Amazon EBS .
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset.
Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment
asset.
Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.
Answer: A
一家制造公司希望对其机械设备实施预测性维护。该公司将安装数千个IoT传感器,这些传感器会实时将数据发送到AWS。解决方案架构师的任务是实施一种解决方案,该解决方案将按顺序接收每个机械资产的事件,并确保保存数据以供以后进一步处理。哪种解决方案最有效?
A. 使用Amazon Kinesis Data Streams处理实时事件,并为每个设备资产使用一个分区。使用Amazon Kinesis Data Firehose将数据保存到Amazon S3。
B. 使用Amazon Kinesis Data Streams处理实时事件,并为每个设备资产分配一个分片。使用Amazon Kinesis Data Firehose将数据保存到Amazon EBS。
C. 将Amazon SQS FIFO队列用于实时事件,每个设备资产一个队列。由SQS队列触发AWS Lambda函数,将数据保存到Amazon EFS。
D. 将Amazon SQS标准队列用于实时事件,每个设备资产一个队列。由SQS队列触发AWS Lambda函数,将数据保存到Amazon S3。

Explanation: Amazon Kinesis Data Streams实时收集和处理数据。一个Kinesis数据流由一组分片(shard)组成,每个分片包含一系列数据记录,每个数据记录都有一个由Kinesis Data Streams分配的序列号。分片是流中具有唯一标识的数据记录序列;分区键用于将数据按分片进行分组。Kinesis Data Streams将属于同一个流的数据记录分布到多个分片中,并使用与每个数据记录关联的分区键来确定给定数据记录属于哪个分片。对于这种情况,解决方案架构师可以将每个设备作为分区键。这将确保同一设备的记录归入同一个分片,而分片内部保证顺序。Amazon S3是保存数据记录的有效目的地。
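
A minimal boto3 sketch of the producer side described above: the equipment asset ID is used as the partition key so that each asset's events stay on one shard and remain ordered. The stream name, device ID, and payload fields are placeholders.

```python
import boto3
import json

kinesis = boto3.client("kinesis")

def send_sensor_event(device_id: str, payload: dict) -> None:
    # Using the asset ID as the partition key keeps all records for one asset
    # on the same shard, which preserves per-asset ordering.
    kinesis.put_record(
        StreamName="machinery-events",          # placeholder stream name
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=device_id,
    )

send_sensor_event("press-017", {"temperature_c": 81.4, "vibration_mm_s": 2.7})
```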

QUESTION 93
A company has deployed an API in a VPC behind an internet-facing Application Load Balancer
(ALB).
An application that consumes the API as a client is deployed in a second account in private
subnets behind a NAT gateway.
When requests to the client application increase, the NAT gateway costs are higher than
expected.
A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Select TWO )
一家公司已在面向互联网的Application Load Balancer(ALB)后面的VPC中部署了一个API。
作为客户端使用该API的应用程序部署在第二个账户中位于NAT网关后面的私有子网内。
当对客户端应用程序的请求增加时,NAT网关成本高于预期。
解决方案架构师已将ALB配置为内部的。
哪两种架构更改组合可以降低NAT网关的成本?(选择两个)
A. Configure a VPC peering connection between the two VPCs.
Access the API using the private address
B. Configure an AWS Direct Connect connection between the two VPCs.
Access the API using the private address.
C. Configure a ClassicLink connection for the API into the client VPC.
Access the API using the ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC,
Access the API using the PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts.
Access the API using the private address
A.在两个VPC之间配置VPC对等连接。
使用私有地址访问API
B.在两个VPC之间配置一个AWS Direct Connect连接。
使用私有地址访问API。
C.为该API配置到客户端VPC的ClassicLink连接。
使用ClassicLink地址访问API。
D.配置API到客户端VPC的PrivateLink连接,
使用PrivateLink地址访问API。
E.在两个帐户之间配置一个AWS Resource Access Manager连接。
使用私有地址访问API
Answer: AD

Explanation: 通过PrivateLink,可以轻松地跨不同账户和VPC连接服务,从而显著简化网络架构。 https://www.levvel.io/resource-library/aws-api-gateway-for-multi-account-architecture There is no API listed in shareable resources for RAM. https://docs.aws.amazon.com/ram/latest/userguide/shareable.html
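
A hedged boto3 sketch of the PrivateLink pattern: the provider publishes an endpoint service (PrivateLink endpoint services are fronted by a Network Load Balancer), and the consumer account creates an interface endpoint in the client VPC so API traffic no longer goes through the NAT gateway. All ARNs and IDs are placeholders, and in practice the two clients would use credentials from the two different accounts.

```python
import boto3

ec2_provider = boto3.client("ec2")   # account hosting the API
ec2_consumer = boto3.client("ec2")   # account hosting the client application

# Provider side: publish the API as an endpoint service (NLB ARN is a placeholder).
svc = ec2_provider.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/api-nlb/abc123"
    ],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Allow the consumer account to use the service (principal ARN is a placeholder).
ec2_provider.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)

# Consumer side: create an interface endpoint in the client VPC and call the API privately.
ec2_consumer.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName=service_name,
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
)
```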

QUESTION 94
In Amazon EC2, partial instance-hours are billed
A. per second used in the hour
B. per minute used
C. by combining partial segments into full hours
D. as full hours
Answer: D
在Amazon EC2中,不足一小时的实例使用时间如何计费?
A. 按该小时内使用的秒数计费 B. 按使用的分钟数计费 C. 将部分分段合并为完整小时计费 D. 按完整小时计费

Explanation: Partial instance-hours are billed to the next hour, Reference: http://aws.amazon.com/ec2/faqs/

QUESTION 95
In EC2, what happens to the data in an instance store if an instance reboots (either intentionally
or unintentionally)?
A. Data is deleted from the instance store for security reasons.
B. Data persists in the instance store.
C. Data is partially present in the instance store.
D. Data in the instance store will be lost.
Answer: B

在EC2中,如果实例重新启动,实例存储中的数据会发生什么(有意重启
或无意间)?
A. 出于安全原因,将从实例存储中删除数据。
B.数据保留在实例存储中。
C.数据部分存在于实例存储中。
D.实例存储中的数据将丢失。

Explanation: 实例存储中的数据仅在其关联实例的生存期内存在。如果实例重新启动(有意或无意),实例存储中的数据会保留;但是,在以下情况下,实例存储卷上的数据会丢失:基础驱动器发生故障、停止(由Amazon EBS支持的)实例、终止实例。 Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html

QUESTION 96
You are setting up a VPC and you need to set up a public subnet within that VPC. Which of the
following requirements must be met for this subnet to be considered a public subnet?
A. Subnet's traffic is not routed to an internet gateway but has its traffic routed to a virtual private
gateway.
B. Subnet's traffic is routed to an internet gateway.
C. Subnet's traffic is not routed to an internet gateway.
D. None of these answers can be considered a public subnet.
Answer: B
您正在设置VPC,并且需要在该VPC中设置公共子网。 哪一个
要将此子网视为公共子网,必须满足以下要求?
A. 子网的流量不会路由到Internet网关,但会将其流量路由到虚拟专用
网关。
B.子网的流量被路由到Internet网关。
C.子网的流量未路由到Internet网关。
D.这些答案都不能视为公共子网。

Explanation: A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the Internet. If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet. If a subnet doesn’t have a route to the internet gateway, the subnet is known as a private subnet. If a subnet doesn’t have a route to the internet gateway, but has its traffic routed to a virtual private gateway, the subnet is known as a VPN-only subnet.

虚拟私有云(VPC)是专用于您的AWS账户的虚拟网络。 从逻辑上讲 与AWS云中的其他虚拟网络隔离。 您可以启动您的AWS资源,例如 作为Amazon EC2实例,进入您的VPC。 您可以配置VPC:可以选择其IP 地址范围,创建子网以及配置路由表,网络网关和安全性 设置。 子网是VPC中的IP地址范围。 您可以将AWS资源启动到 您选择的子网。 使用公共子网获取必须连接到Internet的资源, 还有一个专用子网,用于存储不会连接到Internet的资源。 如果子网的流量是 路由到Internet网关的子网称为公共子网。 如果子网没有 路由到Internet网关,则该子网称为专用子网。 如果子网没有 路由到Internet网关,但将其流量路由到虚拟专用网关,子网是 称为仅VPN子网。

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
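
A minimal boto3 sketch of what makes a subnet "public": its route table sends 0.0.0.0/0 to an internet gateway. All IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Attach the internet gateway to the VPC (IDs are placeholders).
ec2.attach_internet_gateway(InternetGatewayId="igw-0abc1234", VpcId="vpc-0abc1234")

# Add a default route to the IGW in the route table associated with the public subnet.
ec2.create_route(
    RouteTableId="rtb-0abc1234",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0abc1234",
)
```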

QUESTION 97
Can you specify the security group that you created for a VPC when you launch an instance in
EC2-Classic?
A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
B. No
C. Yes
D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance
only,
Answer: B
在以下情况下启动实例时,可以指定为VPC创建的安全组吗?
EC2-Classic?
A. 否,您可以在启动VPC实例时指定为EC2-Classic创建的安全组。
B. 不
C. 是
D. 否,您只能将为EC2-Classic创建的安全组指定给基于非VPC的实例。

Explanation: 如果您使用的是EC2-Classic,则必须使用专门为EC2-Classic创建的安全组。在EC2-Classic中启动实例时,必须指定与实例位于同一区域的安全组,且无法指定为VPC创建的安全组。 Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups

QUESTION 98
While using the EC2 GET requests as URLs, the ______ is the URL that serves as the entry point
for the web service.
A. token
B. endpoint
C. action
D. None of these
Answer: B

Explanation: The endpoint is the URL that serves as the entry point for the web service. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-query-api.html

QUESTION 99
You have been asked to build a data warehouse using Amazon Redshift. You know a little
about it, including that it is a SQL data warehouse solution, and uses industry standard ODBC
and JDBC connections and PostgreSQL drivers. However, you are not sure about what sort of
storage it uses for database tables. What sort of storage does Amazon Redshift use for database
tables?
A. InnoDB Tables
B. NDB data storage
C. Columnar data storage
D. NDB CLUSTER Storage
Answer: C
您被要求使用Amazon Redshift构建数据库仓库。您了解它,包括它是SQL数据仓库解决方案,
并使用行业标准的ODBC和JDBC连接以及PostgreSQL驱动程序。但是,您不确定它用于数据库表的存储类型。 
Amazon Redshift对数据库表使用哪种存储方式?
A.InnoDB表B.NDB数据存储C.列数据存储D.NDB集群存储

Explanation: Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes. Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Reference: http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html

QUESTION 100
You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS
(SSD) volumes and it seems that the I/O latency is higher than you require. You should probably
check the ______ to make sure that your application is not trying to drive more IOPS
than you have provisioned.

A. Amount of IOPS that are available
B. Acknowledgement from the storage subsystem
C. Average queue length
D. Time it takes for the I/O operation to complete
Answer: C
您正在检查某些通用(SSD)卷和预配置IOPS(SSD)卷上的工作负载,并且I/O延迟似乎比您所需的高。
您可能应该检查______,以确保您的应用程序未尝试驱动比您预配置的更多的IOPS。
A. 可用的IOPS数量 B. 来自存储子系统的确认 C. 平均队列长度 D. I/O操作完成所需的时间

Explanation: In EBS, workload demand plays an important role in getting the most out of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the amount of IOPS that are available, they need to have enough I/O requests sent to them. There is a relationship between the demand on the volumes, the amount of IOPS that are available to them, and the latency of the request (the amount of time it takes for the I/O operation to complete). Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O, how long does it take to get an acknowledgement from the storage subsystem that the I/O read or write is complete. If your I/O latency is higher than you require, check your average queue length to make sure that your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS while keeping latency down by maintaining a low average queue length (which is achieved by provisioning more IOPS for your volume).

在EBS中,工作负载需求在充分利用通用(SSD)和预配置IOPS(SSD)卷方面起着重要作用。为了使您的卷交付可用的IOPS数量,需要向它们发送足够多的I/O请求。对卷的需求、它们可用的IOPS数量与请求的延迟(完成I/O操作所花费的时间)之间存在关系。延迟是I/O操作真正的端到端客户端时间;换句话说,当客户端发送I/O时,需要多长时间才能从存储子系统获得I/O读写已完成的确认。如果您的I/O延迟超过您的要求,请检查平均队列长度,以确保您的应用程序没有驱动比您预配置的更多的IOPS。您可以通过保持较低的平均队列长度来保持较高的IOPS,同时降低延迟(这可以通过为卷配置更多的IOPS来实现)。

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/
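
A hedged boto3 sketch of how the average queue length mentioned above could be checked, using the AWS/EBS `VolumeQueueLength` CloudWatch metric; the volume ID is a placeholder.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# VolumeQueueLength is the number of pending I/O requests; sustained high values
# suggest the application is driving more IOPS than the volume has provisioned.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```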

QUESTION 101

Which of the below mentioned options is not available when an instance is launched by Auto
Scaling with EC2 Classic?
A. Public IP
B. Elastic IP
C. Private DNS
D. Private IP
Answer: B
通过使用EC2-Classic的Auto Scaling启动实例时,以下哪一个选项不可用? A. 公有IP B. 弹性IP C. 私有DNS D. 私有IP

Explanation: Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as a part of EC2-Classic, it will have the public IP and DNS as well as the private IP and DNS. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/

QUESTION 102
You have been given a scope to deploy some AWS infrastructure for a large organisation. The
requirements are that you will have a lot of EC2 instances but may need to add more when the
average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU
utilization is low. Which AWS services would be best to use to accomplish this?
A. Auto Scaling, Amazon CloudWatch and AWS Elastic Beanstalk
B.Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.
C. Amazon CloudFront, Amazon CloudWatch and Elastic Load Balancing.
D. AWS Elastic Beanstalk , Amazon CloudWatch and Elastic Load Balancing.
Answer: B
您已经获得了为大型组织部署一些AWS基础设施的任务。要求是您将有很多EC2实例,当Amazon EC2机群的平均利用率较高时需要添加更多实例;相反,在CPU利用率较低时则将它们删除。最好使用哪些AWS服务来完成此任务?
A. Auto Scaling、Amazon CloudWatch和AWS Elastic Beanstalk
B. Auto Scaling、Amazon CloudWatch和Elastic Load Balancing
C. Amazon CloudFront、Amazon CloudWatch和Elastic Load Balancing
D. AWS Elastic Beanstalk、Amazon CloudWatch和Elastic Load Balancing

Explanation: Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization. Reference: http://aws.amazon.com/autoscaling/

QUESTION 103
A company's legacy application is currently relying on a single-instance Amazon RDS MySQL
database without encryption.
Due to new compliance requirements, all existing and new data in this database must be
encrypted.
How should this be accomplished?
A. Create an Amazon S3 bucket with server-side encryption enabled.
Move all the data to Amazon S3 Delete the RDS instance.
B. Enable RDS Multi-AZ mode with encryption at rest enabled.
Perform a failover to the standby instance to delete the original instance.
C. Take a snapshot of the RDS instance Create an encrypted copy of the snapshot.
Restore the RDS instance from the encrypted snapshot.
D. Create an RDS read replica with encryption at rest enabled.
Promote the read replica to master and switch the application over to the new master Delete the
old RDS instance.
Answer: C
公司的旧版应用程序当前依赖于未加密的单实例Amazon RDS MySQL数据库。由于新的合规性要求,必须对该数据库中的所有现有数据和新数据进行加密。应该如何完成?
A. 创建一个启用了服务器端加密的Amazon S3存储桶。将所有数据移至Amazon S3,然后删除RDS实例。
B. 启用RDS多可用区模式,并启用静态加密。对备用实例执行故障转移以删除原始实例。
C. 为RDS实例创建快照,创建该快照的加密副本,然后从加密的快照还原RDS实例。
D. 创建一个启用了静态加密的RDS只读副本。将只读副本提升为主实例,然后将应用程序切换到新的主实例。删除旧的RDS实例。

您无法加密现有的数据库,需要创建快照,对其进行复制,对副本进行加密,然后从快照中构建加密的数据库。 您可以通过启用Amazon RDS数据库实例的加密选项来加密静态的Amazon RDS实例和快照

您只能在创建Amazon RDS数据库实例时启用加密,而不能在创建数据库实例后启用加密。 但是,由于您可以加密未加密数据库快照的副本,因此可以有效地将加密添加到未加密数据库实例。也就是说,您可以创建数据库实例的快照,然后创建该快照的加密副本。然后,您可以从加密的快照还原数据库实例,因此您拥有原始数据库实例的加密副本。
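
A hedged boto3 sketch of the snapshot-copy-restore sequence described above; all identifiers and the KMS key alias are placeholders.

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing, unencrypted instance (identifiers are placeholders).
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",
    DBSnapshotIdentifier="legacy-mysql-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap")

# 2. Copy the snapshot with encryption enabled (KmsKeyId makes the copy encrypted).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-mysql-snap",
    TargetDBSnapshotIdentifier="legacy-mysql-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap-encrypted")

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-mysql-encrypted",
    DBSnapshotIdentifier="legacy-mysql-snap-encrypted",
)
```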

QUESTION 104
A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the
front-end layer, another for the backend tier, and a third for the MySQL database.
A solutions architect has been tasked with designing a solution that is highly available, and
requires the least amount of changes to the application.
Which solution meets these requirements?
A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer.
Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users'
images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend
layers.
Move the database to an Amazon RDS instance with multiple read replicas to store and serve
users' images.
C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto
Scaling group for the backend layer.
Move the database to a memory optimized instance type to store and serve users' images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend
layers.
Move the database to an Amazon RDS instance with a Multi-AZ deployment Use Amazon S3 to
store and serve users' images.
Answer: D
一家公司有一个三层的图像共享应用程序,它在前端层使用Amazon EC2实例,在后端层使用另一个实例,而对MySQL数据库使用第三个实例。
解决方案架构师的任务是设计一个高度可用的解决方案,并且需要对应用程序进行的更改最少。哪个解决方案满足这些要求?”
A. 使用Amazon S3托管前端层,并使用AWS Lambda函数作为后端层。将数据库移至Amazon DynamoDB表,并使用Amazon S3存储和提供用户图像。
B. 对前端层和后端层使用负载均衡的多可用区AWS Elastic Beanstalk环境。将数据库移至具有多个只读副本的Amazon RDS实例,以存储和提供用户图像。
C. 使用Amazon S3托管前端层,并在后端层使用Auto Scaling组中的一组Amazon EC2实例。将数据库移至内存优化实例类型以存储和提供用户图像。
D. 对前端层和后端层使用负载均衡的多可用区AWS Elastic Beanstalk环境。将数据库移至采用多可用区部署的Amazon RDS实例,并使用Amazon S3存储和提供用户图像。

Explanation: Keyword: highly available + least amount of changes to the application.

High availability = Multi-AZ. Least amount of changes to the application = Elastic Beanstalk, which automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. Option D is the right choice; Options A, B, and C are out of the race due to cost and interoperability. HA with Elastic Beanstalk and RDS.

“高可用性=多可用区对应用程序的更改量最少= Elastic Beanstalk自动处理部署,从容量配置,负载平衡,自动扩展到应用程序运行状况监视

AWS Elastic Beanstalk AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. There is no additional charge for Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications. AWS RDS Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. Amazon RDS is available on several database instance types - optimized for memory, performance or I/O - and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

AWS S3 Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world. References: https://aws.amazon.com/elasticbeanstalk/ https://aws.amazon.com/rds/ https://aws.amazon.com/s3/ Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-elastic-beanstalk/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

QUESTION 105
A web application is deployed in the AWS Cloud It consists of a two-tier architecture that includes
a web layer and a database layer.
The web server is vulnerable to cross-site scripting (XSS) attacks.
What should a solutions architect do to remediate the vulnerability?
A. Create a Classic Load Balancer.
Put the web layer behind the load balancer and enable AWS WAF.
B. Create a Network Load Balancer.
Put the web layer behind the load balancer and enable AWS WAF.
C. Create an Application Load Balancer.
Put the web layer behind the load balancer and enable AWS WAF.
D. Create an Application Load Balancer.
Put the web layer behind the load balancer and use AWS Shield Standard.
Answer: C
Web应用程序部署在AWS Cloud中,它由两层体系结构组成,该体系结构包含Web层和数据库层。 Web服务器容易受到跨站点脚本(XSS)攻击。
解决方案架构师应采取什么措施来补救此漏洞? 
A.创建一个经典的负载均衡器。将Web层放在负载均衡器后面,然后启用AWS WAF。 
B.创建一个网络负载平衡器。将Web层放在负载均衡器后面,然后启用AWS WAF。
C.创建一个应用程序负载平衡器。将Web层放在负载均衡器后面,然后启用AWS WAF。
D. 创建一个应用程序负载均衡器。将Web层放在负载均衡器后面,并使用AWS Shield Standard。

Explanation: The AWS Web Application Firewall (WAF) is available on the Application Load Balancer (ALB). You can use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. Attackers sometimes insert scripts into web requests in an effort to exploit vulnerabilities in web applications. You can create one or more cross-site scripting match conditions to identify the parts of web requests, such as the URI or the query string, that you want AWS WAF to inspect for possible malicious scripts. CORRECT: “Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is the correct answer. INCORRECT: “Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is incorrect as you cannot use AWS WAF with a classic load balancer. INCORRECT: “Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is incorrect as you cannot use AWS WAF with a network load balancer. INCORRECT: “Create an Application Load Balancer, Put the web layer behind the load balancer and use AWS Shield Standard” is incorrect as you cannot use AWS Shield to protect against

XSS attacks. Shield is used to protect against DDoS attacks.

应用程序负载均衡器(ALB)上提供了AWS Web Application Firewall(WAF)。您可以直接在VPC中的Application Load Balancer(内部和外部)上使用AWS WAF,以保护您的网站和Web服务。攻击者有时会在Web请求中插入脚本,以利用Web应用程序中的漏洞。您可以创建一个或多个跨站点脚本匹配条件,以标识您希望AWS WAF检查可能的恶意脚本的Web请求部分,例如URI或查询字符串。

References: https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-xss-conditions.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-waf-and-shield/

QUESTION 106
A recently acquired company is required to build its own infrastructure on AWS and migrate
multiple applications to the cloud within a month.
Each application has approximately 50 TB of data to be transferred.
After the migration is complete, this company and its parent company will both require secure
network connectivity with consistent throughput from their data centers to the applications.
A solutions architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?
A. AWS Direct Connect for both the initial transfer and ongoing connectivity
B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity
C.AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity
D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity

一家最近被收购的公司需要在AWS上构建自己的基础设施,并在一个月内将多个应用程序迁移到云端。
每个应用程序都有大约50 TB的数据需要传输。
迁移完成后,这家公司及其母公司都需要从各自的数据中心到这些应用程序之间具有吞吐量稳定的安全网络连接。
解决方案架构师必须确保一次性数据迁移和持续的网络连接。
哪种解决方案能满足这些要求?
A.适用于初始传输和持续连接的AWS Direct Connect
B.适用于初始传输和持续连接的AWS Site-to-Site VPN
C.AWS Snowball用于初始传输,AWS Direct Connect用于持续连接
D.AWS Snowball用于初始传输,AWS Site-to-Site VPN用于持续连接
Answer: C

Explanation: “每个应用程序都有大约50 TB的数据要传输” = AWS Snowball;“从数据中心到应用程序需要安全且吞吐量稳定的网络连接” = AWS Direct Connect。使用AWS Direct Connect专用网络连接有什么好处?在许多情况下,专用网络连接可以降低成本、增加带宽,并提供比基于Internet的连接更一致的网络体验。“更一致的网络体验”,因此选择AWS Direct Connect。Direct Connect比VPN更好:降低成本 + 增加带宽 + 一致的网络连接 = Direct Connect。

QUESTION 107
Organizers for a global event want to put daily reports online as static HTML pages.
The pages are expected to generate millions of views from users around the world The files are
stored in an Amazon S3 bucket.
A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Generate presigned URLs for the files
B. Use cross-Region replication to all Regions
C. Use the geoproximity feature of Amazon Route 53
D. Use Amazon CloudFront with the S3 bucket as its origin
Answer: D
全球活动的组织者希望将每日报告作为静态HTML页面在线发布。这些页面预计会产生来自全球用户的数百万次访问。文件存储在Amazon S3存储桶中。解决方案架构师被要求设计一个高效且有效的解决方案。解决方案架构师应采取什么行动来完成此任务?
A. 为文件生成预签名URL
B. 对所有区域使用跨区域复制
C. 使用Amazon Route 53的地理邻近(geoproximity)功能
D. 使用以S3存储桶为源的Amazon CloudFront

Explanation: Amazon CloudFront can be used to cache the files in edge locations around the world and this will improve the performance of the webpages. To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations: using a REST API endpoint as the origin with access restricted by an origin access identity (OAI); using a website endpoint as the origin with anonymous (public) access allowed; or using a website endpoint as the origin with access restricted by a Referer header. CORRECT:

“Use Amazon CloudFront with the S3 bucket as its origin” is the correct answer. INCORRECT: “Generate presigned URLs for the files” is incorrect as this is used to restrict access which is not a requirement. INCORRECT: “Use cross-Region replication to all Regions” is incorrect as this does not provide a mechanism for directing users to the closest copy of the static webpages. INCORRECT: “Use the geoproximity feature of Amazon Route 53” is incorrect as this does not include a solution for having multiple copies of the data in different geographic locations. References: https://aws. amazon.com/premiumsuppor/knowledge-center/cloudfront-serve-static-website/ Save time with our exam-specific cheat sheets: https://digitalcloud .training/certification-training/aws-solutions-architect-associate/networking-and- content-delivery/amazon-cloudfront

QUESTION 108
A company runs an application on a group of Amazon Linux EC2 instances.
The application writes log files using standard API calls. For compliance reasons, all log files must
be retained indefinitely and, will be analyzed by a reporting tool that must access all files
concurrently.
Which storage service should a solutions architect use to provide the MOST cost-effective
solution?
A. Amazon EBS
B. Amazon EFS
C. Amazon EC2 instance store
D. Amazon S3
Answer: D
一家公司在一组Amazon Linux EC2实例上运行一个应用程序,该应用程序使用标准API调用写入日志文件。出于合规性原因,必须无限期保留所有日志文件,并且将由必须同时访问所有文件的报告工具进行分析。解决方案架构师应使用哪种存储服务来提供最具成本效益的解决方案? A.Amazon EBS B.Amazon EFS C.Amazon EC2实例存储D.Amazon S3

Explanation: The application is writing the files using API calls, which means it will be compatible with Amazon S3, which uses a REST API. S3 is a massively scalable key-based object store that is well suited to allowing concurrent access to the files from many instances. Amazon S3 will also be the most cost-effective choice. A rough calculation using the AWS pricing calculator shows the cost differences between 1 TB of storage on EBS, EFS, and S3 Standard.

QUESTION 109
A company's application is running on Amazon EC2 instances in a single Region. In the event of a
disaster, a solutions architect needs to ensure that the resources can also be deployed to a
second Region.
Which combination of actions should the solutions architect take to accomplish this? (Select
TWO)
A. Detach a volume on an EC2 instance and copy it to Amazon S3
B. Launch a new EC2 instance from an Amazon Machine image (AMI) in a new Region
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new
instance
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the
destination
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2
instance in the destination Region using that EBS volume
Answer: BD
公司的应用程序运行在单个区域内的Amazon EC2实例上。发生灾难时,解决方案架构师需要确保这些资源也可以部署到第二个区域。解决方案架构师应采取哪两项操作来完成此任务?(选择两个)
A. 分离EC2实例上的卷并将其复制到Amazon S3
B. 在新区域中从Amazon Machine Image(AMI)启动新的EC2实例
C. 在新区域中启动新的EC2实例,并将卷从Amazon S3复制到新实例
D. 复制EC2实例的Amazon Machine Image(AMI),并指定另一个区域作为目标
E. 从Amazon S3复制Amazon Elastic Block Store(Amazon EBS)卷,并在目标区域中使用该EBS卷启动EC2实例
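
A hedged boto3 sketch of answers B and D: create an AMI, copy it to a second Region, and launch from the copy there. Region names, instance ID, and instance type are placeholders.

```python
import boto3

ec2_use1 = boto3.client("ec2", region_name="us-east-1")   # source Region (placeholder)
ec2_usw2 = boto3.client("ec2", region_name="us-west-2")   # DR Region (placeholder)

# Create an AMI of the running instance, then copy it to the second Region.
image = ec2_use1.create_image(InstanceId="i-0123456789abcdef0", Name="app-dr-ami")
copied = ec2_usw2.copy_image(
    SourceImageId=image["ImageId"],
    SourceRegion="us-east-1",
    Name="app-dr-ami",
)

# Wait for the copy to become available before launching from it.
ec2_usw2.get_waiter("image_available").wait(ImageIds=[copied["ImageId"]])

# In a disaster, launch the instance in the second Region from the copied AMI.
ec2_usw2.run_instances(
    ImageId=copied["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
```
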
QUESTION 110
A solutions architect is designing a two-tier web application.
The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets.
The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet
Security is a high priority for the company.
How should security groups be configured in this situation? (Select TWO)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the
security group for the web tier
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433
to the security group for the web tier
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433
from the security group for the web tier
Answer: AC
解决方案架构师正在设计两层Web应用程序。该应用程序由公共子网中的Amazon EC2托管的面向公众的Web层组成。数据库层由在私有子网中的Amazon EC2上运行的Microsoft SQL Server组成。安全性是公司的高度优先事项。在这种情况下应如何配置安全组?(选择两个)
A. 配置Web层的安全组,以允许来自0.0.0.0/0的端口443入站流量
B. 配置Web层的安全组,以允许到0.0.0.0/0的端口443出站流量
C. 配置数据库层的安全组,以允许来自Web层安全组的端口1433入站流量
D. 配置数据库层的安全组,以允许到Web层安全组的端口443和1433出站流量
E. 配置数据库层的安全组,以允许来自Web层安全组的端口443和1433入站流量

Explanation: In this scenario an inbound rule is required to allow traffic from any internet client to the web front end on SSL/TLS port 443. The source should therefore be set to 0.0.0.0/0 to allow any inbound traffic. To secure the connection from the web frontend to the database tier, an outbound rule should be created from the public EC2 security group with a destination of the private EC2 security group.

The port should be set to 1433 for Microsoft SQL Server. The private EC2 security group will also need to allow inbound traffic on 1433 from the public EC2 security group. This configuration can be seen in the diagram:

[Diagram: public subnet security group (PublicEC2) allows inbound 443 from 0.0.0.0/0 and outbound 1433 to PrivateEC2; private subnet security group (PrivateEC2) allows inbound 1433 from the web tier security group.]

在这种情况下,需要一个入站规则以允许从任何Internet客户端到Web前端SSL/TLS端口443的通信,因此源应设置为0.0.0.0/0以允许任何入站通信。为了确保从Web前端到数据库层的连接安全,应在公有EC2安全组上创建出站规则,并以私有EC2安全组为目标。对于Microsoft SQL Server,端口应设置为1433。私有EC2安全组还需要允许来自公有EC2安全组的1433端口入站流量。可以在上面的示意图说明中看到此配置。

CORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0” is a correct answer. CORRECT: “Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier” is also a correct answer. INCORRECT: “Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0” is incorrect as this is configured backwards. INCORRECT: “Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier” is incorrect as the SQL Server database instance does not need to send outbound traffic on either of these ports. INCORRECT: “Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier” is incorrect as the database tier does not need to allow inbound traffic on port 443. References: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
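
A minimal boto3 sketch of the two correct rules, referencing the web tier security group as the source for the database tier rule; the security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
WEB_SG = "sg-0webxxxxxxxxxxxxx"   # placeholder web-tier security group
DB_SG = "sg-0dbxxxxxxxxxxxxxx"    # placeholder database-tier security group

# Web tier: allow HTTPS (443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database tier: allow SQL Server (1433) only from the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```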

QUESTION 111
A data science team requires storage for nightly log processing.
The size and number of logs is unknown and will persist for 24 hours only.
What is the MOST cost-effective solution?
A. Amazon S3 Glacier
B. Amazon S3 Standard

C. Amazon S3 intelligent-Tiering
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-lA)
Answer: B
数据科学团队需要用于每晚日志处理的存储。日志的大小和数量未知,并且仅保留24小时。什么是最具成本效益的解决方案? A. Amazon S3 Glacier B. Amazon S3标准 C. Amazon S3智能分层 D. Amazon S3单区-不频繁访问(S3 One Zone-IA)

Explanation: S3 standard is the best choice in this scenario for a short term storage solution. In this case the size and number of logs is unknown and it would be difficult to fully assess the access patterns at this stage. Therefore, using S3 standard is best as it is cost-effective, provides immediate access, and there are no retrieval fees or minimum capacity charge per object. CORRECT: “Amazon S3 Standard” is the correct answer. INCORRECT: “Amazon S3 Intelligent-Tiering” is incorrect as there is an additional fee for using this service and for a short-term requirement it may not be beneficial. INCORRECT: “Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)” is incorrect as this storage class has a minimum capacity charge per object (128 KB) and a per GB retrieval fee. INCORRECT: “Amazon S3 Glacier Deep Archive” is incorrect as this storage class is used for archiving data. There are retrieval fees and it take hours to retrieve data from an archive. References: https://aws. amazon.com/s3/storage-classes/ Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect- associate/storage/amazon-s3/

QUESTION 112
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores
user- uploaded documents in an Amazon EBS volume.
For better scalability and availability, the company duplicated the architecture and created a
second EC2 instance and EBS volume in another Availability Zone, placing both behind an
Application Load Balancer.
After completing this change, users reported that each time they refreshed the website they could
see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS.
Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers.
Return each document from the correct server.
Answer: C
一家公司正在使用单个Amazon EC2实例在AWS上托管Web应用程序,该实例将用户上传的文档存储在Amazon EBS卷中。为了获得更好的可伸缩性和可用性,该公司复制了架构,并在另一个可用区中创建了第二个EC2实例和EBS卷,将两者都放置在Application Load Balancer后面。完成此更改后,用户报告说,每次刷新网站时,他们只能看到其文档的其中一个子集,却无法同时看到所有文档。解决方案架构师应采取什么措施来确保用户一次就能看到其所有文档?
A. 复制数据,以便两个EBS卷都包含所有文档。
B. 配置Application Load Balancer,将用户引导到存有其文档的服务器。
C. 将两个EBS卷中的数据复制到Amazon EFS。修改应用程序以将新文档保存到Amazon EFS。
D. 配置Application Load Balancer将请求发送到两台服务器,并从正确的服务器返回每个文档。

尽管EBS和EFS都提供了出色的功能,但这两个存储解决方案实际上是为两种完全不同的用途而构建的。 EBS卷仅限于一个实例,更重要的是,一次只能由一个实例访问。使用EFS,您可以使数百或数千个实例同时访问文件系统。这使得AWS EFS非常适合需要良好性能的集中式共享存储的任何使用,例如媒体处理或共享代码存储库。

QUESTION 113
You are building infrastructure for a data warehousing solution, and an extra requirement has come
through: there will be a lot of business reporting queries running all the time, and you are not
sure whether your current DB instance will be able to handle the load. What would be the best solution for this?
A. DB Parameter Groups
B. Read Replicas
C. Multi-AZ DB Instance deployment
D. Database Snapshots
Answer: B
您正在为数据仓库解决方案构建基础结构,并且提出了额外的要求,即始终有大量业务报告查询在运行,并且您不确定当前的数据库实例是否能够处理它。最好的解决方案是什么? A.数据库参数组B.只读副本C.多可用区数据库实例部署D.数据库快照

Explanation: Read Replicas make it easy to take advantage of MySQL’s built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB Instance may make sense. Common reasons for deploying a Read Replica include: Scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads. This excess read traffic can be directed to one or more Read Replicas. Serving read traffic while the source DB Instance is unavailable. If your source DB Instance cannot take IO requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your Read Replica(s). For this use case, keep in mind that the data on the Read Replica may be “stale” since the source DB Instance is unavailable. Business reporting or data warehousing scenarios; you may want business reporting queries to rนn against a Read Replica, rather than your primary, production DB Instance. Reference: https://aws.amazon.com/rds/faqs/
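
A hedged boto3 sketch of creating the read replica described above and pointing reporting queries at its endpoint; the instance identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Identifiers are placeholders. Reporting queries are then pointed at the
# replica's endpoint instead of the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="warehouse-reporting-replica",
    SourceDBInstanceIdentifier="warehouse-primary",
)

replica = rds.describe_db_instances(
    DBInstanceIdentifier="warehouse-reporting-replica"
)["DBInstances"][0]

# The Endpoint key appears once the replica becomes available.
print(replica["Endpoint"]["Address"] if "Endpoint" in replica else "replica still creating")
```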

QUESTION 114
In DynamoDB, could you use IAM to grant access to Amazon DynamoDB resources and API
actions?
A. In DynamoDB there is no need to grant access
B. Depends on the type of access
C. No
D. Yes
Answer: D
在DynamoDB中,您可以使用IAM授予对Amazon DynamoDB资源和API操作的访问权限吗? A.在DynamoDB中,无需授予访问权限 b取决于访问类型C。否D.是

Explanation: Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role. Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/

QUESTION 115
Much of your company's data does not need to be accessed often, and can take several hours for
retrieval time, so it's stored on Amazon Glacier. However, someone within your organization has
expressed concerns that his data is more sensitive than the other data, and is wondering whether
the high level of encryption that he knows is on S3 is also used on the much cheaper Glacier
service. Which of the following statements would be most applicable in regards to this concern?
A. There is no encryption on Amazon Glacier, that's why it is cheaper.
B. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than
Amazon S3, but you can change it to AES-256 if you are willing to pay more.
C. Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3.
D. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than
Amazon S3.
Answer: C
公司的许多数据不需要经常访问,并且可能需要花费几个小时才能检索,因此将其存储在Amazon Glacier上。但是,您组织中的某人已经表示担心,他的数据比其他数据更敏感,并且想知道他知道的S3上的高级加密是否也用于便宜得多的Glacier服务上。关于此问题,以下哪种说法最适用?答:Amazon Glacier上没有加密,这就是为什么它更便宜。 B. Amazon Glacier使用比Amazon S3少的加密方法使用AES-128自动加密数据,但是如果您愿意支付更多费用,则可以将其更改为AES 256。 C. Amazon Glacier与Amazon S3一样,使用AES-256自动加密数据。 D. Amazon Glacier使用AES-128自动加密数据,这是比Amazon S3少的加密方法

Explanation: Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable.

Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks, and is built to be automatically self-healing. Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

QUESTION 116
Your EBS volumes do not seem to be performing as expected and your team leader has
requested you look into improving their performance. Which of the following is not a true
statement relating to the performance of your EBS volumes?

A. Frequent snapshots provide a higher level of data durability and they will not degrade the
performance of your application while the snapshot is in progress.
B. General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128
MB/s per volume.
C. There is a relationship between the maximum performance of your EBS volumes, the amount of
I/O you are driving to them, and the amount of time it takes for each transaction to complete.
D. There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly
created or restored EBS volume
Answer: A

您的EBS量似乎表现不理想,而您的团队负责人
要求您研究改善其性能。 以下哪项是不正确的
有关您的EBS卷性能的声明?
答:频繁快照可提供更高级别的数据持久性,并且不会降低快照质量。
快照进行过程中应用程序的性能。
B.通用(SSD)和预配置IOPS(SSD)卷的吞吐量限制为128
每卷MB / s。
C.您的EBS卷的最大性能与
您正在驱动他们的I / O,以及完成每个事务所花费的时间。
D.当您第一次访问新的数据块时,IOPS降低了5%到50%
创建或还原的EBS卷

Explanation: Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration. Frequent snapshots provide a higher level of data durability, but they may slightly degrade the performance of your application while the snapshot is in progress. This trade-off becomes critical when you have data that changes rapidly. Whenever possible, plan for snapshots to occur during off-peak times in order to minimize workload impact. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html

QUESTION 117
You've created your first load balancer and have registered your EC2 instances with the load
balancer. Elastic Load Balancing routinely performs health checks on all the registered EC2
instances and automatically distributes all incoming requests to the DNS name of your load
balancer across your registered, healthy EC2 instances. By default, the load balancer uses the
____ protocol for checking the health of your instances.

A. HTTPS
B. HTTP
C. ICMP
D. IPv6
Answer: B
您已经创建了第一个负载均衡器,并已向负载注册了EC2实例。
平衡器。 Elastic Load Balancing定期对所有已注册的EC2执行运行状况检查
实例并自动将所有传入请求分发到您的负载的DNS名称
您已注册的健康EC2实例之间的平衡器。 默认情况下,负载均衡器使用
_protocol,用于检查实例的运行状况。

A.HTTPS
B.HTTP
C.ICMP
D.IPv6

Explanation: In Elastic Load Balancing, a health configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer. Currently, HTTP on port 80 is the default health check. Reference: https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
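The default above refers to the Classic Load Balancer; with the newer Application Load Balancer the health check is configured on the target group. A minimal boto3 sketch, assuming a hypothetical target group ARN and health check path:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Adjust the health check on an existing ALB target group.
# The target group ARN below is a placeholder for illustration.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
    HealthCheckProtocol="HTTP",
    HealthCheckPort="80",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
```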

QUESTION 118
A major finance organisation has engaged your company to set up a large data mining
application. Using AWS you decide the best service for this is Amazon Elastic MapReduce(EMR)
which you know uses Hadoop. Which of the following statements best describes Hadoop?
A. Hadoop is 3rd Party software which can be installed using AMI
B. Hadoop is an open source python web framework
C. Hadoop is an open source Java software framework
D. Hadoop is an open source javascript framework
Answer: C
一家大型财务组织已聘请您的公司来建立大数据挖掘
应用。 使用AWS可以确定最适合此服务的服务是Amazon Elastic MapReduce(EMR)
您知道使用Hadoop。 以下哪个语句最能描述Hadoop?
A. Hadoop是可以使用AMI安装的第三方软件
B. Hadoop是一个开源python Web框架
C. Hadoop是一个开源Java软件框架
D. Hadoop是一个开源javascript框架
答案:C

Explanation: Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware. Hadoop implements a programming model named “MapReduce,” where the data is divided into many small fragments of work, each of which may be executed on any node in the cluster. This framework has been widely used by developers, enterprises and startups and has proven to be a reliable software platform for processing up to petabytes of data on clusters of thousands of commodity machines. Reference: http://aws.amazon.com/elasticmapreduce/faqs/

QUESTION 119
In Amazon EC2 Container Service, are other container types supported?
A. Yes, EC2 Container Service supports any container service you need.
B. Yes, EC2 Container Service also supports Microsoft container service.
C. No, Docker is the only container platform supported by EC2 Container Service presently.
D. Yes, EC2 Container Service supports Microsoft container service and Openstack.
Answer: C
在Amazon EC2容器服务中,是否支持其他容器类型?
答:是的,EC2容器服务支持您需要的任何容器服务。
b是,EC2容器服务还支持Microsoft容器服务。
C.不,Docker是目前EC2 Container Service支持的唯一容器平台,
D.是的,EC2 Container Service支持Microsoft容器服务和Openstack。
答案:C

Explanation: In Amazon EC2 Container Service, Docker is the only container platform supported by EC2 Container Service presently. Reference: http://aws.amazon.com/ecs/faqs/

QUESTION 120
A Solutions Architect is designing the architecture for a web application that will be hosted on
AWS. Internet users will access the application using HTTP and HTTPS.
How should the Architect design the traffic control requirements?
A. Use a network ACL to allow outbound ports for HTTP and HTTPS. Deny other traffic for inbound
and outbound.
B. Use a network ACL to allow inbound ports for HTTP and HTTPS. Deny other traffic for inbound
and outbound.
C. Allow inbound ports for HTTP and HTTPS in the security group used by the web servers.
D. Allow outbound ports for HTTP and HTTPS in the security group used by the web servers.
Answer: C
解决方案架构师正在设计将在以下位置托管的Web应用程序的体系结构
AWS。 Internet用户将使用HTTP和HTTPS访问该应用程序。
架构师应如何设计交通控制要求?
A.使用网络ACL允许HTTP和HTTPS的出站端口,拒绝入站的其他流量
和出站。
B.使用网络ACL允许HTTP和HT TPS的入站端口。 拒绝入站的其他流量
和出站。
C.在Web服务器使用的安全组中允许HTTP和HTTPS的入站端口。
D.在Web服务器使用的安全组中允许HTTP和HTTPS的出站端口。
答案:C
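For reference, the security group from the correct answer can be expressed with boto3 roughly as follows (a sketch only; the VPC ID and group name are placeholders). Security groups are stateful, so return traffic for these inbound rules is allowed automatically:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security group for the web servers: allow inbound HTTP/HTTPS from anywhere.
# Outbound traffic is allowed by default.
sg = ec2.create_security_group(
    GroupName="web-servers-sg",                 # placeholder name
    Description="Allow inbound HTTP and HTTPS",
    VpcId="vpc-0abc1234def567890",              # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```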
QUESTION 121
A solutions architect is designing a system to analyze the performance of financial markets while
the markets are closed.
The system will run a series of compute-intensive jobs for 4 hours every night.
The time to complete the compute jobs is expected to remain constant, and jobs cannot be
interrupted once started.
Once completed, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
A. Spot Instances
B. On-Demand Instances
C. Standard Reserved Instances
D. Scheduled Reserved Instances
Answer: D
解决方案架构师正在设计一个系统来分析金融市场的表现,同时
市场关闭。
该系统每晚将运行一系列计算密集型作业,持续4小时。
预计完成计算作业的时间将保持不变,并且作业不能
一旦开始中断。
一旦完成,该系统预计将运行至少一年。
应该使用哪种类型的Amazon EC2实例来降低系统成本?
A.竞价型实例
B.按需实例
C.标准预留实例
D.预定的预留实例

Explanation: Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them. Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week.

通过计划的预留实例(Scheduled Instances),您可以购买按每天、每周或每月为周期重复、具有指定开始时间和持续时间的容量预留,期限为一年。您可以预先预留容量,以便知道在需要时可用;即使不使用这些实例,您也需要为已计划的时间付费。对于不是连续运行但会定期运行的工作负载,计划实例是不错的选择。例如,您可以将计划实例用于在工作时间运行的应用程序,或在每周末尾运行的批处理。CORRECT: “Scheduled Reserved Instances” is the correct answer. INCORRECT: “Standard Reserved Instances” is incorrect as the workload only runs for 4 hours a day; this would be more expensive. INCORRECT: “On-Demand Instances” is incorrect as this would be much more expensive as there is no discount applied. INCORRECT: “Spot Instances” is incorrect as the workload cannot be interrupted once started. With Spot Instances, workloads can be terminated if the Spot price changes or capacity is required. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

QUESTION 122
A company hosts a static website on-premises and wants to migrate the website to AWS.
The website should load as quickly as possible for users around the world.
The company also wants the most cost- effective solution.
What should a solutions architect do to accomplish this?
A. Copy the website content to an Amazon S3 bucket.
Configure the bucket to serve static webpage content.
Replicate the S3 bucket to multiple AWS Regions
B. Copy the website content to an Amazon S3 bucket.
Configure the bucket to serve static webpage content.
Configure Amazon CloudFront with the S3 bucket as the origin
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server.
Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions.
Configure Amazon Route 53 geolocation routing policies to select the closest origin.
Answer: B
一家公司在本地托管一个静态网站,并希望将该网站迁移到AWS。
该网站应尽快为世界各地的用户加载。
该公司还希望获得最具成本效益的解决方案。
解决方案架构师应该怎么做才能做到这一点?
A.将网站内容复制到Amazon S3存储桶。
配置存储桶以提供静态网页内容。
将S3存储桶复制到多个AWS区域
B.将网站内容复制到Amazon S3存储桶。
配置存储桶以提供静态网页内容。
以S3存储桶为源配置Amazon CloudFront
C。将网站内容复制到Amazon EBS支持的网站。
运行Apache HTTP Server的Amazon EC2实例。
配置Amazon Route 53地理位置路由策略以选择最接近的来源
D.将网站内容复制到多个由Amazon EBS支持的网站。
在多个AWS区域中运行Apache HTTP Server的Amazon EC2实例。
配置Amazon CloudFront地理位置路由策略以选择最接近的来源
答案:B

Explanation: The most cost-effective option is to migrate the website to an Amazon S3 bucket and configure that bucket for static website hosting. To enable good performance for global users the solutions architect should then configure a CloudFront distribution with the S3 bucket as the origin. This will cache the static content around the world closer to users. CORRECT: “Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin” is the correct answer. INCORRECT: “Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions” is incorrect as there is no solution here for directing users to the closest Region. This could be a more cost-effective (though less elegant) solution if Amazon Route 53 latency records are created. INCORRECT: “Copy the website content to an Amazon EC2 instance. Configure Amazon Route 53 geolocation routing policies to select the closest origin” is incorrect as using Amazon EC2 instances is less cost-effective compared to hosting the website on S3. Also, geolocation routing does not achieve anything with only a single record. INCORRECT: “Copy the website content to multiple Amazon EC2 instances in multiple AWS Regions. Configure Amazon Route 53 geolocation routing policies to select the closest Region” is incorrect as using Amazon EC2 instances is less cost-effective compared to hosting the website on S3. References: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/ Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/
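A minimal sketch of the first half of the correct answer, enabling static website hosting on the bucket with boto3 (the bucket name and document keys are placeholders); the bucket would then be set as the origin of a CloudFront distribution:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-static-site-bucket"  # placeholder bucket name

# Enable static website hosting on the bucket. A CloudFront distribution
# is then configured separately with this bucket as its origin so content
# is cached at edge locations close to users worldwide.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```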

QUESTION 123
A solutions architect is implementing a document review application using an Amazon S3 bucket
for storage.
The solution must prevent accidental deletion of the documents and ensure that all versions of
the documents are available.
Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Select TWO.)
A. Enable a read-only bucket ACL
B. Enable versioning on the bucket
C. Attach an IAM policy to the bucket
D. Enable MFA Delete on the bucket
E. Encrypt the bucket using AWS KMS

Answer: BD
解决方案架构师正在使用Amazon S3存储桶实施文档审阅应用程序
用于存储。
解决方案必须防止意外删除文档,并确保所有版本的
这些文件可用。
用户必须能够下载,修改和上传文档。
应该采取哪些行动组合才能满足这些要求? (选择两个)
答:启用读取。 唯一存储桶ACL
B.在存储桶上启用版本控制
C.将IAM策略附加到存储桶
D.在存储桶上启用MFA删除
E.使用AWS KMS加密存储桶

Explanation: None of the options present a good solution for specifying the permissions required to write and modify objects, so that requirement needs to be taken care of separately. The other requirements are to prevent accidental deletion and to ensure that all versions of the documents are available. The two solutions for these requirements are versioning and MFA delete. Versioning will retain a copy of each version of the document, and multi-factor authentication delete (MFA delete) will prevent any accidental deletion as you need to supply a second factor when attempting a delete.

没有一个选项为指定编写和修改对象所需的权限提供了一个好的解决方案,因此需要单独处理该要求。其他要求是防止意外删除,并确保文档的所有版本均可用。满足这些要求的两个解决方案是版本控制和MFA删除。版本控制将保留文档每个版本的副本,并且多因素身份验证删除(MFA删除)将防止任何意外删除,因为您在尝试删除时需要提供第二个因素。

CORRECT: “Enable versioning on the bucket” is a correct answer. CORRECT: “Enable MFA Delete on the bucket” is also a correct answer. INCORRECT: “Enable a read-only bucket ACL” is incorrect as this will also prevent any writing to the bucket, which is not desired. INCORRECT: “Attach an IAM policy to the bucket” is incorrect as users need to modify documents, which will also allow delete. Therefore, a method must be implemented to just control deletes. INCORRECT: “Encrypt the bucket using AWS KMS” is incorrect as encryption doesn't stop you from deleting an object. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
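A hedged boto3 sketch of the two correct actions (the bucket name and MFA serial/token are placeholders). Note that MFA Delete can only be enabled with the bucket owner's root credentials, typically via the API or CLI rather than the console:

```python
import boto3

s3 = boto3.client("s3")
bucket = "document-review-bucket"  # placeholder bucket name

# Versioning can be enabled by any principal with s3:PutBucketVersioning.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Enabling MFA Delete additionally requires the root account's MFA device
# serial number and a current token code (both placeholders below).
s3.put_bucket_versioning(
    Bucket=bucket,
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```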

QUESTION 124
A company built a food ordering application that captures user data and stores it for future
analysis.
The application's static front end is deployed on an Amazon EC2 instance.
The front-end application sends the requests to the backend application running on separate EC2
instance.
The backend application then stores the data in Amazon RDS
What should a solutions architect do to decouple the architecture and make it scalable?
A. Use Amazon S3 to serve the front-end application which sends requests to Amazon EC2 to
execute the backend application.
The backend application will process and store the data in Amazon RDS
B. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple
Notification Service (Amazon SNS) topic.
Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic and process and
store the data in Amazon RDS
C. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue.
Place the backend instance in an Auto Scaling group and scale based on the queue depth to
process and store the data in Amazon RDS
D. Use Amazon S3 to serve the static front -end application and send requests to Amazon API
Gateway which writes the requests to an Amazon SQS queue,
Place the backend instances in an Auto Scaling group and scale based on the queue depth to
process and store the data in Amazon RDS
Answer: D
一家公司构建了一个食品订购应用程序,可以捕获用户数据并将其存储以备将来使用
分析。
应用程序的静态前端部署在Amazon EC2实例上。
前端应用程序将请求发送到在单独的EC2上运行的后端应用程序
实例。
然后,后端应用程序将数据存储在Amazon RDS中
解决方案架构师应该怎么做才能使架构脱钩并使其可扩展”
A.使用Amazon S3来服务将请求发送到Amazon EC2的前端应用程序
执行后端应用程序。
后端应用程序将处理数据并将其存储在Amazon RDS中
B.使用Amazon S3来服务前端应用程序并将请求写入Amazon Simple
通知服务(Amazon SNS)主题。
将Amazon EC2实例订阅到主题和过程的HTTP / HTTPS终端节点,以及
将数据存储在Amazon RDS中
C.使用EC2实例服务前端,并将请求写入Amazon SQS队列。
将后端实例放置在Auto Scaling组中,然后根据队列深度进行扩展,以达到
处理数据并将其存储在Amazon RDS中
D.使用Amazon S3服务静态前端应用程序并将请求发送到Amazon API
将请求写入Amazon SQS队列的网关,
将后端实例放置在Auto Scaling组中,然后根据队列深度进行扩展,以达到
处理数据并将其存储在Amazon RDS中

Explanation: Keywords: Static + Decouple + Scalable. Static = S3, Decouple = SQS queue, Scalable = Auto Scaling group. Option B is out because it has no Auto Scaling. Option A is out because it does not decouple the tiers. That leaves Options C and D, and Option D is the correct answer because it matches all three requirements [Static = S3; Decouple = SQS queue; Scalable = ASG]; Option C loses because it does not serve the static front end from S3. Reference:

Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/
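To make the decoupling concrete, a small boto3 sketch of the queue side of the correct answer (the queue name is a placeholder); the Auto Scaling group would then scale the backend on a metric derived from the queue depth:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Queue that decouples the API Gateway front end from the backend workers.
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# The Auto Scaling group can scale on the queue backlog, for example by
# publishing ApproximateNumberOfMessages (or backlog per instance) as the
# scaling metric for a target tracking policy.
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)
print("Messages waiting:", attrs["Attributes"]["ApproximateNumberOfMessages"])
```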

QUESTION 125
A Solutions Architect must design a web application that will be hosted on AWS, allowing users to
purchase access to premium, shared content that is stored in an S3 bucket.
Upon payment, content will be available for download for 14 days before the user is denied
access.
Which of the following would be the LEAST complicated implementation?
A. Use an Amazon CloudFront distribution with an origin access identity (OAl)
Configure the distribution with an Amazon S3 origin to provide access to the file through signed
URLs
Design a Lambda function to remove data that is older than 14 days

B. Use an S3 bucket and provide direct access to the file.
Design the application to track purchases in a DynamoDB table.
Configure a Lambda function to remove data that is older than 14 days based on a query to
Amazon DynamoDB.
C. Use an Amazon CloudFront distribution with an OAI.
Configure the distribution with an Amazon S3 origin to provide access to the file through signed
URLs.
Design the application to set an expiration of 14 days for the URL.
D. Use an Amazon CloudFront distribution with an OAI.
Configure the distribution with an Amazon S3 origin to provide access to the file through signed
URLs.
Design the application to set an expiration of 60 minutes for the URL and recreate the URL as
necessary.
Answer: C
解决方案架构师必须设计一个将托管在AWS上的Web应用程序,以便用户能够
购买对存储在S3存储桶中的高级共享内容的访问权。
付款后,可以在拒绝用户之前的14天内下载内容访问。
以下哪一项是最简单的实现?
A.使用具有原始访问身份(OAl)的Amazon CloudFront分配
使用Amazon S3来源配置分发以通过签名提供对文件的访问
网址设计Lambda函数以删除早于14天的数据
b。使用S3存储桶并直接访问图块
设计应用程序以在DynamoDH表中跟踪购买
配置Lambda函数以根据查询删除超过14天的数据亚马逊DynamoDB
c。将Amazon CloudFront分配与OAI一起使用
使用Amazon S3来源配置分发以通过签名提供对文件的访问
网址将应用程序设计为使URL过期14天
d。将Amazon CloudFront分配与OAI一起使用
使用Amazon S3来源配置分发以通过签名提供对文件的访问
网址设计应用程序以将URL设置为60分钟的到期时间,然后将URL重新创建为
必要

QUESTION 126
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.

Answer: A Explanation: The maximum size of a single file that can be delivered through Amazon CloudFront is 20 GB. This limit applies to all Amazon CloudFront distributions.

QUESTION 127
A company captures clickstream data from multiple websites and analyzes it using batch
processing.
The data is loaded nightly into Amazon Redshift and 'is consumed by business analysts.
The company wants to move towards near-real-time data processing for timely insights.
The solution should process the streaming data with minimal effort and operational overhead.
Which combination of AWS services are MOST cost-effective for this solution? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon Kinesis Data Streams
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics
Answer: DE

一家公司从多个网站捕获点击流数据并使用批处理对其进行分析
处理。
数据每晚都会加载到Amazon Redshift中,并由业务分析师使用。
该公司希望转向近实时数据处理,以便及时了解情况。
该解决方案应以最少的工作量和操作开销来处理流数据。
对于该解决方案,哪种AWS服务组合最具有成本效益? (选择两个。)

https://d0.awsstatic.com/whitepapers/whitepaper-streaming-data-solutions-on-aws-with-amazon-kinesis.pdf (p. 9) https://aws.amazon.com/kinesis/#Evolve_from_batch_to_real-time_analytics

C和D大多在做类似的事情:获取流数据并传递到下一个环节。C(Kinesis Data Streams)是自定义程度更高的选项,需要额外精力手动扩展和配置;D(Kinesis Data Firehose)是更简单的方法,直接将流数据交付给其他AWS服务(在本例中为Kinesis Data Analytics)。此外,Firehose 原生支持在需要时将数据加载到 Redshift。参见下方的示例代码。

参照Kinesis一章

  • Kinesis Data Firehose:将数据加载到AWS数据存储上
  • Kinesis Data Analytics:使用SQL分析数据流
  • Kinesis Data Streams (Kinesis Streams):使用自定义的应用程序分析数据流
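A minimal boto3 sketch of writing a clickstream record to a Firehose delivery stream (the stream name and event fields are placeholders; the stream itself would be configured separately with Amazon Redshift or Kinesis Data Analytics as its destination):

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Deliver a single clickstream event to the delivery stream; Firehose
# buffers records and loads them into the configured destination.
event = {"page": "/product/42", "user": "u-123"}
firehose.put_record(
    DeliveryStreamName="clickstream-to-redshift",  # placeholder stream name
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```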
QUESTION 128
A company is migrating a three-tier application to AWS.
The application requires a MySQL database. In the past, the application users reported poor
application performance when creating new entries.
These performance issues were caused by users generating different real-time reports from the
application duringworking hours.
Which solution will improve the performance of the application when it is moved to AWS?
A. Import the data into an Amazon DynamoDB table with provisioned capacity.
Refactor the application to use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance.
Ensure compute resources exceed the on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas.
Configure the application reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster.
Configure the application to use the backup instance of the cluster as an endpoint for the reports.
Answer: C
一家公司正在将三层应用程序迁移到AWS。
该应用程序需要一个MySQL数据库。 过去,应用程序用户报告的效果不佳
创建新条目时的应用程序性能。
这些性能问题是由用户从
在工作时间内申请。
哪种解决方案将在将应用程序移至AWS时可提高其性能?
A.将数据导入具有预配置容量的Amazon DynamoDB表中。
重构应用程序以使用DynamoDB生成报告。
B.在经过计算优化的Amazon EC2实例上创建数据库。
确保计算资源超出本地数据库。
C.创建具有多个只读副本的Amazon Aurora MySQL Multi-AZ数据库集群。
配置报告的应用程序阅读器端点。
D.创建一个Amazon Aurora MySQL Multi-AZ数据库集群。
配置应用程序以将群集的备份实例用作报告的端点。

Explanation: The MySQL-compatible edition of Aurora delivers up to 5X the throughput of standard MySQL running on the same hardware, and enables existing MySQL applications and tools to run without requiring modification. https://aws.amazon.com/rds/aurora/mysql-features/

QUESTION 129
A start-up company has a web application based in the us-east-1 Region with multiple Amazon
EC2 instances running behind an Application Load Balancer across multiple Availability Zones.
As the company's user base grows in the us-west-1 Region, it needs a solution with low latency
and high availability,
What should a solutions architect do to accomplish this?
A. Provision EC2 instances in us-west-1.
Switch the Application Load Balancer to a Network Load Balancer to achieve cross-Region load
balancing.
B. Provision EC2 instances and an Application Load Balancer in us-west-1.
Make the load balancer distribute the traffic based on the location of the request.
C. Provision EC2 instances and configure an Application Load Balancer in us-west-1.
Create an accelerator in AWS Global Accelerator that uses an endpoint group that includes the
load balancer endpoints in both Regions.
D. Provision EC2 instances and configure an Application Load Balancer in us-west-1.
Configure Amazon Route 53 with a weighted routing policy,
Create alias records in Route 53 that point to the Application Load Balancer.
一家初创公司拥有一个基于us-east-1区域的Web应用程序,其中包含多个Amazon
在多个可用区中的Application Load Balancer后面运行的EC2实例。
随着公司用户群在美国西部1地区的增长,它需要低延迟的解决方案
和高可用性,
解决方案架构师应该怎么做才能做到这一点?
A.在us-west-1中配置EC2实例。
将应用程序负载平衡器切换到网络负载平衡器以实现跨区域负载
平衡。
B.在us-west-1中配置EC2实例和一个Application Load Balancer。
使负载均衡器根据请求的位置分配流量。
C。在us-west-1中配置EC2实例并配置Application Load Balancer。
在AWS Global Accelerator中创建一个加速器,该加速器使用包含以下内容的端点组:
两个区域中的负载均衡器端点。
D.设置EC2实例并在us-west-1中配置一个应用程序负载平衡器。
使用加权路由策略配置Amazon Route 53,
在Route 53中创建指向Application Load Balancer的别名记录。

Answer: C Explanation: “ELB provides load balancing within one Region, AWS Global Accelerator provides traffic management across multiple Regions [..] AWS Global Accelerator complements ELB by extending these capabilities beyond a single AWS Region, allowing you to provision a global interface for your applications in any number of Regions. If you have workloads that cater to a global client base, we recommend that you use AWS Global Accelerator. If you have workloads hosted in a single AWS Region and used by clients in and around the same Region, you can use an Application Load Balancer or Network Load Balancer to manage your resources.” https://aws.amazon.com/global-accelerator/faqs/

ELB在一个区域内提供负载平衡,AWS Global Accelerator在多个区域之间提供流量管理[..] AWS Global Accelerator通过将这些功能扩展到单个AWS区域之外,对ELB进行了补充,允许您为任意数量的应用程序提供全局接口。地区。如果您有满足全球客户群的工作负载,我们建议您使用AWS Global Accelerator。如果您的工作负载托管在单个AWS区域中,并且由同一区域内及其周围的客户端使用,则可以使用Application Load Balancer或Network Load Balancer来管理资源。

为端点组注册端点:在每个端点组中注册一个或多个区域资源,例如应用程序负载平衡器,网络负载平衡器,EC2实例或弹性IP地址。然后,您可以设置权重以选择路由到每个端点的流量。
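A hedged boto3 sketch of the correct answer: create an accelerator, a listener, and one endpoint group per Region pointing at that Region's Application Load Balancer (all names and ARNs are placeholders; the Global Accelerator API is served from the us-west-2 endpoint):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="web-app-accelerator", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}, {"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each containing that Region's ALB.
regional_albs = [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/east/abc"),
    ("us-west-1", "arn:aws:elasticloadbalancing:us-west-1:123456789012:loadbalancer/app/west/def"),
]
for region, alb_arn in regional_albs:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```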

QUESTION 130
A company is planning to migrate a business-critical dataset to Amazon S3.
The current solution design uses a single S3 bucket in the us-east-1 Region with versioning
enabled to store the dataset.
The company's disaster recovery policy states that all data must be stored in multiple AWS Regions.
How should a solutions architect design the S3 solution?
A. Create an additional S3 bucket in another Region and configure cross-Region replication.
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing
(CORS).
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region
replication,
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin
resource (CORS).
Answer: C
一家公司计划将关键业务数据集迁移到Amazon S3。
当前的解决方案设计在us-east-1 Region中使用单个S3存储桶并进行版本控制
已启用以存储数据集。
该公司的灾难恢复策略规定,所有数据都属于多个AWS区域。
解决方案架构师应如何设计S3解决方案?
A.在另一个区域中创建另一个S3存储桶,并配置跨区域复制。
B.在另一个区域中创建另一个S3存储桶,并配置跨域资源共享
(CORS)。
C.在另一个区域中创建另一个带有版本控制的S3存储桶,并配置跨区域
复制,
D.在另一个区域中创建另一个带有版本控制的S3存储桶,并配置跨域
资源(CORS)。
答案:C

Explanation: Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region. Both source and destination buckets must have versioning enabled. CORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-Region replication” is the correct answer. INCORRECT: “Create an additional S3 bucket in another Region and configure cross-Region replication” is incorrect as the destination bucket must also have versioning enabled. INCORRECT: “Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication. INCORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
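A minimal boto3 sketch of the replication configuration from the correct answer (the bucket names and the IAM role ARN are placeholders; both buckets must already exist with versioning enabled, and the role must allow S3 to replicate on your behalf):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="critical-dataset-us-east-1",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::critical-dataset-eu-west-1"},
            }
        ],
    },
)
```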

QUESTION 131
A company has applications running on Amazon EC2 instances in a VPC.
One of the applications needs to call an Amazon S3 API to store and read objects,
The company's security policies restrict any internet-bound traffic from the applications.
Which action will fulfil these requirements and maintain security?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.
Answer: B
公司的应用程序在VPC的Amazon EC2实例上运行。
应用程序之一需要调用Amazon S3 API来存储和读取对象,
公司的安全策略限制了来自应用程序的任何互联网绑定流量。
哪些措施可以满足这些要求并维护安全性?
A.配置一个S3接口端点。
B.配置一个S3网关端点。
C.在专用子网中创建一个S3存储桶。
D.在与EC2实例相同的Region中创建一个S3存储桶。

Explanation: Gateway endpoints are available for S3 and DynamoDB. References: https://medium.com/tensult/aws-vpc-endpin--4422 https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
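A minimal boto3 sketch of creating the gateway endpoint from the correct answer (the VPC ID and route table ID are placeholders); the endpoint adds a route for the S3 prefix list to the selected route tables so traffic to S3 stays on the AWS network:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for the Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)
```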

QUESTION 132
A company's web application uses an Amazon RDS PostgreSQL DB instance to store its
application data.
During the financial closing period at the start of every month, accountants run large queries that
impact the database's performance due to high usage.
The company wants to minimize the impact that the reporting activity has on the web application.
What should a solutions architect do to reduce the impact on the database with the LEAST
amount of effort?
A. Create a read replica and direct reporting traffic to the replica.
B. Create a Multi-AZ database and direct reporting traffic to the standby.
C. Create a cross-Region read replica and direct reporting traffic to the replica.
D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift
database.
Answer: A

公司的Web应用程序使用Amazon RDS PostgreSQL数据库实例存储jit
应用程序数据。
在每个月初的财务结算期间。 会计师运行大型查询
高使用率会影响数据库的性能。
该公司希望最大程度地减少报告活动对Web应用程序的影响。
解决方案架构师应采取什么措施以使用LEAST减少对数据库的影响
多少努力?
A.创建一个只读副本并将报告流量定向到该副本。
B.创建一个多可用区数据库,并将报告流量定向到备用数据库。
C.创建跨区域的只读副本,并将报告流量定向到该副本。
D.创建一个Amazon Redshift数据库并将报告流量定向到Amazon Redshift
数据库。

Explanation: Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

QUESTION 133
A company must generate sales reports at the beginning of every month.
The reporting process launches 20 Amazon EC2 instances on the first of the month.
The process runs for 7 days and cannot be interrupted. The company wants to minimize costs.
Which pricing model should the company choose?
A. Reserved Instances
B. Spot Block Instances
C. On-Demand Instances
D. Scheduled Reserved Instances
Answer: D
公司必须在每个月初生成销售报告。
该报告流程在每月的第一天启动20个Amazon EC2实例。
该过程运行7天,不能中断。 该公司希望将成本降到最低。
公司应选择哪种定价模式?
A.预留实例
B.竞价块实例
C.按需实例
D.预定的预留实例

Explanation: Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them. Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html

QUESTION 134
A company is hosting a website behind multiple Application Load Balancers.
The company has different distribution rights for its content around the world.
A solutions architect needs to ensure that users are served the correct content without violating
distribution rights.
Which configuration should the solutions architect choose to meet these requirements?
A. Configure Amazon CloudFront with AWS WAF.
B. Configure Application Load Balancers with AWS WAF.
C. Configure Amazon Route 53 with a geolocation policy,
D. Configure Amazon Route 53 with a geoproximity routing policy.
Answer: C

一家公司正在多个应用程序负载平衡器后面托管一个网站。
该公司对其内容在世界各地具有不同的发行权。
解决方案架构师需要确保为用户提供正确的内容而不会违反
发行权。
解决方案架构师应选择哪种配置来满足这些要求?
A.使用AWS WAF配置Amazon CloudFront。
B.使用AWS WAF配置应用程序负载平衡器。
C.使用地理位置策略配置Amazon Route 53,
D.使用地理邻近性路由策略配置Amazon Route 53。

Explanation: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html (geolocation routing)
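A hedged boto3 sketch of geolocation records: viewers in one continent are served one Application Load Balancer, everyone else falls back to a default record (the hosted zone ID, record name, and ALB DNS names are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {   # Viewers located in Europe get the EU load balancer.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "europe",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "ResourceRecords": [{"Value": "eu-alb-123.eu-west-1.elb.amazonaws.com"}],
                },
            },
            {   # Default record for all locations with no specific match.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},
                    "ResourceRecords": [{"Value": "us-alb-456.us-east-1.elb.amazonaws.com"}],
                },
            },
        ]
    },
)
```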

QUESTION 135
A company's website is using an Amazon RDS MySQL Multi-AZ DB instance for its transactional
data storage.
There are other internal systems that query this DB instance to fetch data for internal batch
processing.
The RDS DB instance slows down significantly when the internal systems fetch data.
This impacts the website's read and write performance, and the users experience slow response
times.
Which solution will improve the website's performance?
A. Use an RDS PostgreSQL DB instance instead of a MySQL database.
B. Use Amazon ElastiCache to cache the query responses for the website.
C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
D. Add a read replica to the RDS DB instance and configure the internal systems to query the read
replica.
Answer: D
一家公司的网站使用Amazon RDS MySQL'Multi-AZ数据库实例进行交易
数据存储。
还有其他内部系统查询此数据库实例以获取数据以进行内部批处理
处理
RDS数据库实例大大降低了内部系统的数据获取速度。
这会影响网站的读写性能,并且用户的响应速度会很慢
次。
哪种解决方案可以改善网站的性能?
答:使用RDS PostgreSQL数据库实例而不是MySQL数据库。
B.使用Amazon ElastiCache缓存网站的查询响应。
C.向当前的RDS MySQL Multi.AZ数据库实例添加一个额外的可用区。
D.将只读副本添加到RDS数据库实例,并配置内部系统以查询该只读副本
复制品。
QUESTION 136
A solutions architect is designing storage for a high performance computing (HPC) environment
based on Amazon Linux.
The workload stores and processes a large amount of engineering drawings that require shared
storage and heavy computing.
Which storage option would be the optimal solution?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Lustre
C. Amazon EC2 instance store
D. Amazon EBS Provisioned IOPS SSD (io1)
Answer: B
解决方案架构师正在为ล高性能计算(HPC)环境设计存储
基于Amazon Linux。
工作负载存储和处理大量需要共享的工程图
存储和重型计算。
哪个存储选项将是最佳解决方案?
A.Amazon弹性文件系统(Amazon EFS)
B.适用于Lustre的Amazon FSx
C.Amazon EC2实例存储
D.Amazon EBS预置的IOPS SSD(io1)

Explanation: https://d1.awsstatic.com/whitepapers/AWS%20HPC%20Storage%20Options_2019_FINAL.pdf (p. 8)

QUESTION 137
A company is performing an AWS Well-Architected Framework review of an existing workload
deployed on AWS.
The review identified a public-facing website running on the same Amazon EC2 instance as a
Microsoft Active Directory domain controller that was installed recently to support other AWS
services.
A solutions architect needs to recommend a new design that would improve the security of the
architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?
A. Use AWS Directory Service to create a managed Active Directory.
Uninstall Active Directory from the current EC2 instance.
B. Create another EC2 instance in the same subnet and reinstall Active Directory on it.
Uninstall Active Directory.
C. Use AWS Directory Service to create an Active Directory connector.
Proxy Active Directory requests to the Active Directory domain controller running on the current EC2
instance.
D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0
federation with the current Active Directory controller.
Modify the EC2 instance's security group to deny public access to Active Directory.
一家公司正在对现有工作负载执行AWS架构完善的审查
部署在AWS上
该审查确定了一个面向公众的网站,该网站与Amazon EC2实例在同一Amazon EC2实例上运行
最近安装以支持其他AWS的Microsoft Active Directory域控制器
服务。
解决方案架构师需要推荐一种新的设计,以提高安全性。
体系结构并最小化对IT员工的管理需求。
解决方案架构师应该建议什么?
A.使用AWS Directory Service创建托管Active Directory。
从当前的EC2实例上卸载Active Directory。
B.在同一子网中创建另一个EC2实例,然后在其上重新安装Active Directory。
卸载Active Directory。
C。使用AWS Directory Service创建Active Directory连接器。
对当前EC2上运行的Active域控制器的代理Active Directory请求
实例。
D.使用安全性声明标记语言(SAML)2.0启用AWS Single Sign-On(AWS SSO)
当前Active Directory控制器的联合。
修改EC2实例的安全组以拒绝对Active Directory的公共访问。

Answer: A Explanation: Migrate AD to AWS Managed Microsoft AD and keep the web server on its own instance. Reduce risk = remove AD from that EC2 instance. Minimize administration = remove AD from any EC2 instance and use AWS Directory Service. An Active Directory Connector is only for on-premises AD; the directory this company has already exists in the cloud.

AWS Directory Service使您可以将Microsoft Active Directory(AD)作为托管服务运行。适用于Microsoft Active Directory的AWS Directory Service(也称为AWS Managed Microsoft AD)由Windows Server 2012 R2驱动。选择并启动此目录类型后,它将创建为连接到您的虚拟私有云(VPC)的一对高可用性域控制器。域控制器在您选择的区域中的不同可用区中运行。主机监视和恢复,数据复制,快照和软件更新将自动为您配置和管理。

QUESTION 138
A company runs an application in a branch office within a small data closet with no virtualized
compute resources.
The application data is stored on an NFS volume. Compliance standards require a daily offsite
backup of the NFS volume.
Which solution meet these requirements?
A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.
B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the
data to Amazon S3.
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate
the data to Amazon S3.
D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate
the data to Amazon S3.
一家公司在没有虚拟化的小型数据柜中的分支机构中运行应用程序
计算资源。
应用程序数据存储在NFS卷上,合规性标准要求每天在现场
NFS卷的备份。
哪种解决方案满足这些要求?
A.在本地安装一个AWS Storage Gateway文件网关,以将数据复制到Amazon S3。
B.在本地安装AWS Storage Gateway文件网关硬件设备以复制
数据到Amazon S3。
C.在本地安装具有存储卷的AWS Storage Gateway卷网关以进行复制
数据发送到Amazon S3。
D.在本地安装具有缓存卷的AWS Storage Gateway卷网关以进行复制
数据发送到Amazon S3。

Answer: B Explanation: Keywords: NFS + no virtualized compute resources on premises, so a hardware appliance is required. File gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3. It can be used for on-premises applications, and for Amazon EC2-resident applications that need file storage in S3 for object-based workloads. Used for flat files only, stored directly on S3. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching.

关键字:NFS + Compliance File Gateway提供了一个虚拟的本地文件服务器,使您能够将文件作为对象存储和检索在Amazon S3中。它可用于本地应用程序以及需要在S3中存储文件以用于基于对象的工作负载的Amazon EC2驻留应用程序。仅用于平面文件,直接存储在S3上。文件网关通过本地缓存提供对Amazon S3中数据的基于SMB或NFS的访问

References: https://aws.amazon.com/blogs/aws/file-interface-to-aws-storage-gateway/ https://d0.awsstatic.com/whitepapers/aws-storage-gateway-file-gateway-for-hybrid-architectures.pdf Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/aws-storage-gateway/

QUESTION 139
An application hosted on AWS is experiencing performance problems, and the application vendor
wants to perform an analysis of the log file to troubleshoot further. The log file is stored on
Amazon S3 and is 10 GB in size. 
The application owner will make the log file available to the vendor for a limited time.
What is the MOST secure way to do this?
A. Enable public read on the S3 object and provide the link to the vendor.
B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
C. Generate a presigned URL and have the vendor download the log file before it expires.
D. Create an IAM user for the vendor to provide access to the S3 bucket and the application.
Enforce multifactor authentication.
Answer: C
AWS上托管的应用程序遇到性能问题,应用程序供应商
想要对日志文件进行分析以进一步排除故障。 日志文件存储在
Amazon S3,大小为10 GB。
应用程序所有者将在有限的时间内使日志文件对供应商可用。
最安全的方法是什么?
答:启用对S3对象的公共读取,并提供到供应商的链接。
B.将文件上传到Amazon WorkDocs并与供应商共享公共链接。
C.生成一个预签名的URL,并让供应商在日志文件过期之前下载该日志文件。
D.为供应商创建一个IAM用户,以提供对S3存储桶和应用程序的访问。
强制执行多因素身份验证。

默认情况下,所有对象都是私有的,只有对象所有者有权访问。但对象所有者可以使用自己的安全凭证创建预签名URL(presigned URL),授予他人在限定时间内下载对象的权限。为对象创建预签名URL时,必须提供安全凭证、存储桶名称、对象键、HTTP方法(GET,用于下载对象)以及过期时间。预签名URL仅在指定的有效期内有效,任何拿到该URL的人都可以访问对象。例如,如果存储桶中有一个视频,并且存储桶和对象都是私有的,则可以通过生成预签名URL与他人共享该视频。
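A minimal boto3 sketch of the correct answer (the bucket and key are placeholders); the URL stops working once ExpiresIn elapses, and 7 days is the maximum for SigV4 presigned URLs:

```python
import boto3

s3 = boto3.client("s3")

# Time-limited download link for the 10 GB log file.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "app-logs-bucket", "Key": "logs/app.log"},  # placeholders
    ExpiresIn=7 * 24 * 3600,  # valid for 7 days
)
print(url)  # hand this URL to the vendor; it expires automatically
```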

QUESTION 140
A company hosts its product information webpages on AWS.
The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in
an Auto Scaling group.
The website also uses a custom DNS name and communicates over HTTPS only, using a
dedicated SSL certificate.
The company is planning a new product launch and wants to be sure that users from around the
world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
A. Redesign the application to use Amazon CloudFront.
B. Redesign the application to use AWS Elastic Beanstalk.
C. Redesign the application to use a Network Load Balancer.
D. Redesign the application to use Amazon S3 static website hosting.
一家公司在AWS上托管其产品信息网页,
现有解决方案在以下应用程序负载均衡器后面使用多个Amazon C2实例
Auto Scaling组。
该网站还使用自定义DNS名称,并且仅使用
专用SSL证书。
该公司正在计划推出新产品,并希望确保来自各地的用户
世界在新网站上拥有最好的体验,
解决方案架构师应该怎么做才能满足这些要求?
A.重新设计应用程序以使用Amazon CloudFront。
B.重新设计应用程序以使用AWS Elastic Beanstalk。
C.重新设计应用程序以使用网络负载平衡器。
D.重新设计应用程序以使用Amazon S3静态网站托管

Answer: A

QUESTION 141
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.
What should the solutions architect do to meet these requirements?
A. Increase the minimum capacity for the Auto Scaling group.
B. Increase the maximum capacity for the Auto Scaling group.
C. Configure scheduled scaling to scale up to the desired compute level.
D. Change the scaling policy to add more EC2 instances during each scaling operation.
Answer: C
解决方案架构师观察到,在达到所需的Amazon EC2容量之前,夜间批处理作业会自动扩大1小时。 每天晚上的峰值容量是相同的,并且批处理作业始终在凌晨1点开始。 解决方案架构师需要找到一种经济高效的解决方案,以快速达到所需的EC2容量,并允许Auto Scaling组在批处理作业完成后按比例缩小规模。
解决方案架构师应该怎么做才能满足这些要求?
A.增加Auto Scaling组的最小容量。
B.增加Auto Scaling组的最大容量。
C.配置计划的缩放比例以扩展到所需的计算级别。
D.更改扩展策略以在每次扩展操作期间添加更多EC2实例。
答案:C
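A minimal boto3 sketch of the scheduled scaling actions from the correct answer (the group name, sizes, and times are placeholders; Recurrence is a cron expression interpreted in UTC):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out shortly before the 1 AM batch window so the desired capacity
# is already available when the jobs start.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-processing-asg",       # placeholder group name
    ScheduledActionName="scale-out-for-nightly-batch",
    Recurrence="55 0 * * *",                            # 00:55 every day
    MinSize=10, MaxSize=10, DesiredCapacity=10,
)

# Scale back in after the roughly 4-hour batch window completes.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-processing-asg",
    ScheduledActionName="scale-in-after-nightly-batch",
    Recurrence="30 5 * * *",
    MinSize=0, MaxSize=10, DesiredCapacity=0,
)
```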
QUESTION 142
An ecommerce company is running a multi-tier application on AWS. The front-end and backend
tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend
tier communicates with the RDS instance. There are frequent calls to return identical datasets
from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large datasets.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Answer: B

一家电子商务公司正在AWS上运行多层应用程序。 前端和后端
层均在Amazon EC2上运行。 并且该数据库在Amazon RDS for MySQL上运行。 后端
层与RDS实例进行通信。 经常调用返回相同的数据集
从数据库中导致性能下降。
应该采取什么措施来提高后端的性能?
A.实施Amazon SNS来存储数据库调用。
B.实现Amazon ElastiCache以缓存大型数据集。
C.为MySQL只读副本实现RDS以缓存数据库调用。
D.实施Amazon Kinesis Data Firehose以将调用流式传输到数据库。

只读副本和Elasticache都有助于提高性能。 这里最主要的是经常访问的数据集,因此Elasticache用于缓存经常访问的数据。
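A small cache-aside sketch using ElastiCache for Redis with the redis-py client (the endpoint, key naming, TTL, and the query_rds_for_catalog helper are all assumptions for illustration, not part of the question):

```python
import json
import redis  # redis-py client, pointed at an ElastiCache for Redis endpoint

cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def query_rds_for_catalog(category: str) -> list:
    # Placeholder for the real RDS for MySQL query used by the backend tier.
    return [{"sku": "example", "category": category}]

def get_product_catalog(category: str) -> list:
    """Cache-aside: serve the dataset from Redis when present, otherwise
    read it from the database and cache it with a short TTL."""
    key = f"catalog:{category}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = query_rds_for_catalog(category)
    cache.setex(key, 300, json.dumps(rows))  # cache the result for 5 minutes
    return rows
```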

QUESTION 143
A company's application hosted on Amazon EC2 instances needs to access an Amazon S3
bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions
architect configure access?
A. Create a private hosted zone using Amazon Route 53.
B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.
C. Configure AWS PrivateLink between the EC2 instance and the S3 bucket.
D. Set up a site-to-site VPN connection between the VPC and the S3 bucket.
托管在Amazon EC2实例上的公司的应用程序需要访问Amazon S3存储桶。由于数据敏感性,流量无法穿越Internet。解决方案架构师应如何配置访问权限? 
A.使用Amazon Route 53创建一个私有托管区域。B.在VPC中为Amazon S3配置VPC网关终端节点。 C.在EC2实例和S3存储桶之间配置AWS PrivateLink。 D.在VPC和S3存储桶之间建立站点到站点VPN连接。

Answer: B
QUESTION 144
An application runs on Amazon EC2 instances in private subnets. The application needs to
access an Amazon DynamoDB table.
What is the MOST secure way to access the table while ensuring that the traffic does not leave
the AWS network?
A. Use a VPC endpoint for DynamoDB.
B. Use a NAT gateway in a public subnet.
C. Use a NAT instance in a private subnet.
D. Use the internet gateway attached to the VPC.
Answer: A
应用程序在专用子网中的Amazon EC2实例上运行。 该应用程序需要
访问Amazon DynamoDB表。
在确保流量不离开的同时访问表的最安全的方法是什么
AWS网络?
A.将VPC端点用于DynamoDB。
B.在公共子网中使用NAT网关。
C.在专用子网中使用NAT实例。
D.使用连接到VPC的Internet网关。

Explanation: VPC endpoints: An interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. AWS PrivateLink access over Inter-Region VPC Peering: applications in an AWS VPC can securely access AWS PrivateLink endpoints across AWS Regions using Inter-Region VPC Peering. AWS PrivateLink allows you to privately access services hosted on AWS in a highly available and scalable manner, without using public IPs, and without requiring the traffic to traverse the internet.

Customers can privately connect to a service even if the service endpoint resides in a different AWS Region. Traffic using Inter-Region VPC Peering stays on the global AWS backbone and never traverses the public internet.

A gateway endpoint is a gateway that is a target for a specified route in your route table, used for traffic destined to a supported AWS service. An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink.

References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html

QUESTION 145
A solutions architect needs to design a low-latency solution for a static single-page application
accessed by users utilizing a custom domain name. The solution must be serverless, encrypted
in transit, and cost-effective,
Which combination of AWS services and features should the solutions architect use? (Select
TWO.)
A. Amazon S3
B. Amazon EC2
C. AWS Fargate
D. Amazon CloudFront
E. Elastic Load Balancer
Answer: AD

解决方案架构师需要为静态单页应用程序设计低延迟解决方案
由使用自定义域名的用户访问。 解决方案必须是无服务器的,加密的
在运输过程中且具有成本效益,
解决方案架构师应使用哪种AWS服务和功能组合? (选择
二。)
A.亚马逊S3
B.亚马逊EC2
C.AWS Fargate
D.Amazon CloudFront
E.弹性负载平衡器
QUESTION 146
A company has global users accessing an application deployed in different AWS Regions,
exposing public static IP addresses. The users are experiencing poor performance when
accessing the application over the internet.
What should a solutions architect recommend to reduce internet latency?
A. Set up AWS Global Accelerator and add endpoints.
B. Set up AWS Direct Connect locations in multiple Regions.
C. Set up an Amazon CloudFront distribution to access an application.
D. Set up an Amazon Route 53 geoproximity routing policy to route traffic.
Answer: A
公司有全球用户访问部署在不同AWS区域中的应用程序,
公开公共静态IP地址。 用户在以下情况下表现不佳
通过互联网访问应用程序。
解决方案架构师应建议什么以减少Internet延迟?
A.设置AWS Global Accelerator并添加终端节点。
B.在多个区域中设置AWS Direct Connect位置。
C.设置一个Amazon CloudFront发行版以访问应用程序。
D.设置Amazon Route 53地理接近路由策略以路由流量。

Explanation: AWS Global Accelerator is a service in which you create accelerators to improve availability and performance of your applications for local and global users. Global Accelerator directs traffic to optimal endpoints over the AWS global network. This improves the availability and performance of your internet applications that are used by a global audience. Global Accelerator is a global service that supports endpoints in multiple AWS Regions, which are listed in the AWS Region Table. By default, Global Accelerator provides you with two static IP addresses that you associate with your accelerator. (Or, instead of using the IP addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator.)

The static IP addresses are anycast from the AWS edge network and distribute incoming application traffic across multiple endpoint resources in multiple AWS Regions, which increases the availability of your applications. Endpoints can be Network Load Balancers, Application Load Balancers, EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions. CORRECT: “Set up AWS Global Accelerator and add endpoints” is the correct answer. INCORRECT: “Set up AWS Direct Connect locations in multiple Regions” is incorrect as this is used to connect from an on-premises data center to AWS. It does not improve performance for users who are not connected to the on-premises data center. INCORRECT: “Set up an Amazon CloudFront distribution to access an application” is incorrect as CloudFront cannot expose static public IP addresses. INCORRECT: “Set up an Amazon Route 53 geoproximity routing policy to route traffic” is incorrect as this does not reduce internet latency as well as using Global Accelerator. GA will direct users to the closest edge location and then use the AWS global network. References: https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-global-accelerator/

QUESTION 147
An application requires a development environment (DEV) and production environment (PROD)
for several years. The DEV instances will run for 10 hours each day during normal business
hours, while the PROD instances will run 24 hours each day. A solutions architect needs to
determine a compute instance purchase strategy to minimize costs.
Which solution is the MOST cost-effective?
A. DEV with Spot Instances and PROD with On-Demand Instances
B. DEV with On-Demand Instances and PROD with Spot Instances
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
D. DEV with On-Demand Instances and PROD with Scheduled Reserved Instances
Answer: C
应用程序需要开发环境(DÊV)和生产环境(PROD)
几年来。 在正常业务期间,DEV实例每天将运行10个小时
小时,而PROD实例每天24小时运行,那么解决方案架构师需要
确定计算实例购买策略以最小化成本。
哪种解决方案最有效?
A.具有竞价型实例的DEV和具有按需实例的PROD
B.带按需实例的DEV和带竞价实例的PROD
C.具有预定保留实例的DEV和具有保留实例的PROD
D.具有按需实例的DEV和具有计划的预留实例的PROD
QUESTION 148
A solutions architect is designing a customer-facing application. The application is expected to
have a variable amount of reads and writes depending on the time of year and clearly defined
access patterns throughout the year. Management requires that database auditing and scaling be
managed in the AWS Cloud. The Recovery Point Objective (RPO) must be less than 5 hours.
Which solutions can accomplish this? (Select TWO.)
A. Use Amazon DynamoDB with auto scaling,
Use on-demand backups and AWS CloudTrail.
B. Use Amazon DynamoDB with auto scaling,
Use on-demand backups and Amazon DynamoDB Streams.
C. Use Amazon Redshift Configure concurrency scaling.
Enable audit logging.
Perform database snapshots every 4 hours.
D. Use Amazon RDS with Provisioned IOPS.
Enable the database auditing parameter.
Perform database snapshots every 5 hours.
E. Use Amazon RDS with auto scaling.
Enable the database auditing parameter.
Configure the backup retention period to at least 1 day.
Answer: AE
解决方案架构师正在设计面向客户的应用程序。 该应用程序有望
根据一年中的不同时间具有不同的读写次数,并且定义明确
全年的访问方式。 管理部门要求对数据库进行审核和扩展
在AWS云中进行管理。 恢复点目标(RPO)必须少于5小时。
哪一个。 解决方案可以做到这一点? (选择两个。)
A.使用Amazon DynamoDB自动缩放功能,
使用按需备份和AWS CloudTrail。
B.将Amazon DynamoDB与自动缩放一起使用,
使用按需备份和Amazon DynamoDB流。
C.使用Amazon Redshift配置并发扩展。
启用审核日志记录。
每4小时执行一次数据库快照。
D.将Amazon RDS与预置的lOPS一起使用。
启用数据库审核参数。
每5小时执行一次数据库快照。
E.将Amazon RDS与自动缩放一起使用。
启用数据库审核参数。
将备份保留期配置为至少1天。
答:AE

Explanation: A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail. CORRECT - Scalable, with backup and AWS Managed Auditing B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams. INCORRECT - AWS DDB Streams can be used for auditing, but its not AWS managed auditing. C. Use Amazon Redshift Configure concurrency scaling. Enable audit logging. Perform database snapshots every 4 hours. INCORRECT - Not a database. Datalake

D. Use Amazon RDS with Provisioned IOPS. Enable the database auditing parameter. Perform database snapshots every 5 hours. INCORRECT - This does not scale, and snapshots every 5 hours cannot meet an RPO of less than 5 hours. E. Use Amazon RDS with auto scaling. Enable the database auditing parameter. Configure the backup retention period to at least 1 day. CORRECT - Scalable, with AWS-managed auditing and backups. The backup frequency is not stated, but there is no technical limitation preventing an RPO of less than 5 hours (1 day is the retention period of the backups, not their frequency).
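补充一个与选项 E 对应的最小 boto3 示意(实例标识符为假设值):把 RDS 实例的自动备份保留期设为至少 1 天:

```python
import boto3

rds = boto3.client("rds")

# 假设的实例标识符,仅作演示
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",   # 假设值
    BackupRetentionPeriod=1,          # 自动备份至少保留 1 天
    ApplyImmediately=True,            # 立即应用,不等维护窗口
)
```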

QUESTION 149
A company hosts its website on Amazon S3. The website serves petabytes of outbound traffic monthly, which accounts for most of the company's AWS costs.
What should a solutions architect do to reduce costs?
A. Configure Amazon CloudFront with the existing website as the origin.
B. Move the website to Amazon EC2 with Amazon EBS volumes for storage.
C. Use AWS Global Accelerator and specify the existing website as the endpoint.
D. Rearchitect the website to run on a combination of Amazon API Gateway and AWS Lambda.
Answer: A
一家公司在Amazon S3上托管其网站。该网站每月产生PB级的出站流量,占了公司大部分的AWS成本。
解决方案架构师应该怎么做才能降低成本?
A. 使用现有网站作为源来配置Amazon CloudFront。
B. 将网站移至Amazon EC2,并使用Amazon EBS卷进行存储。
C. 使用AWS Global Accelerator并将现有网站指定为终端节点。
D. 重新构建网站,使其运行在Amazon API Gateway和AWS Lambda的组合上。
Explanation:
A textbook case for CloudFront. The data transfer cost through CloudFront is lower than serving directly from S3. With heavy read operations of static content, it's more economical to add CloudFront in front of your S3 bucket.
QUESTION 150
A solution architect has created two IAM policies: Policy1 and Policy2. Both policies are attached
to an IAM group.

Policy1
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:Get*",
                "iam:List*",
                "kms:List*",
                "ec2:*",
                "ds:*",
                "logs:Get*",
                "logs:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
Policy2
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ds:Delete*",
            "Resource": "*"
        }
    ]
}
A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?

A. Deleting IAM users
B. Deleting directories
C. Deleting Amazon EC2 instances
D. Deleting logs from Amazon CloudWatch Logs
Answer: C(Policy1授予ec2:*,因此可以删除EC2实例;ds:Delete*被显式拒绝,而IAM、KMS和CloudWatch Logs只授予了读取/列出权限。)
QUESTION 151
A solutions architect is helping a developer design a new ecommerce shopping cart application
using AWS services. The developer is unsure of the current database schema and expects to
make changes as the ecommerce site grows. The solution needs to be highly resilient and
capable of automatically scaling read and write capacity.
Which database solution meets these requirements?
A. Amazon Aurora PostgreSQL
B. Amazon DynamoDB with on-demand enabled
C. Amazon DynamoDB with DynamoDB Streams enabled
D. Amazon SQS and Amazon Aurora PostgreSQL
Answer: B
解决方案架构师正在帮助开发人员设计新的电子商务购物车应用程序
使用AWS服务。 开发人员不确定当前的数据库架构,并希望
随着电子商务网站的发展进行更改。 解决方案必须具有高度的弹性和
能够自动扩展读写容量。
哪个数据库解决方案满足这些要求?
A.Amazon Aurora PostgreSQL
B.启用按需的Amazon DynamoDB
C.启用了DynamoDB流的Amazon DynamoDB
D.Amazon SQS和Amazon Aurora PostgreSQL

Explanation: hts://aws. amazon.com/pt/about-aws/whats-nw2181 demand/

QUESTION 152
A solutions architect is designing an architecture for a new application that requires low network
latency and high network throughput between Amazon EC2 instances. Which component should
be included in the architectural design?
A. An Auto Scaling group with Spot Instance types.
B. A placement group using a cluster placement strategy.
C. A placement group using a partition placement strategy.
D. An Auto Scaling group with On-Demand instance types.
解决方案架构师正在为一个新应用程序设计架构,该应用程序要求Amazon EC2实例之间具有低网络延迟和高网络吞吐量。架构设计中应包含哪个组件?
A.具有竞价型实例类型的Auto Scaling组。
B.使用集群放置策略的放置组。
C.使用分区放置策略的放置组。
D.具有按需实例类型的Auto Scaling组。
Answer: B
QUESTION 153
A company has a web application with sporadic usage patterns. There is heavy usage at the
beginning of each month, moderate usage at the start of each week, and unpredictable usage
during the week. The application consists of a web server and a MySQL database server running
inside the data center. The company would like to move the application to the AWS Cloud, and
needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. MySQL-compatible Amazon Aurora Serverless
D. MySQL deployed on Amazon EC2 in an Auto Scaling group
Answer: C
一家公司的Web应用程序具有零星的使用模式:每个月初使用量很大,每周初使用量适中,而一周内的使用量不可预测。该应用程序由运行在数据中心内的Web服务器和MySQL数据库服务器组成。该公司希望将应用程序迁移到AWS云,并需要选择一个无需修改数据库、具有成本效益的数据库平台。
哪种解决方案可以满足这些要求?
A.Amazon DynamoDB
B.适用于MySQL的Amazon RDS
C.与MySQL兼容的Amazon Aurora Serverless
D.在Auto Scaling组中部署在Amazon EC2上的MySQL

来自AWS Aurora Serverless:“它使您可以在云中运行数据库而无需管理任何数据库实例。这是一种简单,具有成本效益的选项,适用于偶发性,间歇性或不可预测的工作负载。”

该问题明确指出这是零星的。确实,这是可以预见的,因为我们确实知道什么时候流量很低并且什么时候会增加。但是,我认为无服务器解决方案可以向外扩展,并且仅在需要时才可以->节省成本,从而可以更好地应对此类工作负载。

一周内无法预测的使用情况-> Aurora

QUESTION 154
A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2
instances behind an Application Load Balancer and a relational database. The database should
be highly available and fault tolerant.
Which database implementations will meet these requirements? (Select TWO.)
A. Amazon Redshift
B. Amazon DynamoDB
C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ
E. Amazon RDS for SQL Server Standard Edition Multi-AZ
Answer: DE

解决方案架构师正在设计任务关键型Web应用程序。它将由Application Load Balancer后面的Amazon EC2实例和一个关系数据库组成。该数据库应具有高可用性和容错能力。
哪些数据库实现将满足这些要求? (选择两个。)
A.亚马逊Redshift
B.亚马逊DynamoDB
C.MySQL的Amazon RDS
D.与MySQL兼容的Amazon Aurora Multi-AZ
E.适用于SQL Server Standard Edition的Amazon RDS多可用区部署
QUESTION 155
A media company is evaluating the possibility of moving its systems to the AWS Cloud. The
company needs at least 10 TB of storage with the maximum possible I/O performance for video
processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to
meet requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?
A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3
Glacier for archival storage
B. Amazon EBS for maximum performance. Amazon EFS for durable data storage, and Amazon S3
Glacier for archival storage
C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage,
and Amazon S3 for archival storage
D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and
Amazon S3 Glacier for archival storage
Answer: D
一家媒体公司正在评估将其系统迁移到AWS云的可能性。该公司需要:至少10 TB、具备尽可能高I/O性能的存储用于视频处理;300 TB非常持久的存储用于存放媒体内容;以及900 TB的存储用于归档不再使用的媒体。
解决方案架构师应推荐哪组服务来满足这些要求?
A. Amazon EBS用于最高性能,Amazon S3用于持久数据存储,Amazon S3 Glacier用于归档存储
B. Amazon EBS用于最高性能,Amazon EFS用于持久数据存储,Amazon S3 Glacier用于归档存储
C. Amazon EC2实例存储用于最高性能,Amazon EFS用于持久数据存储,Amazon S3用于归档存储
D. Amazon EC2实例存储用于最高性能,Amazon S3用于持久数据存储,Amazon S3 Glacier用于归档存储
QUESTION 156
A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB
storage space. The application is used infrequently, with peaks during mornings and evenings.
Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned
about costs and has asked a solutions architect to recommend the most cost-effective storage
option that does not sacrifice performance.
Which solution should the solutions architect recommend?
A. Amazon EBS Cold HDD (sc1)
B. Amazon EBS General Purpose SSD (gp2)
C. Amazon EBS Provisioned IOPS SSD (io1)
D. Amazon EBS Throughput Optimized HDD (st1)
Answer: B

一家公司在Amazon EC2实例上托管一个应用程序,该应用程序最多需要200 GB的存储空间。该应用程序使用频率不高,高峰出现在早晨和晚上。磁盘I/O有所波动,峰值可达3,000 IOPS。公司的首席财务官担心成本,要求解决方案架构师推荐在不牺牲性能的前提下最具成本效益的存储选项。
解决方案架构师应建议哪种解决方案?
A. Amazon EBS Cold HDD(sc1)
B. Amazon EBS通用型SSD(gp2)
C. Amazon EBS预置IOPS SSD(io1)
D. Amazon EBS吞吐量优化型HDD(st1)

Explanation: General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB. In this case the volume would have a baseline performance of 3 x 200 = 600 IOPS. The volume could also burst to 3,000 IOPS for extended periods. As the I/O varies, this should be suitable. CORRECT: “Amazon EBS General Purpose SSD (gp2)” is the correct answer. INCORRECT: “Amazon EBS Provisioned IOPS SSD (io1)” is incorrect as this would be a more expensive option and is not required for the performance characteristics of this workload. INCORRECT: “Amazon EBS Cold HDD (sc1)” is incorrect as there is no IOPS SLA for HDD volumes and they would likely not perform well enough for this workload.

INCORRECT: “Amazon EBS Throughput Optimized HDD (st1)” is incorrect as there is no IOPS SLA for HDD volumes and they would likely not perform well enough for this workload. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
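按上面的说明做个小演算(一个极简示意,并非官方工具):gp2 的基准 IOPS = 3 × 卷大小(GiB),并受 100 IOPS 下限和 16,000 IOPS 上限约束:

```python
def gp2_baseline_iops(volume_gib: int) -> int:
    """gp2 基准 IOPS:每 GiB 3 IOPS,最低 100,最高 16,000。"""
    return min(max(3 * volume_gib, 100), 16000)

# 题目中的 200 GiB 卷:基准 600 IOPS,还可突发到 3,000 IOPS,足以覆盖峰值
print(gp2_baseline_iops(200))   # 600
```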

QUESTION 157
A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?
A. Enable public access on an Amazon S3 bucket.
B. Generate a presigned URL to share with the users.
C. Encrypt files using AWS KMS and provide keys to the users.
D. Create and assign IAM roles that will grant GetObject permissions to the users.
Answer: B

一家公司将Amazon S3中的文件交付给某些没有AWS凭证的用户。必须在有限的时间内授予这些用户访问权限。解决方案架构师应如何安全地满足这些要求?
A.在Amazon S3存储桶上启用公共访问。
B.生成一个预先签名的URL与用户共享。
C.使用AWS KMS加密文件并向用户提供密钥。
D.创建并分配IAM角色,这些角色将向用户授予GetObject权限。
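补充一个与选项 B 对应的最小 boto3 示意(桶名和对象键均为假设值):生成一个限时有效的 S3 预签名 URL:

```python
import boto3

s3 = boto3.client("s3")

# 生成一个 1 小时后过期的下载链接;无需为外部用户创建任何 AWS 凭证
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-reports", "Key": "statement.pdf"},  # 假设值
    ExpiresIn=3600,
)
print(url)
```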
QUESTION 158
A leasing company generates and emails PDF statements every month for all its customers.
Each statement is about 400 KB in size. Customers can download their statements from the
website for up to 30 days from when the statements were generated. At the end of their 3-year
lease, the customers are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?
A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to
move the statements to Amazon S3 Glacier storage after 1 day.
B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to
move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.
D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage
class.
Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
Answer: D
一家租赁公司每月为其所有客户生成并通过电子邮件发送PDF报表。每个语句的大小约为400 KB。
客户可以在生成报表之日起30天内从网站下载其报表。在3年租期结束时,会通过电子邮件向客户发送包含所有对帐单的ZIP文件。对于这种情况,最有成本效益的存储解决方案是什么?
A. 使用Amazon S3 Standard存储类存储对帐单。创建生命周期策略,在1天后将对帐单移至Amazon S3 Glacier存储。
B. 使用Amazon S3 Glacier存储类存储对帐单。创建生命周期策略,在30天后将对帐单移至Amazon S3 Glacier Deep Archive存储。
C. 使用Amazon S3 Standard存储类存储对帐单。创建生命周期策略,在30天后将对帐单移至Amazon S3 One Zone-IA(单区-不频繁访问)存储。
D. 使用Amazon S3 Standard-Infrequent Access(S3 Standard-IA)存储类存储对帐单。创建生命周期策略,在30天后将对帐单移至Amazon S3 Glacier存储。

S3-IA的每个文件大小至少为128kb,存储时间至少为30天。深度存档是最便宜的存档,也是最适合的存档,因为3年后将可以检索文件。
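补充一个与选项 D 对应的最小 boto3 示意(桶名与前缀均为假设值):对象上传时指定 StorageClass="STANDARD_IA",再用生命周期规则在 30 天后转入 Glacier:

```python
import boto3

s3 = boto3.client("s3")

# 假设的存储桶名,仅作演示
s3.put_bucket_lifecycle_configuration(
    Bucket="example-statements",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "statements-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "statements/"},   # 假设的对象前缀
                # 30 天(客户可下载期)之后转入 Glacier
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```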

QUESTION 159
A solutions architect is moving the static content from a public website hosted on Amazon EC2
instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the
static assets. The security group used by the EC2 instances restricts access to a limited set of IP
ranges. Access to the static content should be similarly restricted.
Which combination of steps will meet these requirements? (Select TWO.)
A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

C. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution.
D. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.
E. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.
Answer: AB

解决方案架构师正在从Amazon EC2上托管的公共网站转移静态内容
实例到Amazon S3存储桶。 Amazon CloudFront发行版将用于交付
静态资产。 EC2实例使用的安全组将访问限制为一组有限的IP
范围。同样,对静态内容的访问也应受到限制。
哪些步骤组合可以满足这些要求? (选择两个。)
A.创建一个原始访问身份(OAI)并将其与分发关联。改变
存储桶策略中的权限,以便只有OAI可以读取对象。
B.创建一个包含与EC2中相同的IP限制的AWS WAF Web ACL
安全组。将此新的Web ACL与CloudFront分配关联。
C.创建一个新的安全组,其中包括与当前EC2中相同的IP限制
安全组,将此新安全组与CloudFront分配关联。
D.创建一个新的安全组,其中包括与当前EC2中相同的IP限制
安全组。将此新安全组与托管静态内容的S3存储桶相关联。
E.创建一个新的IAM角色并将该角色与分发相关联,或者更改权限
在S3存储桶或S3存储桶中的文件上,以便只有新创建的IAM角色具有
阅读和下载权限。

使用签名的网址或Cookie -限制对Amazon S3存储桶中内容的访问=> A -使用AWS WAF Web ACL => B -使用地理限制

Explanation: C & D - Security groups cannot be attached to a CloudFront distribution or an S3 bucket; they only apply to resources inside a VPC such as EC2 instances. E - An IAM role cannot be associated with a CloudFront distribution; an origin access identity is the mechanism for restricting bucket access to CloudFront.
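补充一个与选项 A 对应的最小 boto3 示意(桶名与 OAI ID 均为假设值):把桶策略收紧为只允许 CloudFront OAI 读取对象:

```python
import boto3, json

s3 = boto3.client("s3")

bucket = "example-static-assets"   # 假设值
oai_id = "E1EXAMPLEOAI"            # 假设的 OAI ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# 只有该 OAI(即 CloudFront 分配)能读取对象,直接访问 S3 会被拒绝
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```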

QUESTION 160
A company has a large Microsoft SharePoint deployment running on-premises that requires
Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS
Cloud and is considering various storage options. The storage solution must be highly available
and integrated with Active Directory for access control.
Which solution will satisfy these requirements?
A. Configure Amazon EFS storage and set the Active Directory domain for authentication.
B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Answer: D
公司有一个大型Microsoft SharePoint部署在本地运行,需要Microsoft Windows共享文件存储。 该公司希望将此工作负载迁移到AWS
云,正在考虑各种存储选项。 存储解决方案必须具有高可用性,并且必须与Active Directory集成在一起才能进行访问控制。
哪种解决方案可以满足这些要求?
A.配置Amazon EFS存储并设置Active Directory域以进行身份验证。
B.在两个可用区中的AWS Storage Gateway文件网关上创建SMB文件共享。
C. 创建一个Amazon S3存储桶,并配置Microsoft Windows Server将其作为卷挂载。
D. 在AWS上创建Amazon FSx for Windows File Server文件系统,并设置用于身份验证的Active Directory域。

Explanation Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration, It offers single-AZ and multi- AZ deployment options, fully managed backups, and encryption of data at rest and in transit. You can optimize cost and performance for your workload needs with SSD and HDD storage options; and you can scale storage and change the throughput performance of your file system at any time. Amazon FSx file storage is accessible from Windows, Linux, and MacOS compute instances and devices running on AWS or on premises. Works with Microsoft Active Directory (AD) to easily integrate file systems with Windows environments.

CORRECT: “Amazon FSx” is the correct answer. INCORRECT: “Amazon EFS” is incorrect as EFS only supports Linux systems. INCORRECT: “Amazon S3” is incorrect as this is not a suitable replacement for a Microsoft filesystem. INCORRECT: “AWS Storage Gateway” is incorrect as this service is primarily used for connecting on-premises storage to cloud storage. It consists of a software appliance installed on-premises and can be used with SMB shares, but it actually stores the data on S3 and is mainly used for migration. In this case the company needs to replace the file share, and Amazon FSx is the best choice for this job. References: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-fsx/

QUESTION 161
A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a
hierarchical directory structure. The applications need to rapidly and concurrently read and write
to shared storage. How can this be achieved?
A. Create an Amazon EFS file system and mount it from each EC2 instance.
B. Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC.
C. Create a file system on an Amazon EBS Provisioned IOPS SSD (io1) volume. Attach the volume
to all the EC2 instances.
D. Create file systems on Amazon EBS volumes attached to each EC2 instance. Synchronize the
Amazon EBS volumes across the different EC2 instances.
Answer: A
一家公司在VPC中使用使用分层目录结构的应用程序运行多个Amazon EC2 Linux实例。应用程序需要快速并发地对共享存储进行读写操作如何实现? 
A. 创建一个Amazon EFS文件系统,并从每个EC2实例挂载它。
B. 创建一个Amazon S3存储桶,并允许VPC中的所有EC2实例访问。
C. 在Amazon EBS预置IOPS SSD(io1)卷上创建文件系统,并将该卷附加到所有EC2实例。
D. 在附加到每个EC2实例的Amazon EBS卷上创建文件系统,并在不同的EC2实例之间同步这些Amazon EBS卷。
QUESTION 162
A company runs an application using Amazon ECS. The application creates resized versions of
an original image and then makes Amazon S3 API calls to store the resized images in Amazon
S3. How can a solutions architect ensure that the application has permission to access Amazon
S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch
the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task
definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the
launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the
ECS cluster while logged in as this account.
Answer: B
一家公司使用Amazon ECS运行应用程序。该应用程序创建原始图像的调整大小版本,然后进行Amazon S3 API调用以将调整大小的图像存储在Amazon S3中。
解决方案架构师如何确保应用程序有权访问Amazon S3? 
A. 更新AWS IAM中的S3角色以允许从Amazon ECS进行读/写访问,然后重新启动该容器。
B. 创建一个具有S3权限的IAM角色,然后在任务定义中将该角色指定为taskRoleArn。
C. 创建一个允许从Amazon ECS访问Amazon S3的安全组,并更新ECS集群使用的启动配置。
D. 创建具有S3权限的IAM用户,然后以该帐户身份登录并重新启动ECS集群的Amazon EC2实例。
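补充一个与选项 B 对应的最小 boto3 示意(角色 ARN、镜像名等均为假设值):在任务定义中通过 taskRoleArn 赋予容器访问 S3 的权限:

```python
import boto3

ecs = boto3.client("ecs")

# 假设该角色已附加了允许访问目标 S3 桶的策略
ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn="arn:aws:iam::123456789012:role/ImageResizerS3Role",  # 假设值
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",  # 假设值
            "memory": 512,
        }
    ],
)
```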
QUESTION 163
A solutions architect has configured the following IAM policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "lambda:CreateFunction",
                "lambda:DeleteFunction"
            ],
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "220.100.16.0/20"
                }
            }
        }
    ]
}
Which action will be allowed by the policy?
A. An AWS Lambda function can be deleted from any network.
B. An AWS Lambda function can be created from any network.
C. An AWS Lambda function can be deleted from the 100.220.0.0/20 network.
D. An AWS Lambda function can be deleted from the 220.100.16.0/20 network
Answer: C(lambda:*允许所有Lambda操作;显式Deny只在源IP属于220.100.16.0/20时才拒绝CreateFunction/DeleteFunction,因此从100.220.0.0/20等其他网络仍可以删除函数。)
QUESTION 164
A website runs a web application that receives a burst of traffic each day at noon. The users
upload new pictures and content daily, but have been complaining of timeouts. The architecture
uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute
to initiate upon boot up before responding to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration,
B. Configure AWS ElastiCache for Redis to offload direct requests to the servers.
C. Configure an Auto Scaling step scaling policy with an instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Answer: C
一个网站运行一个Web应用程序,该应用程序每天中午都会收到突发流量。用户每天上传新图片和新内容,但一直抱怨超时。
该架构使用Amazon EC2 Auto Scaling组,而自定义应用程序在启动后需要1分钟的初始化时间才能开始响应用户请求。解决方案架构师应如何重新设计架构,以更好地响应流量变化?
A. 使用慢启动配置来配置网络负载均衡器。
B. 配置Amazon ElastiCache for Redis,为服务器分担直接请求。
C. 为Auto Scaling步进扩展策略配置实例预热条件。
D. 将Amazon CloudFront配置为使用应用程序负载均衡器作为源。

If you are creating a step policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an instance is not counted toward the aggregated metrics of the Auto Scaling group.

如果要创建步骤策略,则可以指定新启动的实例进行预热所用的秒数。 在指定的预热时间到期之前,不会将实例计入Auto Scaling组的聚合指标。

C在自动缩放中添加了预热条件。
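补充一个与选项 C 对应的最小 boto3 示意(组名、步进调整等均为假设值):创建带实例预热时间的步进扩展策略:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",      # 假设值
    PolicyName="scale-out-on-cpu",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,          # 应用启动约需 1 分钟,预热期内不计入聚合指标
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2},
    ],
)
```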

QUESTION 165
A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user experience. How can a solutions architect meet this requirement?
A. Use step scaling.
B. Use simple scaling.
C. Use lifecycle hooks.
D. Use scheduled scaling.
Answer: D

一家公司在两个可用区的Amazon EC2实例上运行一个网站。该公司预计特定节假日的流量会激增,并希望提供一致的用户体验。解决方案架构师如何满足此要求?
A. 使用步进扩展。
B. 使用简单扩展。
C. 使用生命周期挂钩。
D. 使用计划扩展。
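补充一个与选项 D 对应的最小 boto3 示意(组名和时间均为假设值):为已知的节假日高峰提前安排容量:

```python
import boto3
from datetime import datetime

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",          # 假设值
    ScheduledActionName="holiday-peak",
    StartTime=datetime(2021, 12, 24, 0, 0),  # 假设的节假日开始时间(UTC)
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```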


QUESTION 166
A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only.
Which configuration will meet this requirement?
A. Configure the security group for the EC2 instances.
B. Configure the security group on the Application Load Balancer.
C. Configure AWS WAF on the Application Load Balancer in a VPC.
D. Configure the network ACL for the subnet that contains the EC2 instances.
公司的Web应用程序在Application Load Balancer后面的Amazon EC2实例上运行。该公司最近更改了政策,现在要求只能从一个特定的国家/地区访问该应用程序。哪种配置可以满足此要求?
A. 为EC2实例配置安全组。
B. 在应用程序负载均衡器上配置安全组。
C. 在VPC中的Application Load Balancer上配置AWS WAF。
D. 为包含EC2实例的子网配置网络ACL。
Answer: C
QUESTION 167
A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company's current network connection allows up to 100 Mbps uploads for this purpose during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
A. Use AWS Snowmobile to ship the data to AWS.
B. Order multiple AWS Snowball devices to ship the data to AWS.
C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
Answer: B
一家公司在本地存储了150 TB的存档图像数据,需要在下个月内将其迁移到AWS云中。该公司当前的网络连接仅在夜间允许最高100 Mbps的上传用于此目的。什么是移动这些数据并按期完成迁移的最具成本效益的机制?
A. 使用AWS Snowmobile将数据运送到AWS。
B. 订购多个AWS Snowball设备以将数据发送到AWS。
C. 启用Amazon S3 Transfer Acceleration并安全地上传数据。
D. 创建一个Amazon S3 VPC终端节点并建立VPN以上传数据。

几台Snowball设备(每台80 TB)应该能够轻松移动150 TB。因此答案应该是B。

QUESTION 168
A three-tier web application processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled from the web tier using Amazon SQS, and an Amazon DynamoDB backend. At peak times, customers who submit orders using the site have to wait much longer than normal to receive confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times. Which action will be MOST effective in accomplishing this?
A. Replace the SQS queue with Amazon Kinesis Data Firehose.
B. Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier.
C. Add an Amazon CloudFront distribution to cache the responses for the web tier.
D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.
Answer: D
三层Web应用程序处理来自客户的订单。Web层由位于Application Load Balancer后面的Amazon EC2实例组成;中间层由三个EC2实例组成,通过Amazon SQS与Web层解耦;后端是Amazon DynamoDB。在高峰时段,由于处理时间很长,使用该网站提交订单的客户必须比平时等待更长的时间才能收到确认。解决方案架构师需要缩短这些处理时间。哪种做法最有效?
A. 用Amazon Kinesis Data Firehose替换SQS队列。
B. 在DynamoDB后端层前面使用Amazon ElastiCache for Redis。
C. 添加Amazon CloudFront分配以缓存Web层的响应。
D. 使用Amazon EC2 Auto Scaling根据SQS队列深度扩展中间层实例。

为了解决“冗长的处理时间”,添加更多EC2实例。
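补充一个与选项 D 对应的最小 boto3 示意(队列名、阈值和扩容策略 ARN 均为假设值):用 SQS 积压消息数触发中间层的扩容策略:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# 假设为之前 put_scaling_policy 返回的中间层扩容策略 ARN
scaling_policy_arn = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:..."  # 假设值

cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders"}],  # 假设的队列名
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,                        # 积压超过 100 条消息就触发扩容
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scaling_policy_arn],
)
```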

QUESTION 169
A company wants to host a web application on AWS that will communicate to a database within a
VPC.
The application should be highly available.
What should a solutions architect recommend?
A. Create two Amazon EC2 instances to host the web servers behind a load balancer, and then
deploy the database on a large instance.
B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web
servers, and then deploy Amazon RDS in multiple Availability Zones.
C. Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and
then deploy the database on an Amazon EC2 instance in the private subnet.
D. Deploy two web servers with an Auto Scaling group, configure a domain that points to the two
web servers, and then deploy a database architecture in multiple Availability Zones.
Answer: B

一家公司希望在AWS上托管一个Web应用程序,该应用程序将与VPC中的数据库进行通信。该应用程序应具有很高的可用性。
解决方案架构师应该建议什么?
A.创建两个Amazon EC2实例以在负载均衡器后面托管Web服务器,然后在大型实例上部署数据库。 
B.在多个可用区中部署负载均衡器,并为Web服务器配置Auto Scaling组,然后在多个可用区中部署Amazon RDS。
C.在具有用于Web服务器的Auto Scaling组的公共子网中部署负载均衡器,然后在专用子网中的Amazon EC2实例上部署数据库。
D.部署具有Auto Scaling组的两个Web服务器,配置指向两个Web服务器的域,然后在多个可用区中部署数据库体系结构
QUESTION 170
A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?
A. Amazon EBS
B. Amazon EC2
C. Amazon FSx
D. Amazon S3
Answer: C

一家公司正在迁移到AWS云,文件服务器是第一个要迁移的工作负载。用户必须能够使用服务器消息块(SMB)协议访问文件共享。哪项AWS托管服务满足这些要求?
A.亚马逊EBS
B.亚马逊EC2
C.Amazon FSx
D.亚马逊S3

Explanation: Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. Amazon FSx is built on Windows Server and provides a rich set of administrative features that include end-user file restore, user quotas, and Access Control Lists (ACLs). Additionally, Amazon FSx for Windows File Server supports Distributed File System Replication (DFSR) in both Single-AZ and Multi-AZ deployments. CORRECT: “Amazon FSx” is the correct answer. INCORRECT: “Amazon EBS” is incorrect. EBS is block storage attached to a single instance and does not present an SMB file share. INCORRECT: “Amazon EC2” is incorrect as it is not a managed file service; you would have to run and manage the file server yourself. INCORRECT: “Amazon S3” is incorrect as it is object storage and not a suitable replacement for an SMB file share. References: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-fsx/

QUESTION 171

A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.
Which method should the solutions architect select?
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code
to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point
to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to
use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to
point to the Redis cache endpoint instead of DynamoDB.
Answer: A

一家公司拥有一个移动聊天应用程序,该应用程序具有基于Amazon DynamoDB的数据存储。用户希望以尽可能小的延迟读取新消息。解决方案架构师需要设计一种需要最少应用程序更改的最佳解决方案。解决方案架构师应选择哪种方法? A.为新消息表配置Amazon DynamoDB Accelerator(DAX)。更新代码以使用DAX端点。 B.添加DynamoDB只读副本以处理增加的读取负载。更新应用程序以指向只读副本的读取端点。 C.将DynamoDB中新消息表的读取容量单位增加一倍。继续使用现有的DynamoDB端点。 D.将Amazon ElastiCache for Redis缓存添加到应用程序堆栈。更新应用程序以指向Redis缓存端点,而不是DynamoDB。

Explanation Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second,

Amazon ElastiCache is incorrect because although you may use ElastiCache as your database cache, it will not reduce the DynamoDB response time from milliseconds to microseconds as compared with DynamoDB DAX. AWS Device Farm is incorrect because this is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. DynamoDB Read Replica is incorrect because this is primarily used to automate capacity management for your tables and global secondary indexes. References: https://aws.amazon.com/dynamodb/dax/ https://aws.amazon.com/device-farm/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/aws-cheat-sheet-amazon-dynamodb/

Amazon DynamoDB Accelerator(DAX)是一种完全托管的,高度可用的内存中缓存,即使在每秒数百万个请求的情况下,它也可以将Amazon DynamoDB响应时间从毫秒减少到微秒, Amazon ElastiCache是不正确的,因为尽管您可以将ElastiCache用作数据库缓存,但与DynamoDB DAX相比,它不会将DynamoDB响应时间从毫秒缩短为微秒。 AWS Device Farm是不正确的,因为这是一项应用程序测试服务,可让您一次在许多设备上测试您的Android,iOS和Web应用程序并与之交互,或实时再现设备上的问题。 DynamoDB只读副本不正确,因为它主要用于自动执行表和全局二级索引的容量管理

QUESTION 172
A company wants to use an AWS Region as a disaster recovery location for its on-premises
infrastructure. The company has 10 TB of existing data, and the on-premise data center has a 1
Gbps internet connection. A solutions architect must find a solution so the company can have its
existing data on AWS in 72 hours without transmitting it using an unencrypted channel.
Which solution should the solutions architect select?
A. Send the initial 10 TB of data to AWS using FTP.
B.Send the initial 10 TB of data to AWS using AWS Snowball.
C. Establish a VPN connection between Amazon VPC and the company's data center.
D. Establish an AWS Direct Connect connection between Amazon VPC and the company's data
center.
Answer: C
一家公司希望将AWS区域用作其本地基础架构的灾难恢复位置。该公司拥有10 TB的现有数据,而内部数据中心具有1 Gbps的互联网连接。
解决方案架构师必须找到一个解决方案,以便公司可以在72小时内将其现有数据存储在AWS上,而无需使用未加密的通道进行传输。解决方案架构师应选择哪种解决方案? 
A.使用FTP将最初的10 TB数据发送到AWS。 B.使用AWS Snowball将最初的10 TB数据发送到AWS。 
C.在Amazon VPC与公司的数据中心之间建立VPN连接。 D.在Amazon VPC与公司数据中心之间建立AWS Direct Connect连接

Direct Connect needs at least a month to set up; Snowball takes about a week.

直接连接至少需要一个月的时间来设置,滚雪球需要一周的时间

Explanation: Keyword: AWS Region as DR for On-premises DC (Existing Data=10TB) + 1G Internet Connection Condition: 10TB on AWS in 72 Hours + Without Unencrypted Channel Without Unencrypted Channel = VPN FTP = Unencrypted Channel Options · A· Out of race, since this is unencrypted channel & not matching the condition Options · B · Out of race due to the timebound target & order /delivering AWS Snowball device will take time Options · C · Win th race, using the existing 1G Internet Link we can transfer this 10TB data within 24Hrs using encrypted Channel Options · D· Out of race due to the timebound target & order /delivering AWS Direct Connect will take time

References: https://docs.aws.amazon.com/snowball/latest/ug/mailing-storage.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-direct-connect/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/ https://tutorialsdojo.com/aws-direct-connect/ https://tutorialsdojo.com/amazon-vpc/

QUESTION 173
A web application runs on Amazon EC2 instances behind an Application Load Balancer. The
application allows users to create custom reports of historical weather data. Generating a report
can take up to 5 minutes. These long-running requests use many of the available incoming
connections, making the system unresponsive to other users.
How can a solutions architect make the system more responsive?

A. Use Amazon SQS with AWS Lambda to generate reports.
B. Increase the idle timeout on the Application Load Balancer to 5 minutes.
C. Update the client-side application code to increase its request timeout to 5 minutes.
D. Publish the reports to Amazon S3 and use Amazon CloudFront for downloading to the user.
Answer: A
Web应用程序在Application Load Balancer后面的Amazon EC2实例上运行。该应用程序允许用户创建历史天气数据的自定义报告。
生成报告最多可能需要5分钟。这些长时间运行的请求使用许多可用的传入连接,从而使系统对其他用户无响应。
解决方案架构师如何使系统更具响应能力? 
A.将Amazon SQS与AWS Lambda一起使用可生成报告。 B.将应用程序负载平衡器上的空闲超时增加到5分钟。 
C.更新客户端应用程序代码以将其请求超时增加到5分钟。 D.将报告发布到Amazon S3并使用Amazon CloudFront下载到用户

Prefer asynchronous calls = SQS; go serverless = Lambda.

QUESTION 174
A company decides to migrate its three-tier web application from on premises to the AWS Cloud.
The new database must be capable of dynamically scaling storage capacity and performing table
joins.
Which AWS service meets these requirements?
A. Amazon Aurora
B. Amazon RDS for SQL Server
C. Amazon DynamoDB Streams
D. Amazon DynamoDB on-demand
Answer: A
一家公司决定将其三层Web应用程序从本地迁移到AWS云。新数据库必须能够动态扩展存储容量并执行表连接(join)。
哪项AWS服务符合这些要求?
A. Amazon Aurora
B. 适用于SQL Server的Amazon RDS
C. Amazon DynamoDB Streams
D. Amazon DynamoDB按需模式

Amazon Aurora的优异性能来源于其区别于传统数据库的系统架构。Amazon Aurora基于分布式共享存储架构,存储和计算分离,提供了即时生效的可扩展能力和运维能力。只将重做日志记录写入存储层,系统可以将网络的IOPS减少一个数据量级,将更多资源用于读/写流量,从而获得大幅性能提升。

再来看可用性。Amazon Aurora能够跨三个可用区6路复制,支持多达15个低延迟读取副本、时间点恢复、持续备份到 Amazon S3,三十秒内便可完成故障转移。故而,任何节点故障、任何可用区故障都不会导致应用程序停机。

为进一步降低用户应用云数据库的复杂度,提升灵活性,Amazon Aurora还提供了 Serverless服务,无需配置实例、按需启动、可自动对容量进行规模伸缩,且按秒计费——用户只需根据实际使用的数据库容量付费,应用Amazon Aurora就像打开“水龙头”那样简单

例如,著名设计软件公司Autodesk将Autodesk Access Control Management (ACM) 应用程序一开始是构建在EC2实例之上,但很快超出了最大可用实例的容量限制。为了能够增强ACM性能,降低复制延迟以实现读取扩展,自动存储扩展并完全兼容MySQL,Autodesk将ACM迁移至Amazon Aurora之上,结果大大超出了预期。

迁移之后,ACM应用程序的扩展性提高了20倍,应用程序的响应时间缩短了2倍,并且 Aurora支持的数据库连接数量增加了7倍。迁移的一大亮点在于,ACM迁移至Amazon Aurora之后,CPU利用率下降了10倍,从使用MySQL时高达100%的峰值水平降至不到10%的水平,为ACM的扩展增长留下了空间。

QUESTION 175
A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down.
How should the company deploy this solution?
A. Use Amazon S3 website hosting for the backup website and a Route 53 failover routing policy.
B. Use Amazon S3 website hosting for the backup website and a Route 53 latency routing policy.
C. Deploy the application in another AWS Region and use ELB health checks for failover routing.
D. Deploy the application in another AWS Region and use server-side redirection on the primary website.
Answer: A
一家公司在ELB应用程序负载均衡器后面的Amazon EC2实例上运行网站,并使用Amazon Route 53作为DNS。该公司想建立一个备份网站,在主站点宕机时向用户展示一条包含电话号码和电子邮件地址的消息。该公司应如何部署此解决方案?
A. 将Amazon S3网站托管用于备份网站,并使用Route 53故障转移路由策略。
B. 将Amazon S3网站托管用于备份网站,并使用Route 53延迟路由策略。
C. 在另一个AWS区域中部署应用程序,并使用ELB运行状况检查进行故障转移路由。
D. 在另一个AWS区域中部署应用程序,并在主网站上使用服务器端重定向。
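补充一个与选项 A 对应的最小 boto3 示意(托管区 ID、域名、健康检查 ID、ALB 信息均为假设值):为主站点创建带健康检查的 PRIMARY 故障转移记录,备份的 S3 静态网站配置为 SECONDARY:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                      # 假设的托管区 ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": "hc-1234",            # 假设的健康检查 ID
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # 假设的 ALB 所在托管区 ID
                "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",  # 假设值
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
# SECONDARY 记录同理,把别名目标指向托管备份页面的 S3 静态网站终端节点即可
```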
QUESTION 176
A company needs to implement a relational database with a multi-Region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute.
Which AWS solution can achieve this?
A. Amazon Aurora Global Database
B. Amazon DynamoDB global tables.
C. Amazon RDS for MySQL with Multi-AZ enabled.
D. Amazon RDS for MySQL with a cross-Region snapshot copy.
Answer: A
一家公司需要实现一个关系数据库,要求多区域灾难恢复的恢复点目标(RPO)为1秒,恢复时间目标(RTO)为1分钟。
哪种AWS解决方案可以实现这一目标?
A. Amazon Aurora全局数据库
B. Amazon DynamoDB全局表
C. 启用了多可用区的Amazon RDS for MySQL
D. 具有跨区域快照副本的Amazon RDS for MySQL

Explanation: Cross-Region Disaster Recovery If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.

跨区域灾难恢复如果您的主要区域性能下降或中断,则可以提升其中一个辅助区域来承担读/写职责。即使发生完全区域性故障,Aurora群集也可以在不到1分钟的时间内恢复。这为您的应用程序提供了1秒的有效恢复点目标(RPO)和不到1分钟的恢复时间目标(RTO),为全球业务连续性计划奠定了坚实的基础。

QUESTION 177
A company running an on-premises application is migrating the application to AWS to increase its
elasticity and availability. The current architecture uses a Microsoft SQL Server database with
heavy read activity. The company wants to explore alternate database options and migrate
database engines, if needed. Every 4 hours, the development team does a full copy of the
production database to populate a test database. During this period, users experience latency.
What should a solution architect recommend as replacement database?
A. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test
database.
B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for
the test database.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby
instance for the test database.
D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore
snapshots from RDS for the test database.
Answer: D
一家运行本地应用程序的公司正在将应用程序迁移到AWS,以提高其弹性和可用性。当前体系结构使用具有大量读取活动的Microsoft SQL Server数据库。
该公司希望探索其他数据库选项,并在需要时迁移数据库引擎。开发团队每隔4个小时对生产数据库进行一次完整复制,以填充测试数据库。
在此期间,用户会遇到延迟。解决方案架构师应该推荐什么作为替代数据库? 
A.将Amazon Aurora与Multi-AZ Aurora副本一起使用,并从mysqldump恢复测试数据库。 
B.将Amazon Aurora与Multi-AZ Aurora副本一起使用,并从Amazon RDS还原测试数据库的快照。 
C. 使用具有多可用区部署和只读副本的Amazon RDS for MySQL,并将备用实例用于测试数据库。
D. 使用具有多可用区部署和只读副本的Amazon RDS for SQL Server,并从RDS快照还原测试数据库。
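补充一个与选项 D 对应的最小 boto3 示意(快照和实例标识符均为假设值):从 RDS 快照恢复出测试库,避免直接复制生产库带来的延迟:

```python
import boto3

rds = boto3.client("rds")

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="sqlserver-test",        # 假设的测试实例名
    DBSnapshotIdentifier="prod-snapshot-latest",  # 假设的快照标识符
    DBInstanceClass="db.m5.large",
)
```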
QUESTION 178
A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys. Where should the key material be stored to meet these requirements?
A. Amazon S3
B. AWS Secrets Manager
C. AWS Systems Manager Parameter store
D. AWS Key Management Service (AWS KMS)
Answer: B
一家公司目前将对称加密密钥存储在硬件安全模块(HSM)中。解决方案架构师必须设计解决方案,以将密钥管理迁移到AWS。解决方案应支持密钥轮换,并支持使用客户提供的密钥。密钥材料应存放在哪里才能满足这些要求?
A. Amazon S3
B. AWS Secrets Manager
C. AWS Systems Manager Parameter Store
D. AWS Key Management Service(AWS KMS)

Explanation: AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. https://aws.amazon.com/secrets-manager/

QUESTION 179
A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?
A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics
on this data in the AWS Cloud.
B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.
C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all
the local storage in the AWS cloud, then perform analytics on this data in the cloud.
Answer: A
一家公司希望运行混合工作负载以进行数据处理。数据需要由本地应用程序访问,以使用NFS协议进行本地数据处理,
并且还必须可从AWS Cloud访问以进行进一步的分析和批处理。哪种解决方案可以满足这些要求? 
A.使用AWS Storage Gateway文件网关为AWS提供文件存储,然后在AWS Cloud中对此数据执行分析。 
B.使用AWS Storage Gateway磁带网关将本地数据的备份复制到AWS,然后在AWS云中对此数据执行分析。 
C. 使用存储卷配置的AWS Storage Gateway卷网关,定期对本地数据做快照,然后将数据复制到AWS。
D. 使用缓存卷配置的AWS Storage Gateway卷网关,将所有本地存储备份到AWS云中,然后在云中对这些数据执行分析。
QUESTION 180
A company must re-evaluate its need for the Amazon EC2 instances it currently has provisioned in an Auto Scaling group. At present, the Auto Scaling group is configured for a minimum of two instances and a maximum of four instances across two Availability Zones. A solutions architect reviewed Amazon CloudWatch metrics and found that CPU utilization is consistently low for the EC2 instances. What should the solutions architect recommend to maximize utilization while ensuring the application remains fault tolerant?
A. Remove some EC2 instances to increase the utilization of remaining instances.
B. Increase the Amazon Elastic Block Store (Amazon EBS) capacity of instances with less CPU
utilization.
C. Modify the Auto Scaling group scaling policy to scale in and out based on a higher CPU utilization
metric.
D. Create a new launch configuration that uses smaller instance types. Update the existing Auto
Scaling group.
Answer: D
公司必须在Auto Scaling组中重新评估其对当前拥有的Amazon EC2实例的需求。当前,Auto Scaling组配置为在两个可用区中最少两个实例,最多四个实例。
解决方案架构师查看了Amazon CloudWatch指标,发现EC2实例的CPU利用率始终较低。解决方案架构师应建议什么,以在确保应用程序保持容错能力的同时最大化利用率?
A.删除一些EC2实例以提高其余实例的利用率。 B.增加CPU利用率较低的实例的Amazon Elastic Block Store(Amazon EBS)容量。
C.修改Auto Scaling组扩展策略,以根据更高的CPU利用率指标进行扩展和扩展。 D.创建一个使用较小实例类型的新启动配置。更新现有的Auto Scaling组

这里的要求是优化现有解决方案。由于CPU利用率一直很低,这意味着实例被“过度配置”(over-provisioned)了:正在运行的实例拥有比实际消耗更多的容量。现在必须找到一种最大化实例利用率的方法。一种方法是引入更多流量或处理更多数据,从而用满实例的CPU;另一种方法是把实例换成容量足以处理负载的更小机型。由于这是ASG,并且ASG使用启动配置,因此可以在启动配置中更改实例类型;但启动配置创建后无法修改,所以这里最好的做法是使用更小的实例类型创建新的启动配置,并更新现有的Auto Scaling组。

QUESTION 181
A company's website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company's website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure, and provide the fastest possible response time. Which combination should a solutions architect recommend to meet these requirements?
A. Amazon CloudFront and Amazon S3
B. AWS Lambda and Amazon DynamoDB
C. Application Load Balancer with Amazon EC2 Auto Scaling
D. Amazon Route 53 with internal Application Load Balances
Answer: A
公司的网站为用户提供可下载的历史绩效报告。该网站需要一种能够扩展以满足公司全球访问需求的解决方案。该解决方案应具有成本效益,尽量减少基础设施的预置,并提供尽可能快的响应时间。解决方案架构师应推荐哪种组合来满足这些要求?
A. Amazon CloudFront和Amazon S3
B. AWS Lambda和Amazon DynamoDB
C. 具有Amazon EC2 Auto Scaling的应用程序负载均衡器
D. 具有内部应用程序负载均衡器的Amazon Route 53
QUESTION 182
A company is developing a real-time multiplayer game that uses UDP for communications between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Answer: B
一家公司正在开发一款实时多人游戏,该游戏使用UDP在客户端与Auto Scaling组中的服务器之间进行通信。预计白天会出现需求高峰,因此游戏服务器平台必须相应地扩展。开发人员希望将玩家分数和其他非关系数据存储在无需人工干预即可扩展的数据库解决方案中。
解决方案架构师应建议哪种解决方案?
A. 使用Amazon Route 53进行流量分配,并使用Amazon Aurora Serverless进行数据存储。
B. 使用网络负载均衡器进行流量分配,并使用Amazon DynamoDB按需模式进行数据存储。
C. 使用网络负载均衡器进行流量分配,并使用Amazon Aurora Global Database进行数据存储。
D. 使用应用程序负载均衡器进行流量分配,并使用Amazon DynamoDB全局表进行数据存储。

QUESTION 183
A company currently has 250 TB of backup files stored in Amazon S3 in a vendor's proprietary
format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion. What should a solutions architect do to accomplish this?
A. Install the conversion software as an Amazon S3 batch operation so the data is transformed
without leaving Amazon S3.
B. Install the conversion software onto an on-premises virtual machines. Perform the transformation
and re-upload the files to Amazon S3 from the virtual machine.
C. Use an AWS Snowball Edge device to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball
devices.
D. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion
software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from
the EC2 instance.
Answer: D(原答案标注为“C D?”,存在争议;下面的分析认为应选D)
一家公司目前拥有以供应商专有格式存储在Amazon S3中的250 TB备份文件。该公司希望使用供应商提供的基于Linux的软件应用程序从Amazon S3检索文件,
将文件转换为行业标准格式,并将它们重新上传到Amazon S3。该公司希望最大程度地减少与此转换相关的数据传输费用。
解决方案架构师应该怎么做才能做到这一点?
A. 将转换软件安装为Amazon S3批处理操作,以便在不离开Amazon S3的情况下转换数据。
B. 将转换软件安装到本地虚拟机上。执行转换并将文件从虚拟机重新上传到Amazon S3。
C. 使用AWS Snowball Edge设备导出数据,并将转换软件安装到设备上。执行数据转换并将文件从Snowball设备重新上传到Amazon S3。
D.在与Amazon S3相同的区域中启动Amazon EC2实例,然后将转换软件安装到该实例上。执行转换并将文件从EC2实例重新上传到Amazon S3。

答案是D。S3与EC2一起使用可处理大型文件。注意250TB。由于数据在S3中,因此您可以将EC2与S3放在同一区域中,因此没有传输成本。另请注意,您需要在EC2上安装供应商提供的软件。因此,D是最佳选择。https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html

QUESTION 184
A company has an Amazon EC2 instance running in a private subnet that needs to access a public website to download patches and updates. The company does not want external websites to see the EC2 instance IP address or initiate connections to it.
How can a solutions architect achieve this objective?
A. Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed.
B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway.
C. Create a network ACL for the private subnet where the EC2 instance is deployed that only allows access from the IP address range of the public website.
D. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.

公司的Amazon EC2实例在私有子网上运行,需要访问
公共网站下载补丁程序和更新。 该公司不希望外部网站
查看EC2实例IP地址或启动与之的连接。
解决方案架构师如何实现此目标?
A.在专用子网和网络之间建立站点到站点的VPN连接
公共站点已部署
B.在公共子网中创建NAT网关将来自私有子网的出站流量路由通过
NAI网关
C. 为部署EC2实例的私有子网创建网络ACL,仅允许来自公共网站IP地址范围的访问。
D. 创建一个仅允许来自公共网站IP地址范围连接的安全组,并将该安全组附加到EC2实例。
Answer: B

答案B:您可以使用网络地址转换(NAT)网关,让私有子网中的实例能够连接到Internet或其他AWS服务,同时阻止Internet主动发起与这些实例的连接。
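补充一个与选项 B 对应的最小 boto3 示意(子网、弹性 IP 分配 ID、路由表 ID 均为假设值):在公有子网创建 NAT 网关,并把私有子网的默认路由指向它:

```python
import boto3

ec2 = boto3.client("ec2")

# 在公有子网中创建 NAT 网关(需要一个已分配的弹性 IP)
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-1",          # 假设值
    AllocationId="eipalloc-0123456789",  # 假设值
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# 私有子网的路由表:0.0.0.0/0 走 NAT 网关
ec2.create_route(
    RouteTableId="rtb-private-1",        # 假设值
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```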

QUESTION 185
A company has created an isolated backup of its environment in another Region. The application is running in warm standby mode and is fronted by an Application Load Balancer (ALB). The current failover process is manual and requires updating a DNS alias record to point to the secondary ALB in another Region.
What should a solution architect do to automate the failover process?
A. Enable an ALB health check
B. Enable an Amazon Route 53 health check.
C. Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint.
D. Create conditional forwarding rules on Amazon Route 53 pointing to an internal BIND DNS
server.
Answer: B
一家公司在另一个区域创建了其环境的隔离备份。该应用程序以热备(warm standby)模式运行,并位于应用程序负载均衡器(ALB)之后。当前的故障转移过程是手动的,需要更新DNS别名记录以指向另一个区域中的辅助ALB。解决方案架构师应该怎么做才能使故障转移过程自动化?
A. 启用ALB运行状况检查。
B. 启用Amazon Route 53运行状况检查。
C. 在Amazon Route 53上创建指向ALB终端节点的CNAME记录。
D. 在Amazon Route 53上创建指向内部BIND DNS服务器的条件转发规则。

https://aws.amazon.com/premiumsupport/knowledge-center/route-53-dns-health-checks/
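补充一个与选项 B 对应的最小 boto3 示意(域名为假设值):创建 Route 53 健康检查,之后把返回的 ID 关联到主记录的故障转移策略即可实现自动切换:

```python
import boto3, uuid

route53 = boto3.client("route53")

resp = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),   # 幂等性标识
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",  # 假设的主站点域名
        "Port": 443,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
print(resp["HealthCheck"]["Id"])  # 将该 ID 填入 PRIMARY 故障转移记录的 HealthCheckId
```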

QUESTION 186
A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner
must be able to access all objects.
Which action should be taken to share the S3 bucket?
A. Update the bucket to be a Requester Pays bucket
B. Update the bucket to enable cross-origin resource sharing (CORS)
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects
D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
Answer: C
一家公司需要与外部供应商共享一个Amazon S3存储桶。存储桶拥有者必须能够访问所有对象。应该采取什么行动来共享S3存储桶?
A. 将存储桶更新为“请求者付费”存储桶。
B. 更新存储桶以启用跨域资源共享(CORS)。
C. 创建存储桶策略,要求用户在上传对象时授予bucket-owner-full-control。
D. 创建IAM策略,要求用户在上传对象时授予bucket-owner-full-control。

QUESTION 187
A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented or are implemented partially, resulting in data being stored in high-cost storage.
Which solution will lower costs without compromising the availability of objects?
A. Use S3 ACLs
B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots
C. Use S3 Intelligent-Tiering storage
D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).
一家公司使用Amazon S3作为其对象存储解决方案。该公司有数千个S3存储桶用于存储数据,其中一些存储桶的数据访问频率低于其他存储桶。解决方案架构师发现,生命周期策略没有被一致地实施,或只实施了一部分,导致数据被存放在高成本的存储中。
哪种解决方案可以在不影响对象可用性的情况下降低成本?

Answer: C

QUESTION 188
A solution architect is performing a security review of a recently migrated workload. The workload
is a web application that consists of amazon EC2 instances in an Auto Scaling group behind an
Application Load balancer. The solution architect must improve the security posture and minimize
the impact of a DDoS attack on resources.
Which solution is MOST effective?
A. Configure an AWS WAF web ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF web ACL on the CloudFront distribution.
B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
D. Enable Amazon GuardDuty and configure findings to be written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
Answer: A
解决方案架构师正在对最近迁移的工作负载执行安全检查。工作负载是一个Web应用程序,由Application Load Balancer后面Auto Scaling组中的Amazon EC2实例组成。解决方案架构师必须改善安全状况,并最大程度地减少DDoS攻击对资源的影响。哪种解决方案最有效?
A. 配置带基于速率规则的AWS WAF Web ACL。创建一个指向应用程序负载均衡器的Amazon CloudFront分配,并在CloudFront分配上启用该WAF Web ACL。
B. 创建一个自定义AWS Lambda函数,将已识别的攻击加入公共漏洞池以捕获潜在的DDoS攻击,并使用识别到的信息修改网络ACL以阻止访问。
C. 启用VPC流日志并将其存储在Amazon S3中。创建自定义AWS Lambda函数来分析日志以查找DDoS攻击,并修改网络ACL以阻止已识别的源IP地址。
D. 启用Amazon GuardDuty并将检测结果写入Amazon CloudWatch。使用CloudWatch Events为DDoS警报创建事件,触发Amazon Simple Notification Service(Amazon SNS);由Amazon SNS调用自定义AWS Lambda函数分析日志查找DDoS攻击,并修改网络ACL以阻止已识别的源IP地址。

答案是A。AWSWAF是一种Web应用程序防火墙,可通过检查流量内联来帮助检测和缓解Web应用程序层DDoS攻击。应用程序层DDoS攻击使用格式正确但恶意的请求来规避缓解并消耗应用程序资源。您可以定义自定义安全规则(也称为Web ACL),其中包含一组条件,规则和操作以阻止攻击流量。定义Web ACL之后,您可以将它们应用于CloudFront分配,并且Web ACL将按照您在配置它们时指定的优先级顺序进行评估。为每个Web ACL提供了实时指标和示例Web请求。

使用基于速率的规则配置AWS WAF Web ACL。创建一个指向应用程序负载均衡器的Amazon CloudFront分配,并在该CloudFront分配上启用此WAF Web ACL。

QUESTION 189
A company has a custom application running on an Amazon EC2 instance that:
- Reads a large amount of data from Amazon S3
- Performs a multi-stage analysis
- Writes the results to Amazon DynamoDB
The application writes a significant number of large temporary files during the multi-stage analysis.
The process performance depends on the temporary storage performance. What would be the
fastest storage option for holding the temporary files?

A. Multiple Amazon S3 buckets with Transfer Acceleration for storage
B. Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization
C. Multiple Amazon EFS volumes using the Network File System version 4.1 (NFSv4.1) protocol.
D. Multiple instance store volumes with software RAID 0.
Answer: D
一家公司在Amazon EC2实例上运行一个自定义应用程序,该应用程序:从Amazon S3读取大量数据;执行多阶段分析;将结果写入Amazon DynamoDB。应用程序在多阶段分析期间会写入大量较大的临时文件,流程性能取决于临时存储的性能。保存这些临时文件最快的存储选项是什么?
A. 多个启用了Transfer Acceleration的Amazon S3存储桶
B. 多个启用了预置IOPS和EBS优化的Amazon EBS卷
C. 多个使用Network File System版本4.1(NFSv4.1)协议的Amazon EFS卷
D. 多个通过软件RAID 0组合的实例存储卷

RAID 0通过条带化将多块磁盘的I/O性能叠加。您可以使用EBS或实例存储来创建RAID 0。这里是临时存储,意味着可以使用实例存储,它能提供出色的I/O性能。

QUESTION 190
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted on the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers. Which replacement for the on-premises file share is MOST resilient and durable?
A. Migrate the file share to Amazon RDS.
B. Migrate the file share to AWS Storage Gateway.
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
解决方案架构师必须将Windows Internet信息服务(IIS)Web应用程序迁移到AWS。该应用程序当前依赖于托管在用户本地网络附加存储(NAS)上的文件共享。解决方案架构师已建议迁移IIS Web服务器。哪种本地文件共享的替代方案最具弹性和持久性?
A. 将文件共享迁移到Amazon RDS。
B. 将文件共享迁移到AWS Storage Gateway。
C. 将文件共享迁移到Amazon FSx for Windows File Server。
D. 将文件共享迁移到Amazon Elastic File System(Amazon EFS)。

Answer: C Explanation: https://aws.amazon.com/fsx/windows/

**在今日的再创新(re:Invent 2018)大会上,亚马逊宣布了基于 Windows Server 的 FSx 文件系统,以便企业在云环境中运行完全兼容的 Windows 应用程序。**此外,它可被 Windows 文件服务器全权管理、支持通过吞吐量(IOPS)巨大的 SMB 协议访问、并且能够实现亚毫秒级的一致性能。

FSx for Windows 文件系统的亮点包括,(1)可访问性与协议支持:

可通过亚马逊 Elastic Compute Cloud(EC2)云实例、WorkSpaces 虚拟桌面、AppStream 2.0 应用程序、以及 VMware Cloud on AWS 进行访问。

(2)性能与可调谐性:

亚马逊 FSx for Windows 文件系统可提供一致的性能、亚毫秒级的延迟。文件系统可大至 64TB,吞吐量 2048 MB/s 。

(3)可管理性:

您的文件系统是完全托管的,数据以冗余形式存储在 AWS 的可用区域,每天自动进行增量备份,支持在必要时进行额外的备份。

(4)安全性:

多级访问控制与数据保护,文件系统端点在虚拟私有云(VPCs)中创建,访问受到安全性组策略的限制。符合 PCI-DSS 规范,可用于 HIPAA-兼容的应用程序。

(5)多可用区部署:

创建的文件系统,位于明确的 AWS 可用区域,用户可通过 Microsoft DFS 工具来设置自动复制和容错(失效备援),支持最高跨多个文件系统的 300PB 共享空间。

QUESTION 191
An application running on an Amazon EC2 instance in VPC-A needs to access files in another
EC2 instance in VPC-B. Both are in separate AWS accounts.
The network administrator needs to design a solution to enable secure access to EC2 instance in
VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth
concerns.
Which solution will meet these requirements?
A. Set up a VPC peering connection between VPC-A and VPC-B.
B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.
C. Attach a virtual private gateway to VPC-B and enable routing from VPC-A.
D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate
routes from VPC-B
在VPC-A中的Amazon EC2实例上运行的应用程序需要访问VPC-B中另一个EC2实例中的文件。两者都在单独的AWS账户中。
网络管理员需要设计一种解决方案,以允许从VPC-A安全访问VPC-B中的EC2实例。连接不应有单点故障或带宽问题。哪种解决方案可以满足这些要求? 
A.在VPC-A和VPC-B之间建立VPC对等连接。 B.为在VPC-B中运行的EC2实例设置VPC网关端点。 
C.将虚拟专用网关连接到VPC-B,并启用从VPC-A进行路由。 D.为在VPC-B中运行的EC2实例创建一个专用虚拟接口(VIF),并从VPC-B添加适当的路由

Answer: A Explanation: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The traffic remains in the private IP space. All inter-region traffic is encrypted, with no single point of failure or bandwidth bottleneck. https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

QUESTION 192
A company is seeing access requests by some suspicious IP addresses. The security team
discovers the requests are from different IP addresses under the same CIDR range. What should
a solutions architect recommend to the team?
A. Add a rule in the inbound table of the security to deny the traffic from that CIDR range.
B. Add a rule in the outbound table of the security group to deny the traffic from that CIDR range.
C. Add a deny rule in the inbound table of the network ACL with a lower number than other rules.
D. Add a deny rule in the outbound table of the network ACL with a lower rule number than other
rules.
一家公司看到一些可疑IP地址的访问请求。安全团队发现请求来自相同CIDR范围内的不同IP地址。解决方案架构师应向团队推荐什么? 
A.在安全性的入站表中添加一条规则,以拒绝来自该CIDR范围的流量。 B.在安全组的出站表中添加一条规则,以拒绝来自该CIDR范围的流量。
C.在网络ACL的入站表中添加一个拒绝规则,该规则的编号要比其他规则少。 D.在网络ACL的出站表中添加一个拒绝规则,该规则的规则号比其他规则要少

Answer: C Explanation: You can only create deny rules with network ACLs; it is not possible with security groups. Network ACLs process rules in order, from the lowest numbered rule to the highest, until they reach an allow or deny match.

Therefore, the solutions architect should add a deny rule in the inbound table of the network ACL with a lower rule number than other rules. CORRECT: "Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules" is the correct answer. INCORRECT: "Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules" is incorrect, as this will only block outbound traffic. INCORRECT: "Add a rule in the inbound table of the security group to deny the traffic from that CIDR range" is incorrect, as you cannot create a deny rule with a security group. INCORRECT: "Add a rule in the outbound table of the security group to deny the traffic from that CIDR range" is incorrect, as you cannot create a deny rule with a security group. References: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
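作为参考，下面是一个通过 aws ec2 create-network-acl-entry 的 --cli-input-json 输入添加入站拒绝规则的大致示例，其中ACL ID、规则编号50和CIDR均为假设值，实际应替换为可疑的CIDR范围：

```json
{
  "NetworkAclId": "acl-0123456789abcdef0",
  "RuleNumber": 50,
  "Protocol": "-1",
  "RuleAction": "deny",
  "Egress": false,
  "CidrBlock": "203.0.113.0/24"
}
```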

QUESTION 193
A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for
cross- communication. A recent increase in account creations and VPCs has made it difficult to
maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs.
There are also new requests to create site-to-site VPNs with some of the VPCs, A solutions
architect has been tasked with creating a centralized networking setup for multiple accounts, VPCs,
and VPNs.
Which networking solution meets these requirements?
A. Configure shared VPCs and VPNs and share to each other
B. Configure a hub-and-spoke and route all traffic through VPC peering,
C. Configure an AWS Direct Connect between all VPCs and VPNs.
D. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
Answer: D
公司正在使用VPC对等策略在单个区域中连接其VPC，以实现相互通信。最近账户和VPC数量的增加使得维护VPC对等策略变得困难，该公司预计将增长到数百个VPC。
此外，还有使用部分VPC创建站点到站点VPN的新需求。解决方案架构师的任务是为多个账户、VPC和VPN创建集中式网络设置。
哪种网络解决方案满足这些要求？
A.配置共享的VPC和VPN并彼此共享 B.配置中心辐射(hub-and-spoke)拓扑并通过VPC对等路由所有流量
C.在所有VPC和VPN之间配置AWS Direct Connect。 D.使用AWS Transit Gateway配置一个传输网关，并连接所有VPC和VPN。

AWS Transit Gateway通过中央集线器连接VPC和本地网络。这简化了您的网络,并结束了复杂的对等关系。它充当云路由器–每个新连接仅建立一次。 当您进行全球扩展时,区域间对等使用AWS全球网络将AWS Transit网关连接在一起。您的数据将自动加密,并且永远不会通过公共互联网传输。而且,由于其居中地位,AWS Transit Gateway Network Manager在整个网络上都具有独特的视图,甚至可以连接到软件定义的广域网(SD-WAN)设备。

QUESTION 194
A monolithic application was recently migrated to AWS and is now running on a single Amazon
EC2 instance. Due to application limitations, it is not possible to use automatic scaling to scale
out the application. The chief technology officer (CTO) wants an automated solution to restore the
EC2 instance in the unlikely event the underlying hardware fails,
What would allow for automatic recovery of the EC2 instance as quickly as possible?
A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it
becomes impaired.
B. Configure an Amazon CloudWatch alarm to trigger an SNS message that alerts the CTO when
the EC2 instance is impaired.
C. Configure AWS CloudTrail to monitor the health of the EC2 instance, and if it becomes impaired,
triggered instance recovery.
D. Configure an Amazon EventBridge event to trigger an AWS Lambda function once an hour that
checks the health of the EC2 instance and triggers instance recovery if the EC2 instance is
unhealthy.
Answer: A
一个单体应用程序最近已迁移到AWS，目前运行在单个Amazon EC2实例上。由于应用程序的限制，无法使用自动扩展对应用程序进行横向扩展。
首席技术官(CTO)希望有一个自动化解决方案，在底层硬件出现故障(虽然可能性不大)时恢复EC2实例。怎样才能尽快自动恢复EC2实例？
A.配置一个Amazon CloudWatch警报,如果警报受损,该警报将触发EC2实例的恢复。 B.配置一个Amazon CloudWatch警报以触发SNS消息,以在EC2实例受损时向CTO发出警报。 
C.配置AWS CloudTrail来监视EC2实例的运行状况,如果它受损,则触发实例恢复。
D.配置一个Amazon EventBridge事件以每小时一次触发一次AWS Lambda函数,以检查EC2实例的运行状况,并在EC2实例运行不正常时触发实例恢复

Explanation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html

使用Amazon CloudWatch警报操作,您可以创建自动停止,终止,重新引导或恢复EC2实例的警报。当您不再需要运行实例时,可以使用停止或终止操作来节省资金。您可以使用重新引导和恢复操作来自动重新引导那些实例,或者在发生系统损坏时将它们恢复到新硬件上。

在许多情况下,您可能需要自动停止或终止实例。例如,您可能具有专用于批处理工资单处理作业或科学计算任务的实例,这些实例会运行一段时间,然后完成其工作。您可以停止或终止它们,而不是让这些实例闲置(并产生费用),这可以帮助您节省资金。使用停止和终止警报操作之间的主要区别在于,如果需要稍后再次运行已停止的实例,则可以轻松地重新启动它。您还可以保留相同的实例ID和根卷。但是,您无法重新启动已终止的实例。相反,您必须启动一个新实例。
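下面是一个通过 aws cloudwatch put-metric-alarm 的 --cli-input-json 输入创建自动恢复告警的大致示例(告警名称、实例ID和区域均为假设值)：当系统状态检查连续失败时，触发EC2的recover操作，把实例恢复到新的底层硬件上：

```json
{
  "AlarmName": "ec2-auto-recover",
  "Namespace": "AWS/EC2",
  "MetricName": "StatusCheckFailed_System",
  "Dimensions": [
    { "Name": "InstanceId", "Value": "i-0123456789abcdef0" }
  ],
  "Statistic": "Maximum",
  "Period": 60,
  "EvaluationPeriods": 2,
  "Threshold": 1,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "AlarmActions": [
    "arn:aws:automate:us-east-1:ec2:recover"
  ]
}
```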

QUESTION 195
A company has created a VPC with multiple private subnets in multiple Availability Zones (AZs)
and one public subnet in one of the AZs. The public subnet is used to launch a NAT gateway.
There are instances in the private subnets that use the NAT gateway to connect to the internet. In
the case of an AZ failure, the company wants to ensure that the instances are not all
experiencing internet connectivity issues and that there is a backup plan ready.
Which solution should a solutions architect recommend that is MOST highly available?
A. Create a new public subnet with a NAT gateway in the same AZ Distribute the traffic between the
two NAT gateways
B. Create an Amazon EC2 NAT instance in a now public subnet Distribute the traffic between the
NAT gateway and the NAT instance
C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic
from the private subnets in each AZ to the respective NAT gateway.
D. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with
the NAT instance and associate the instance with an Auto Scaling group with an appropriate
scaling policy.
Answer: C
一家公司创建了一个VPC,该VPC在多个可用区(AZ)中具有多个专用子网,在一个可用区中具有一个公用子网。
公共子网用于启动NAT网关。私有子网中的实例使用NAT网关连接到Internet。如果发生可用区故障，该公司希望确保这些实例不会全部遇到Internet连接问题，
并且已经准备好备份计划。解决方案架构师应该建议哪种解决方案具有最高的可用性?
A.在同一AZ中使用NAT网关创建新的公共子网在两个NAT网关之间分配流量
B.在现在的公共子网中创建Amazon EC2 NAT实例在NAT网关和NAT实例之间分配流量
C.在每个可用区中创建公共子网，并在每个子网中启动一个NAT网关，将每个可用区中私有子网的流量配置到相应的NAT网关
D.在同一公共子网中创建Amazon EC2 NAT实例，将NAT网关替换为NAT实例，并将该实例关联到具有适当扩展策略的Auto Scaling组
QUESTION 196
A company has multiple AWS accounts, for various departments. One of the departments wants
to share an Amazon S3 bucket with all other department.
Which solution will require the LEAST amount of effort-?
A. Enable cross-account S3 replication for the bucket
B. Create a pre signed URL tor the bucket and share it with other departments
C. Set the S3 bucket policy to allow cross-account access to other departments
D. Create IAM users for each of the departments and configure a read-only IAM policy
Answer: C
一家公司有多个适用于各个部门的AWS账户。其中一个部门希望与所有其他部门共享一个Amazon S3存储桶。
哪种解决方案需要最少的努力? A.为存储桶启用跨帐户S3复制B.创建一个预先签名的URL来存储桶并与其他部门共享
设置S3存储桶策略以允许跨帐户访问其他部门D.为每个部门创建IAM用户部门并配置只读IAM策略

桶策略是S3的中央控制策略。

使用存储桶策略来管理跨账户控制并审计 S3 对象的权限。如果您在存储桶级别应用存储桶策略,则可以定义拥有访问权限的人(委托人元素)、他们可以访问的对象(资源元素)以及他们访问对象的方式(操作元素)。如果您在存储桶级别应用存储桶策略,将可以为存储桶中的不同对象定义精细访问权限。您还可以检查存储桶策略,以了解谁有权访问 S3 存储桶中的对象。
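下面给出一个跨账户共享存储桶的桶策略大致示例(账户ID 111122223333 与桶名 shared-department-bucket 均为假设值)，允许其他部门所在账户对该桶进行只读访问：

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": [ "s3:GetObject", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::shared-department-bucket",
        "arn:aws:s3:::shared-department-bucket/*"
      ]
    }
  ]
}
```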

QUESTION 197
A company collects temperature, humidity, and atmospheric pressure data in cities across
multiple continents. The average volume of 'data collected per site each day is 500 GB. Each site
has a high-speed internet connection. The company's weather forecasting applications are based
in a single Region and analyze the data daily,
What is the FASTEST way to aggregate data for all of these global sites?
A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to
directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region
replication to copy objects to the destination bucket.
C. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region
replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon
EBS volume. Once a day, take an EBS snapshot and copy it to the centralized Region. Restore the
EBS volume in the centralized Region and run an analysis on the data daily.
Answer: A
一家公司收集多个大洲城市的温度,湿度和大气压力数据。每个站点每天收集的平均数据量为500 GB。
每个站点都有高速互联网连接。该公司的天气预报应用程序位于单个区域,并且每天分析数据。对于所有这些全球站点的数据进行汇总的最快方法是什么? 
A.在目标存储桶上启用Amazon S3 Transfer Acceleration。使用分段上传将站点数据直接上传到目标存储桶。
B.将站点数据上传到最近的AWS区域中的Amazon S3存储桶。使用S3跨区域复制将对象复制到目标存储桶。
C.将站点数据上传到最近的AWS区域中的Amazon S3存储桶。使用S3跨区域复制将对象复制到目标存储桶。
D.将数据上传到最近区域中的Amazon EC2实例，将数据存储在Amazon EBS卷中。每天创建一次EBS快照并将其复制到集中区域。在集中区域中还原EBS卷并每天对数据进行分析。
QUESTION 198
A company has implemented one of its microservices on AWS Lambda that accesses an Amazon
DynamoDB table named Books. A solutions architect is designing an IAM policy to be attached to
the Lambda function's IAM role, giving it access to put, update, and delete items in the Books
table. The IAM policy must prevent the function from performing any other actions on the Books table
or any other table. Which IAM policy would fulfill these needs and provide the LEAST privileged
access?
一家公司已在AWS Lambda上实现了其微服务之一,该微服务可访问名为Books的Amazon DynamoDB表。
解决方案架构师正在设计一个附加到Lambda函数IAM角色的IAM策略，使其可以在Books表中执行放置(Put)、更新(Update)和删除(Delete)项目的操作。
该IAM策略必须阻止函数对Books表或任何其他表执行任何其他操作。哪个IAM策略可以满足这些需求并提供最小的特权访问？
"Version": "2012-10-17",
"statement":I
"sid": " PutUpdateDeleteonBooks",
"Effect": "Allow",
"Action" :
"dynamodb: PutItem",
"dynamodb: UpdateItem",
"dynamodb: DeleteItem"
"Resource": "arn: aws:dynamodb:us-west-2:123456789012:table/Books"

"Version": "2012-10-17",
"statement": [
"sid": "PutUpdateDeleteonBooks",
"Effect": "Allow",
"dynamodb: PutItem" ,
"dynamodb: UpdateItem"
"dynamodb: DeleteItem"
ce": "arnsaws :dynamodb:us-west-2 :123456789012 :table/*"

C. 1
"version“:“2012-10-17“,
"Statement":
"sid”:_”PutUpdateDeleteOnBooks ,
”ffect”:_”A11๐ฬ”,
"Action“: "dynamodb:*",
”Resource”:”aะn:ลฬs : dynamodb:นs-w@st-2 :223456789012:a1eBoo

D.
"Version”:_ *2012-10-17”,
"statement":
"Sid”:_ "PutUpdateDeleteOnBooks",
"effect": "A1l๏พ”,
"Action": "dynamodb:*" ๑
"Resource": "arn: aws:dynamodb:us-west -2:123456789012 :table/Books"
“sid”:”PutUpdateDeleteOnBook,
"Effect”: "Deny” ,
"Action”: "dynamodb:*" ๑
"Resource”: "arn: ลพธ :dynamodb :นธ-พ๏ธt-2:123456789012 :table/Books”
Answer: A
QUESTION 199
Application developers have noticed that a production application is very slow when business
reporting users run large production reports against the Amazon RDS instance backing the
application. The CPU and memory utilization metrics for the RDS instance do not exceed 60%
while the reporting queries are running. The business reporting users must be able to generate
reports without affecting the applications performance.
Which action will accomplish this?
A. Increase the size of the RDS instance
B. Create a read replica and connect the application to it.
C. Enable multiple Availability Zones on the RDS instance
D. Create a read replica and connect the business reports to it.
Answer: D
应用程序开发人员已经注意到,当业务报告用户针对支持该应用程序的Amazon RDS实例运行大型生产报告时,生产应用程序非常慢。报告查询运行时,RDS实例-d的CPU和内存使用率指标不超过60%。业务报告用户必须能够生成报告,而不影响应用程序性能。哪个动作可以完成此任务? A.增加RDS实例的大小B.创建一个只读副本并将应用程序连接到它。 C.在RDS实例上启用多个可用区D.创建一个只读复制并将业务报告连接到它
QUESTION 200
A company's packaged application dynamically creates and returns single-use text files in
response to user requests. The company is using Amazon CloudFront for distribution, but wants
to further reduce data transfer costs. The company can modify the application's source code.
What should a solution architect do to reduce costs?
A. Use Lambda@Edge to compress the files as they are sent to users.
B.Enable Amazon S3 Transfer Acceleration to reduce the response times.
C. Enable caching on the CloudFront distribution to store generated files at the edge.
D. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
Answer: A
公司的打包应用程序可以动态创建并返回一次性文本文件,以响应用户请求。该公司正在使用Amazon CloudFront进行分发,但希望将来减少数据传输成本。
该公司修改了应用程序的源代码。解决方案架构师应该怎么做才能降低成本?答:
使用Lambda @ Edge压缩文件发送给用户时的文件。 B.启用Amazon S3 Transfer Acceleration以减少响应时间。 
C.在CloudFront分布上启用缓存以将生成的文件存储在边缘。 D.使用Amazon S3分段上传将文件移至Amazon S3,然后再将其返回给用户。

Explanation: B seems more expensive; C does not seem right because they are single use files and will not be needed again from the cache; D multipart mainly for large files and will not reduce data and cost; A seems the best: change the application code to compress the files and reduce the amount of data transferred to save costs.

QUESTION 201
A public-facing web application queries a database hosted on a Amazon EC2 instance in a
private subnet. A large number of queries involve multiple table joins, and the application
performance has been degrading due to an increase in complex queries. The application team
will be performing updates to improve performance.
What should a solutions architect recommend to the application team? (Select TWO.)
A. Cache query data in Amazon SQS
B. Create a read replica to offload queries
C. Migrate the database to Amazon Athena
D, Implement Amazon DynamoDB Accelerator to cache data.
E. Migrate the database to Amazon RDS
Answer: BE
面向公众的Web应用程序查询专用子网中Amazon EC2实例上托管的数据库。大量查询涉及多个表联接,并且由于复杂查询的增​​加,
应用程序性能一直在下降。应用程序团队将执行更新以提高性能。 解决方案架构师应向应用程序团队推荐什么? (选择两个。)
A.在Amazon SQS中缓存查询数据 B.创建一个只读副本以减轻查询负担 
C.将数据库迁移到Amazon Athena D.实施Amazon DynamoDB Accelerator缓存数据。 E.将数据库迁移到Amazon RDS

具有只读副本的RDS应该可以完成这项工作。所以B和E。

QUESTION 202
A company has a Microsoft Windows-based application that must be migrated to AWS. This
application requires the use of a shared Windows file system attached to multiple Amazon EC2
Windows instances. What should a solution architect do to accomplish this?
A. Configure a volume using Amazon EFS Mount the EPS volume to each Windows Instance
B. Configure AWS Storage Gateway in Volume Gateway mode Mount the volume to each Windows
instance
C. Configure Amazon FSx for Windows File Server Mount the Amazon FSx volume to each
Windows Instance
D. Configure an Amazon EBS volume with the required size Attach each EC2 instance to the volume
Mount the file system within the volume to each Windows instance
Answer: C
公司有一个基于Microsoft Windows的应用程序,必须将其迁移到AWS。 这个
应用程序需要使用附加到多个Amazon EC2的共享Windows文件系统
Windows实例。 解决方案架构师应该怎么做才能做到这一点?
A.使用Amazon EFS配置卷将EPS卷安装到每个Windows实例
B.在卷网关模式下配置AWS Storage Gateway将卷安装到每个Windows
实例
C.为Windows文件服务器配置Amazon FSx将Amazon FSx卷安装到每个
Windows实例
D.配置具有所需大小的Amazon EBS卷将每个EC2实例附加到该卷
将卷内的文件系统挂载到每个Windows实例
QUESTION 203
A company recently expanded globally and wants to make its application accessible to users in
those geographic locations. The application is deployed on Amazon EC2 instances behind an
Application Load Balancer in an Auto Scaling group. The company needs the ability to shift traffic
from resources in one Region to another.
What should a solutions architect recommend?
A. Configure an Amazon Route 53 latency routing policy
B. Configure an Amazon Route 53 geolocation routing policy
C. Configure an Amazon Route 53 geoproximity routing policy.
D. Configure an Amazon Route 53 multivalue answer routing policy
Answer: C
一家公司最近在全球扩张,希望使这些地理位置的用户可以访问其应用程序。该应用程序正在Auto Scaling组中的应用程序负载均衡器后面的Amazon EC2实例上部署。
该公司需要能够将流量从一个区域的资源转移到另一个区域的能力。解决方案架构师应该建议什么? 
A.配置Amazon Route 53延迟路由策略B.配置Amazon Route 53地理位置路由策略
C.配置Amazon Route 53地理位置邻近路由策略。 D.配置Amazon Route 53多值答案路由策略

C. Geolocation routing policy – Use when you want to route traffic based on the location of your users. Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.

C.地理位置路由策略–在您要根据用户位置路由流量时使用。 Geoproximity路由策略–在您要基于资源的位置路由流量,以及(可选)将流量从一个位置的资源转移到另一位置的资源时使用。

Explanation: Keyword: Users in those Geographic Locations Condition: Ability Shift traffic from resources in One Region to Another Region The following table highlights the key function of each type of routing policy:

Geo-location: " Caters to different users in different countries and different languages. " Contains users within a particular geography and offers them a customized version of the workload based on their specific needs. Geolocation can be used for localizing content and presenting some or all of your website in the language of your users. Can also protect distribution rights. " Can be used for spreading load evenly between regions. " If you have multiple records for overlapping regions, Route 53 will route to the smallest geographic region. " You can create a default record for IP addresses that do not map to a geographic location. The following diagram depicts an Amazon Route 53 Geolocation routing policy configuration:

Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html https://aws.amazon.com/route53/ Video: https://youtu.be/RGWgfhZByAI

QUESTION 204
A company has several business systems that require access to data stored in a file share. The
business systems will access the file share using the Server Message Block (SMB) protocol. The
file share solution should be accessible from both the company's legacy on-premises
environment and AWS. Which services meet the business requirements? (Select TWO.)
A. Amazon EBS
B. Amazon EFS
C. Amazon FSx for Windows
D. Amazon S3
E. AWS Storage Gateway file gateway
Answer: CE
公司有多个业务系统,这些业务系统需要访问文件共享中存储的数据。业务系统将使用服务器消息块(SMB)协议访问文件共享。
该文件共享解决方案应该可以从公司的旧式本地环境和AWS中进行访问。哪些服务改变了业务需求? (选择两个。)A. Amazon EBS B. Amazon EFS 
C. Windows的Amazon FSx D. Amazon S3 E. AWS Storage Gateway文件网关

Explanation: Keyword: SMB + On-premises Condition: File accessible from both on-premises and AWS Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration, It offers single-AZ and multi- AZ deployment options, fully managed backups, and encryption of data at rest and in transit. You can optimize cost and performance for your workload needs with SSD and HDD storage options; and you can scale storage and change the throughput performance of your file system at any time. Amazon FSx file storage is accessible from Windows, Linux, and MacOS compute instances and devices running on AWS or on premises. How FSx for Windows File Server works

AWS Storage Gateway AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. lhese include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications. To support these use cases, Storage Gateway offers three different types of gateways- File Gateway, Tape Gateway, and Volume Gateway - that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access. Your applications connect to the service through a virtual machine or gateway hardware appliance using standard storage protocols, such as NFS, SMB, and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, Amazon EBS, and AWS Backup, providing storage for files, volumes, snapshots, and virtual tapes in AWS. The service includes a highly-optimized and efficient data transfer mechanism, with bandwidth management and automated network resilience. How Storage Gateway works

QUESTION 205

A company's operations team has an existing Amazon S3 bucket configured to notify an
Amazon SQS queue when new objects are created within the bucket. The development team also
wants to receive events when new objects are created. The existing operations team workflow
must remain intact.
Which solution would satisfy these requirements?
A. Create another SQS queue Update the S3 events in bucket to also update the new queue when a
new object is created.
B. Create a new SQS queue that only allows Amazon S3 to access the queue, Update Amazon S3
update this queue when a new object is created
C. Create an Amazon SNS topic and SQS queue for the Update. Update the bucket to send events
to the new topic, Updates both queues to poll Amazon SNS.
D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send
events to the new topic Add subscription for both queue in the topic,
公司的运营团队已将现有的Amazon S3存储桶配置为在存储桶中创建新对象时通知Amazon SQS队列。开发团队还希望在创建新对象时接收事件。
现有的运营团队工作流必须保持完整,哪个解决方案可以满足这些要求? 
A.创建另一个SQS队列更新存储桶中的S3事件,以在创建新对象时也更新新队列。 
B.创建一个仅允许Amazon S3访问该队列的新SQS队列,在创建新对象时,Update Amazon S3更新此队列。
C.为该Update创建Amazon SNS主题和SQS队列。更新存储桶以将事件发送到新主题,同时更新两个队列以轮询Amazon SNS。 
D.为存储桶更新创建一个Amazon SNS主题和SQS队列。更新存储桶以将事件发送到新主题,为该主题中的两个队列添加订阅,

Answer: D

SNS扇出的典型用例

SNS通知还可以发送推送通知到IOS,安卓,Windows和基于百度的设备,也可以通过电子邮箱或者SMS短信的形式发送到各种不同类型的设备上。

SNS的一些特点

  • SNS是实时的推送服务(Push),有别于SQS的拉取服务(Pull/Poll)
  • 拥有简单的API,可以和其他应用程序兼容
  • 可以通过多种不同的传输协议进行集成
  • 便宜、用多少付费多少的服务模型
  • 在AWS管理控制台上就可以进行简单的操作
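针对上面QUESTION 205的答案D(SNS扇出到多个SQS队列)，下面给出一个SQS队列访问策略的大致示例(队列与主题的ARN均为假设值)，允许指定的SNS主题向该队列发送消息；每个订阅该主题的队列都需要类似的策略：

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSnsTopicToSendMessage",
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:dev-team-queue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:bucket-object-created" }
      }
    }
  ]
}
```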
QUESTION 206
A company wants to deploy a shared file system for its .NET application servers and Microsoft
SQL Server database running on Amazon EC2 instances with Windows Server 2016. The solution
must be able to be integrated into the corporate Active Directory domain, be highly durable, be
managed by AWS, and provide high levels of throughput and IOPS.
Which solution meets these requirements?
A. Use Amazon FSx for Windows File Server
B. Use Amazon Elastic File System (Amazon EFS)
C. Use AWS Storage Gateway in file gateway mode.
D. Deploy a Windows file server on two On Demand instances across two Availability Zones.
Answer: A
一家公司希望为其在Windows Server 2016上的Amazon EC2实例上运行的.NET应用程序服务器和Microsoft SQL Server数据库部署共享文件系统。
该解决方案必须能够集成到公司Active Directory域中,并且必须高度耐用,由AWS进行管理,并提供吞吐量和IOPS级别。哪种解决方案满足这些要求? 
A.将Amazon FSx用于Windows文件服务器B.使用Amazon弹性文件系统(Amazon EFS)
C.在文件网关模式下使用AWS Storage Gateway。 D.在两个可用区中的两个按需实例上部署Windows文件服务器

Explanation: https://aws.amazon.com/fsx/windows/

QUESTION 207
A company is designing a new service that will run on Amazon EC2 instances behind an Elastic
Load Balancer. However, many of the web service clients can only reach IP addresses
whitelisted on their firewalls.
What should a solution architect recommend to meet the clients' needs?
A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an a associated Elastic IP address
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer
一家公司正在设计一项新服务,该服务将在Elastic Load Balancer后面的Amazon EC2实例上运行。但是,许多Web服务客户端只能访问其防火墙上列入白名单的IP地址。 
解决方案架构师应建议什么来满足客户的需求? 答:具有关联的弹性IP地址的网络负载平衡器。
B.具有关联的弹性IP地址的应用程序负载平衡器
C.Amazon Route 53托管区域中的A记录指向弹性IP地址 D.一个EC2实例,其公共IP地址在负载均衡器之前作为代理运行

当我们使用域名时,需要Route53,但是在这里我们需要使用IP地址发布webapp,因此Route53不行。

随后推出的第四层TCP负载均衡器Network Load Balancer(NLB)为每个可用区启用静态IP地址。这些静态地址不会改变，因此非常适合防火墙白名单场景。但是，NLB仅支持TCP流量，不支持HTTPS卸载，也没有ALB的第7层功能。

https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/

QUESTION 208
A company is designing a new service that will run on Amazon EC2 instance behind an Elastic
Load Balancer.
However, many of the web service clients can only reach IP addresses whitelisted on their
firewalls.
What should a solution architect recommend to meet the clients' needs?
A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an a associated Elastic IP address
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer
Answer: A


QUESTION 209
A company is investigating potential solutions that would collect, process, and store users'
service usage data.
The business objective is to create an analytics capability that will enable the company to gather
operational insights quickly using standard SQL queries.
The solution should be highly available and ensure Atomicity, Consistency, Isolation, and
Durability (ACID) compliance in the data tier .
Which solution should a solutions architect recommend?
A. Use Amazon DynamoDB transactions
B. Create an Amazon Neptune database in a Multi AZ design
C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design
D. Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon EBS Throughput Optimized
HDD (st1) storage.
Answer: C
一家公司正在研究潜在的解决方案,这些解决方案将收集,处理和存储用户的服务使用数据。业务目标是创建一种分析功能,
使公司能够使用标准SQL查询快速收集运营见解。该解决方案应具有高可用性，并确保数据层中的原子性、一致性、隔离性和持久性(ACID)合规性。
解决方案架构师应建议哪种解决方案？
A.使用Amazon DynamoDB事务 B.在多可用区设计中创建Amazon Neptune数据库
C.在多可用区设计中使用完全托管的Amazon RDS for MySQL数据库
D.在使用Amazon EBS吞吐量优化HDD(st1)存储的Amazon EC2实例上部署PostgreSQL。

QUESTION 210
A company runs a web service on Amazon EC2 instances behind an Application Load Balancer.
The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones.
The company needs a minimum of four instances at all times to meet the required service level
agreement (SLA) while keeping costs low.
If an Availability Zone fails, how can the company remain compliant with the SLA?
A. Add a target tracking scaling policy with a short cooldown period
B. Change the Auto Scaling group launch configuration to use a larger instance type
C. Change the Auto Scaling group to use six servers across three Availability Zones

D. Change the Auto Scaling group to use eight servers across two Availability Zones
Answer: C
一家公司在Application Load Balancer后面的Amazon EC2实例上运行Web服务。实例在两个可用区中的Amazon EC2 Auto Scaling组中运行。
该公司需要在任何时候至少有四个实例来满足所需的服务水平协议(SLA)，同时保持较低的成本。如果某个可用区发生故障，公司如何保持对SLA的合规？
A.添加目标跟踪扩展策略且冷却时间较短B.更改Auto Scaling组启动配置以使用较大的实例类型
C.更改Auto Scaling组以在三个可用区中使用六台服务器 D.更改Auto Scaling组以在两个可用区中使用八台服务器

I will go for C Under the SLA “o For Amazon EC2 (other than Single EC2 Instances), Amazon ECS, or Amazon Fargate, when all of your running instances or running tasks, as applicable, deployed in two or more AZs in the same AWS region (or, if there is only one AZ in the AWS region, that AZ and an AZ in another AWS region) concurrently have no external connectivity.” https://aws.amazon.com/compute/sla/ For D is one AZ is down then no external connectivity vs C, if one down, still got 2 to go.

QUESTION 211
An ecommerce company has noticed performance degradation of its Amazon RDS based web
application.
The performance degradation is attribute to an increase .in the number of read-only SQL queries
triggered by business analysts.
A solution architect needs to solve the problem with minimal changes to the existing web
application.
What should the solution architect recommend?
A. Export the data to Amazon DynamoDB and have the business analysts run their queries.
B. Load the data into Amazon ElasticCache and have the business analysts run their queries.
C. Create a read replica of the primary database and have the business analysts run their queries.
D. Copy the data into an Amazon Redshift cluster and have the business analysts rณn their queries.
Answer: C
一家电子商务公司注意到其基于Amazon RDS的Web应用程序的性能下降。性能下降归因于业务分析师触发的只读SQL查询数量的增加。
解决方案架构师需要以对现有Web应用程序的最小更改来解决问题。解决方案架构师应该建议什么? 
A.将数据导出到Amazon DynamoDB,并让业务分析师运行其查询。 B.将数据加载到Amazon ElasticCache中,并让业务分析师运行其查询。 
C.创建主数据库的只读副本,并让业务分析师运行其查询。 D.将数据复制到Amazon Redshift集群中,并让业务分析员调查他们的查询
QUESTION 212
A company is building applications in containers.
The company wants to migrate its on-premises development and operations services from its on-
premises data center to AWS.
Management states that production system must be cloud agnostic and use the same
configuration and administrator tools across production systems,
A solutions architect needs to design a managed solution that will align open-source software.
Which solution meets these requirements?
A._ Launch the containers on Amazon EC2 with EC2 instance worker nodes.
B.Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS workers
nodes.
C. Launch the containers on Amazon Elastic Containers service (Amazon ECS) with AWS Fargate
instances.
D. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2
instance worker nodes.
Answer: B
一家公司正在容器中构建应用程序。该公司希望将其本地开发和运营服务从其0本地数据中心迁移到AWS。
管理层指出,生产系统必须与云无关,并且必须在整个生产系统中使用相同的配置和管理员工具。
解决方案架构师需要设计一个可与开源软件保持一致的托管解决方案。哪种解决方案满足这些要求? 
A._在具有EC2实例工作程序节点的Amazon EC2上启动容器。 
B.在Amazon Elastic Kubernetes Service(Amazon EKS)和EKS worker节点上启动容器。
C.使用AWS Fargate实例在Amazon Elastic Containers服务(Amazon ECS)上启动容器。 
D.使用Amazon EC2实例工作程序节点在Amazon Elastic Container Service(Amazon EC)上启动容器

Explanation: When talking about containerized applications, the leading technologies which will always come up during the conversation are Kubernetes and Amazon ECS (Elastic Container Service). While Kubernetes is an open-sourced container orchestration platform that was originally developed by Google, Amazon ECS is AWS’ proprietary, managed container orchestration service.

因为它要求基于开源的解决方案,EKS是正确的答案。

使用ECS，您只能在AWS云中工作；而使用EKS，则可以跨AWS云和本地环境运行容器，这才是“云无关的(cloud agnostic)”。

QUESTION 213
A company is running a two-tier ecommerce website using AWS services.
The current architecture uses a public-facing Elastic Load Balancer that sends traffic to Amazon
EC2 instances in a private subnet.
The static content is hosted on EC2 instances, and the dynamic content is retrieved from a
MYSQL database.

The application is running in the United States. The company recently started selling to users in
Europe and Australia.
A solution architect needs to design solution so their international users have an improved
browsing experience.
Which solution is MOST cost- effective?
A. Host the entire website on Amazon S3.
B. Use Amazon CloudFront and Amazon S3 to host static images.
C. Increase the number of public load balancers and EC2 instances
D. Deploy the two-tier website in AWS Regions in Europe and Austraila.
Answer: B
一家公司正在使用服务运行一个两层电子商务网站。当前架构师使用面向发布的Elastic Load Balancer,
该流量将流量发送到私有子网中的Amazon EC2实例。静态内容托管在EC2实例上,动态内容从MYSQL数据库检索。 该应用程序正在美国运行。
该公司最近开始向欧洲和澳大利亚的用户销售产品。解决方案架构师需要设计解决方案,以便其国际用户拥有更好的浏览体验。哪种解决方案最划算?
A.将整个网站托管在Amazon S3上。 B.使用Amazon CloudFront和Amazon S3托管静态图像。 
C.增加公共负载平衡器和EC2实例的数量D.在欧洲和澳大利亚的AWS地区部署两层网站
QUESTION 214
A database is on an Amazon RDS MYSQL 5.6 Multi-AZ DB instance that experience highly
dynamic reads.
Application developers notice a significant slowdown when testing read performance from a
secondary AWS Region.
The developers want a solution that provides less than 1 second of read replication latency,
What should the solutions architect recommend?
A. Install MySQL on Amazon EC2 in the secondary Region.
B. Migrate the database to Amazon Aurora with cross-Region replicas.
C. Create another RDS for MySQL read replica in the secondary Region.
D. Implement Amazon ElastiCache to improve database query performance.
Answer: B
数据库位于经历高度动态读取的Amazon RDS MYSQL 5.6 Multi-AZ数据库实例上。
在测试辅助AWS区域的读取性能时,应用程序开发人员会注意到速度明显下降。开发人员想要一个提供小于1秒的读取复制延迟的解决方案,该解决方案架构师应该建议什么?
A.在二级区域的Amazon EC2上安装MySQL。B.通过跨区域副本将数据库迁移到Amazon Aurora。
C.在二级区域中为MySQL只读副本创建另一个RDS。D.实施Amazon ElastiCache以提高数据库查询性能
QUESTION 215
An operations team has a standard that states IAM policies should not be applied directly to
users.
Some new members have not been following this standard.
The operation manager needs a way to easily identify the users with attached policies.
What should a solutions architect do to accomplish this?
A. Monitor using AWS CloudTrail
B. Create an AWS Config rule to run daily
C. Publish IAM user changes to Amazon SNS
D. Run AWS Lambda when a user is modified
Answer: B
运营团队有一个标准,该标准规定IAM策略不应直接应用于用户。一些新成员尚未遵循此标准。
运营经理需要一种方法来轻松识别带有附加策略的用户。解决方案架构师应该怎么做才能做到这一点? 
A.使用AWS CloudTrail进行监控B.创建每天运行的AWS Config规则C
.在Amazon SNS上发布IAM用户更改D.在修改用户后运行AWS Lambda

Explanation: A new AWS Config rule is deployed in the account after you enable AWS Security Hub. The AWS Config rule reacts to resource configuration and compliance changes and send these change items to AWS CloudWatch, When AWS CloudWatch receives the compliance change, a CloudWatch event rule triggers the AWS Lambda function,
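一个可行的做法(示意，规则名称可自定义)是部署AWS托管的Config规则 IAM_USER_NO_POLICIES_CHECK，用来检查IAM用户是否直接附加了策略，例如通过 aws configservice put-config-rule 的 --cli-input-json 输入：

```json
{
  "ConfigRule": {
    "ConfigRuleName": "iam-user-no-policies-check",
    "Description": "Checks that IAM users do not have IAM policies attached directly",
    "Scope": {
      "ComplianceResourceTypes": [ "AWS::IAM::User" ]
    },
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK"
    }
  }
}
```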

QUESTION 216
A company has established a new AWS account.
The account is newly provisioned and no changes have been made to the default settings.
The company is concerned about the security of the AWS account root user.

What should be done to secure the root user?
A. Create IAM users for daily administrative tasks.
Disable the root user.
B. Create IAM users for daily administrative tasks.
Enable multi-factor authentication on the root user.
C. Generate an access key for the root user.
Use the access key for daily administration tasks instead of the AWS Management Console.
D. Provide the root user credentials to the most senior solution architect.
Have the solution architect use the root user for daily administration tasks.
Answer: B
一家公司已经建立了一个新的AWS账户。该帐户是新设置的,并且未更改默认设置。该公司担心AWS账户root用户的安全性。 
应该采取什么措施来保护root用户? A.创建用于日常管理任务的IAM用户。禁用root用户。
B.创建用于日常管理任务的IAM用户。在root用户上启用多因素身份验证。 
C.为根用户生成访问密钥。使用访问密钥代替AWS管理控制台执行日常管理任务。 
D.向最高级的解决方案架构师提供root用户凭据。让解决方案架构师使用root用户执行日常管理任务
QUESTION 217
A healthcare company stores highly sensitive patient records.
Compliance requires that multiple copies be stored in different locations Each record must be
stored for 7 years.
The company has a service level agreement (SLA) to provide records to government agencies
immediately for the first 30 days and then within 4 hours of a request thereafter.
What should a solutions architect recommend?
A. Use Amazon S3 with cross-Region replication enabled.
After 30 days, transition the data to Amazon S3 Glacier using lifecycle policy
B. Use Amazon S3 with cross-origin resource sharing (CORS) enabled.
After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy,
C. Use Amazon S3 with cross-Region replication enabled.
After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.
D. Use Amazon S3 with cross-origin resource sharing (CORS) enabled.
After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy
Answer: A
一家医疗保健公司存储高度敏感的患者记录。合规性要求将多份副本存储在不同的位置。每条记录必须存储7年。
该公司拥有服务水平协议(SLA),可在前30天立即向政府机构提供记录,然后在请求后的4小时内提供记录。解决方案架构师应该建议什么? 
A.在启用跨区域复制的情况下使用Amazon S3。30天后，使用生命周期策略将数据过渡到Amazon S3 Glacier。
B.在启用跨域资源共享(CORS)的情况下使用Amazon S3。30天后，使用生命周期策略将数据过渡到Amazon S3 Glacier。
C.在启用跨区域复制的情况下使用Amazon S3。30天后，使用生命周期策略将数据过渡到Amazon S3 Glacier Deep Archive。
D.在启用跨域资源共享(CORS)的情况下使用Amazon S3。30天后，使用生命周期策略将数据过渡到Amazon S3 Glacier Deep Archive。
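下面给出一个S3生命周期配置的大致示例(规则ID为假设值，2555天约等于7年)，可通过 put-bucket-lifecycle-configuration 应用：对象30天后转入Glacier，7年后过期删除：

```json
{
  "Rules": [
    {
      "ID": "patient-records-archive",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 2555 }
    }
  ]
}
```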

QUESTION 218
A solutions architect must create a highly available bastion host architecture.
The solution needs to be resilient within a single AWS Region and should require only minimal
effort to maintain,
What should the solutions architect do to meet these requirements?
A. Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener.
B. Create a Network Load Balancer backed by a Spot Fleet with instances in a group with instances
in a partition placement group,
C. Create a Network Load Balancer backed by the existing servers in different Availability Zones as
the target.
D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple
Availability Zones as the target.
Answer: D
解决方案架构师必须创建高度可用的堡垒主机体系结构。该解决方案需要在单个AWS区域内具有弹性,并且只需要进行最小的维护即可。
解决方案架构师应如何满足这些要求? 
A.创建一个网络负载均衡器,该负载均衡器由具有UDP侦听器的Auto Scaling组支持。 
B.创建一个由Spot Fleet支持的网络负载均衡器，其实例位于分区放置组中。
C.创建一个以不同可用区中现有服务器为目标的网络负载均衡器。
D.创建一个由Auto Scaling组支持的网络负载均衡器，并以多个可用区中的实例为目标。
QUESTION 219
A solution architect is designing a hybrid application using the AWS cloud.
The network between the on- premises data center and AWS will use an AWS Direct Connect
(DX) connection.
The application connectivity between AWS and the on-premises data center must be highly
resilient,
Which DX configuration should be implemented to meet these requirements?
A. Configure a DX connection with a VPN on top of it.
B. Configure DX connections at multiple DX locations.
C. Configure a DX connection using the most reliable DX partner.
D. Configure multiple virtual interfaces on top of a DX connection.
Answer: B
解决方案架构师正在使用AWS云设计混合应用程序。内部数据中心与AWS之间的网络将使用AWS Direct Connect (DX)连接。 
AWS与本地数据中心之间的应用程序连接必须具有高度的弹性,应实施哪种DX配置以满足这些要求? 
A.在DX连接上配置VPN。 B.在多个DX位置配置DX连接。 
C.使用最可靠的DX伙伴配置DX连接。 D.在DX连接的顶部配置多个虚拟接口

推荐的最佳做法

高度灵活,容错的网络连接对于体系结构良好的系统至关重要。AWS建议从多个数据中心连接以实现物理位置冗余。设计远程连接时,请考虑使用冗余硬件和电信提供商。此外,最佳实践是使用动态路由的主动/主动连接来实现冗余网络连接之间的自动负载平衡和故障转移。提供足够的网络容量,以确保一个网络连接的故障不会淹没并降低冗余连接。

QUESTION 220
A company plans to store sensitive user data on Amazon S3.
Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.
What should a solution architect recommend to satisfy these requirements?
A. Server-side encryption with customer-provided encryption keys
B. Client-side encryption with Amazon S3 managed encryption keys
C. Server-side encryption with keys stored in AWS key Management Service (AWS KMS)
D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)
Answer: D
一家公司计划在Amazon S3上存储敏感用户数据。内部安全合规性要求在将数据发送到Amazon之前对数据进行人工数据加密,
解决方案架构师应建议哪些以满足这些要求?
A.使用客户提供的加密密钥进行服务器端加密
B.使用Amazon S3管理的加密密钥进行客户端加密
C.使用存储在AWS密钥管理服务(AWS KMS)中的密钥进行服务器端加密
D.使用存储在AWS Key Management Service(AWS KMS)中的主密钥进行客户端加密

Explanation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

QUESTION 221
A company is using Amazon EC2 to run its big data analytics workloads,
These variable workloads run each night, and it is critical they finish by the start of business the
following day.
A solutions architect has been tasked with designing the MOST cost-effective solution.
Which solution will accomplish this?
A. Spot Fleet
B. Spot Instances
C. Reserved Instances
D. On-Demand Instances
Answer: C
一家公司正在使用Amazon EC2来运行其大数据分析工作负载,这些可变工作负载每天晚上运行,
至关重要的是它们要在第二天开始营业时完成。解决方案架构师的任务是设计最具成本效益的MOST解决方案。哪种解决方案可以做到这一点?
A.现货机队B.现货实例C.预留实例D.按需实例
QUESTION 222
A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets
only.
Which method should a solutions architect implement to meet this requirement?
A. Create a bucket policy for each of the company's trusted S3 buckets that allows traffic only from
the company's trusted VPCs
B. Create a bucket policy for each of the company's trusted S3 buckets that allows traffic only from
the company's S3 gateway endpoint IDs

C. Create an S3 endpoint policy for each of the company's S3 gateway endpoints that blocks access
from any VPC other than the company's trusted VPCs
D. Create an S3 endpoint policy for each of the company's S3 gateway endpoints that provides
access to the Amazon Resource Name (ARN) of the trusted S3 buckets
Answer: D
公司强制要求Amazon S3网关终端节点必须仅允许流量流向受信任的存储桶。解决方案架构师应采用哪种方法来满足此要求? 
A.为公司的每个受信任的S3存储桶创建一个存储桶策略，仅允许来自公司受信任VPC的流量。 B.为公司的每个受信任的S3存储桶创建一个存储桶策略，仅允许来自公司S3网关端点ID的流量。
C.为公司的每个S3网关端点创建一个S3端点策略，阻止来自公司受信任VPC以外的任何VPC的访问。
D.为公司的每个S3网关端点创建一个S3端点策略，允许访问受信任S3存储桶的Amazon资源名称(ARN)。

创建端点时,可以将端点策略附加到该策略上,以控制对要连接的服务的访问。端点策略必须以JSON格式编写。并非所有服务都支持端点策略。

如果您正在使用Amazon S3的终端节点,则还可以使用Amazon S3存储桶策略来控制对来自特定终端节点或特定VPC的存储桶的访问。

D. S3的VPC端点通过VPC端点访问策略进行保护。这使您可以设置端点应该和不应该访问的S3存储桶。默认情况下,VPC中的任何用户或服务都可以访问任何S3资源。与S3存储桶策略一起使用,可以进一步优化对存储桶和对象的访问控制。
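下面是一个S3网关端点策略的大致示例(桶名 trusted-bucket 为假设值)，只允许通过该端点访问受信任存储桶的ARN：

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTrustedBucketsOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::trusted-bucket",
        "arn:aws:s3:::trusted-bucket/*"
      ]
    }
  ]
}
```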

QUESTION 223
A company is designing a web application using AWS that processes insurance quotes Users will
request quotes from the application.
Quotes must be separated by quote type, must be responded to within 24 hours, and must not be
lost.
The solution should be simple to set up and maintain.
Which solution meets these requirements?
A. Create multiple Amazon Kinesis data streams based on the quote type.
Configure the web application to send messages to the proper data stream.
Configure each backend group of application servers to pool messages from its own data stream
using the Kinesis Client Library (KCL)
B. Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon
SQS queues to their own SNS topic based on the quote type,
Configure the web application to publish messages to the SNS topic queue.
Configure each backend application server to work its own SQS queue
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the
Amazon SQS queues to the SNS topic.
Configure SNS message fltering to publish messages to the proper SQS queue based on the
quote type.
Configure each backend application server to work its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to
deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster.
Configure the web application to send messages to the proper delivery stream.
Configure each backend group of application servers to search for the messages from Amazon
ES and process them accordingly
Answer: C
一家公司正在使用AWS设计可处理保险报价的Web应用程序,用户将向该应用程序请求报价。报价必须按报价类型分开,必须在24小时内回复,并且不得丢失。该解决方案应该易于设置和维护。哪种解决方案满足这些要求?
A.根据报价类型创建多个Amazon Kinesis数据流。配置Web应用程序以将消息发送到正确的数据流。配置每个后端应用程序服务器组,以使用Kinesis Client Library(KCL)
B合并来自其自己的数据流的消息。创建多个Amazon Simple Notification Service(Amazon SNS)主题,并根据报价将Amazon SQS队列注册到自己的SNS主题类型,配置Web应用程序以将消息发布到SNS主题队列。配置每个后端应用程序服务器以工作其自己的SQS队列
C。创建单个Amazon Simple Notification Service(Amazon SNS)主题,并将Amazon SQS队列订阅SNS主题。配置SNS消息过滤,以根据报价类型将消息发布到适当的SQS队列。配置每个后端应用程序服务器以工作自己的SQS队列。 
	D.根据报价类型创建多个Amazon Kinesis Data Firehose交付流,以将数据流交付到Amazon Elasticsearch Service(Amazon ES)集群。配置Web应用程序以将消息发送到正确的传递流。配置应用程序服务器的每个后端组以搜索来自Amazon ES的消息并进行相应处理

Explanation: It all depends on where you want to do the quote type classification, i.e. in the app and send to different/multiple SNS topics (B), or use SNS filtering to do the type classification (C). The question doesn't really give you enough info to make a clear choice, but configuring SNS filtering is probably less work and easier to maintain than maintaining app code.

这完全取决于您要在哪里进行报价类型分类,即在应用程序中并将其发送到其他/多个SNS主题(B)或使用SNS过滤进行类型分类(C)。这个问题并没有真正为您提供足够的信息来做出明确的选择,但是配置SNS筛选是很可能的工作,比维护应用程序代码更容易维护。
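下面给出一个SNS订阅过滤策略(filter policy)的大致示例，假设发布消息时带有名为 quote_type 的消息属性(属性名与取值均为假设)，则处理车险报价的SQS队列订阅可使用如下策略，只接收对应类型的消息：

```json
{
  "quote_type": [ "auto" ]
}
```

其他报价类型的队列订阅各自配置类似的过滤策略即可，Web应用只需把消息发布到同一个SNS主题。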

QUESTION 224
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS
database. Compliance regulations mandate that all personally identifiable information (PII) be
encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST
amount of changes to the infrastructure?
A. Deploy AWS Certificate Manager to generate certificates.
Use the certificates to encrypt the database volume
B.Deploy AWS CloudHSM. generate encryption keys, and use the customer master key (CMK) to
encrypt database volumes.
C. Configure SSL encryption using AWS Key Management Service customer master keys (AWS
KMS CMKs) to encrypt database volumes
D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption
with AWS Key Management Service (AWS KMS) keys to encrypt instance and database
volumes.
Answer: D
一家公司在以Amazon RDS数据库为后盾的Amazon EC2上运行一个高度敏感的应用程序。法规要求所有静态身份信息都必须加密。解决方案架构师应建议哪种解决方案,以对基础结构进行最少的更改来满足此要求。” 
A.部署AWS Certificate Manager以生成证书。使用证书对数据库卷进行加密
B.Deploy AWS CloudHSM。生成加密密钥,以及使用客户主密钥(CMK)加密数据库卷。 
C.使用AWS Key Management Service客户主密钥(AWS KMS CMK)配置SSL加密以加密数据库卷
D.使用AWS Key Management Service(AWS KMS)密钥配置Amazon Elastic Block Store(Amazon EBS)加密和Amazon RDS加密实例和数据库卷

keyword least change to infra

D似乎是正确的选择,因为它将同时加密EC2 EBS卷和RDS数据库。

QUESTION 225
A company is creating an architecture for a mobile app that requires minimal latency for its users.
The company's architecture consists of Amazon EC2 instances behind an Application Load
Balancer running in an Auto Scaling group.
The EC2 instances connect to Amazon RDS. Application beta testing showed there was a
slowdown when reading the data. However, the metrics indicate that the EC2 instances do not
cross any CPU utilization thresholds.
How can this issue be addressed?
A. Reduce the threshold for CPU utilization in the Auto Scaling group
B. Replace the Application Load Balancer with a Network Load Balancer.
C. Add read replicas for the RDS instances and direct read traffic to the replica.
D. Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.
Answer: C
一家公司正在为移动应用程序创建一种架构,该架构需要为其用户提供最小的延迟。
该公司的架构由在Auto Scaling组中运行的Application Load Balancer后面的Amazon EC2实例组成。 EC2实例连接到Amazon RDS。
应用程序Beta测试表明,读取数据时速度变慢。但是,指标表明EC2实例未超过任何CPU使用率阈值。 如何解决这个问题?
A.降低Auto Scaling组中CPU利用率的阈值。 B.用网络负载平衡器替换应用程序负载平衡器。
C.为RDS实例添加只读副本,并将只读流量定向到该副本。 D.向RDS实例添加多可用区支持,并将读取流量定向到新的EC2实例。
QUESTION 226
A company recently released a new type of internet-connected sensor.
The company is expecting to sell thousands of sensors, which are designed to stream high
volumes of data each second to a central location.
A solutions architect must design a solution that ingests and stores data so that engineering
teams can analyze it in near-real time with millisecond responsiveness.
Which solution should the solutions architect recommend?
A. Use an Amazon SQS queue to ingest the data.
Consume the data with an AWS Lambda function, which then stores the data in Amazon
Redshift.
B. Use an Amazon SQS queue to ingest the data.
Consume the data with an AWS Lambda function, which then stores the data in Amazon
DynamoDB .
C. Use Amazon Kinesis Data Streams to ingest the data.
Consume the data with an AWS Lambda function, which then stores the data in Amazon
Redshift.
D. Use Amazon Kinesis Data Streams to ingest the data.
Consume the data with an AWS Lambda function, which then stores the data in Amazon
DynamoDB.
Answer: D
一家公司最近发布了一种新型的互联网传感器,该公司希望出售数千种传感器,这些传感器旨在将每秒的大量数据流传输到一个中心位置。
解决方案架构师必须设计一种可以吸收和存储数据的解决方案,以便工程团队可以毫秒级的响应速度实时分析数据。
解决方案架构师应建议哪种解决方案? 
A.使用Amazon SQS队列提取数据。使用AWS Lambda函数使用数据,该函数随后将数据存储在Amazon Redshift中。
B.使用Amazon SOS队列提取数据。使用AWS Lambda函数使用数据,该函数随后将数据存储在Amazon DynamoDB中。
C.使用Amazon Kinesis数据流提取数据。使用AWS Lambda函数使用数据,该函数随后将数据存储在Amazon Redshift中。 
D.使用Amazon Kinesis数据流提取数据。使用AWS Lambda函数使用数据,然后将其存储在Amazon DynamoDB中

Explanation: https://aws.amazon.com/blogs/big-data/analyze-data-in-amazon-dynamodb-using-amazon-sagemaker-for-real-time-prediction/

QUESTION 227
A company is migrating a NoSQL database cluster to Amazon EC2.
The database automatically replicates data to maintain at least three copies of the data. I/O
throughput of the servers is the highest priority,
Which instance type should a solutions architect recommend for the migration?
A. Storage optimized instances with instance store
B. Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume
C. Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization
enabled
D. Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization
enabled
Answer: A
一家公司正在将NoSQL数据库集群迁移到Amazon EC2。数据库自动复制数据以维护至少三个数据副本。
服务器的I / O吞吐量是最高优先级,解决方案架构师应为迁移建议哪种实例类型? 
A.具有实例存储的存储优化实例
B.具有Amazon Elastic Block Store(Amazon EBS)卷的可突发通用实例
C.已启用Amazon Elastic Block Store(Amazon EBS)优化的内存优化实例
D.使用Amazon Elastic Block计算优化实例启用商店(Amazon EBS)优化

Instance storage is fastest, and the NoSQL DB keeps 3 copies of the data.

A是唯一适合所有需求的产品。针对IO优化的存储,复制和副本可在实例停止时为我们提供保护

这里的要求是I / O,唯一的选择是A

QUESTION 228
A company operates a website on Amazon EC2 Linux instances.
Some of the instances are failing. Troubleshooting points to insufficient swap space on the failed
instances.
The operations team lead needs a solution to monitor this,
What should a solutions architect recommend?
A. Configure an Amazon CloudWatch SwapUsage metric dimension.
Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch.
B. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics.
Monitor SwapUsage metrics in CloudWatch.
C. Install an Amazon CloudWatch agent on the instances.
Run an appropriate script on a set schedule.
Monitor SwapUtilizalion metrics in CloudWatch.
D. Enable detailed monitoring in the EC2 console.
Create an Amazon CloudWatch SwapUtilizalion custom metric.
Monitor SwapUtilization metrics in CloudWatch.
Answer: D (C?)
一家公司在Amazon EC2 Linux实例上运营一个网站。一些实例失败。故障排除指出故障实例上的交换空间不足。运营团队负责人需要一个解决方案来监控此情况。
解决方案架构师应该建议什么? 
A.配置Amazon CloudWatch交换使用量指标维度。在CloudWatch的EC2指标中监控“交换使用情况”维度。 
B.使用EC2元数据收集信息,然后将其发布到Amazon CloudWatch自定义指标。在CloudWatch中监控交换使用量指标。
C.在实例上安装Amazon CloudWatch代理。按照设定的时间表运行适当的脚本。在CloudWatch中监控交换利用率指标。
D.在EC2控制台中启用详细监视。创建一个Amazon CloudWatch交换利用率自定义指标。在CloudWatch中监控交换利用率指标。

Explanation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html

c是正确的,它在谈论旧的监视脚本,但是仍然允许使用自定义脚本。我们建议您使用CloudWatch代理收集指标和日志。向仍在使用旧的监视脚本从其Linux实例收集信息的客户提供了有关监视脚本的信息。不再支持旧的监视脚本

我相信这里的主要要求是监视交换使用率—与内存度量标准有关。CloudWatch没有内存指标。您可以做的是在实例上安装Cloudwatch代理,并配置一个自定义指标来监视内存,或者特别是Swap使用情况。是的,AWS不鼓励使用脚本,因为它们具有您可以利用的现有服务。在这种特定情况下,主要问题实际上是-在解决Cloudwatch的局限性时如何解决该问题?选项C解决了该需求。

交换利用率是详细监控的一部分

QUESTION 229
A company has two applications it wants to migrate to AWS,
Both applications process a large set of files by accessing the same files at the same time.
Both applications need to read the files with low latency.
Which architecture should a solutions architect recommend for this situation?
A. Configure two AWS Lambda functions to run the applications.
Create an Amazon EC2 instance with an instance store volume to store the data.
B. Configure two AWS Lambda functions to run the applications.
Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to
store the data.
C. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously.
Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned lOPS to store the
data.
D. Configure two Amazon EC2 instances to run both applications.
Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode
and Bursting Throughput mode to store the data.
Answer: D
一家公司有两个要迁移到AWS的应用程序,
这两个应用程序通过同时访问相同的文件来处理大量文件。
这两个应用程序都需要以低延迟读取文件。
解决方案架构师应针对这种情况推荐哪种架构?
A.配置两个AWS Lambda函数以运行应用程序。
使用实例存储卷创建一个Amazon EC2实例以存储数据。
B.配置两个AWS Lambda函数以运行应用程序。
使用Amazon Elastic Block Store(Amazon EBS)卷创建一个Amazon EC2实例以
存储数据。
C.配置一个内存优化的Amazon EC2实例以同时运行两个应用程序。
使用预置的lOPS创建Amazon Elastic Block Store(Amazon EBS)卷以存储
数据。
D.配置两个Amazon EC2实例来运行这两个应用程序。
使用通用(General Purpose)性能模式和突发(Bursting)吞吐量模式配置Amazon Elastic File System(Amazon EFS)来存储数据。

EFS就是干这个的

QUESTION 230
A company recently deployed a new auditing system to centralize information about operating
system versions, patching, and installed software for Amazon EC2 instances.
A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups
successfully send reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?
A. Use a scheduled AWS Lambda function and execute a script remotely on all EC2 instances to
send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system
when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to execute a custom script through user data to
send data to the audit system when instances are launched and terminated.
D.Execute a custom script on the instance operating system to send data to the audit system.
Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and
is terminated.
Answer: B

一家公司最近部署了新的审核系统,以集中有关Amazon EC2实例的操作系统版本,
补丁程序和已安装软件的信息。解决方案架构师必须确保通过EC2 Auto Scaling组配置的所有实例在启动和终止后立即将其成功发送到审计系统。
哪种解决方案可以最有效地实现这些目标?
A.使用预定的AWS Lambda函数并在所有EC2实例上远程执行脚本以
将数据发送到审核系统。
B.使用EC2 Auto Scaling生命周期挂钩执行自定义脚本,以将数据发送到审核系统
当实例启动和终止时。
C.使用EC2 Auto Scaling启动配置通过用户数据执行自定义脚本
启动和终止实例时将数据发送到审核系统。
D.在实例操作系统上执行自定义脚本，以将数据发送到审核系统。
配置该脚本在实例启动和终止时由EC2 Auto Scaling组执行。

生命周期挂钩(lifecycle hook)可以在Auto Scaling组启动或终止实例时暂停实例并执行自定义操作。实例被暂停后会一直处于等待状态,直到使用complete-lifecycle-action命令(或CompleteLifecycleAction API)完成生命周期操作,或者等待超时(默认一小时)。

例如,刚启动的实例完成启动流程后,生命周期挂钩可以将其暂停;在等待状态期间,您可以在实例上安装或配置软件,确保它在开始接收流量之前完全就绪。再比如发生缩容事件时,如果Auto Scaling组与Elastic Load Balancing一起使用,即将终止的实例会先从负载均衡器注销,然后生命周期挂钩在实例终止前将其暂停;在等待状态期间,您可以连接到该实例,在它被彻底终止之前下载日志或其他数据。挂钩本身只是Auto Scaling组上的一条配置,见下面的示意片段。

每个Auto Scaling组可以有多个生命周期挂钩,但挂钩数量存在上限。
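下面给出一个用CloudFormation定义启动阶段生命周期挂钩的示意片段(仅为示意草稿,其中引用的 MyASG、AuditTopic、HookRole 等资源名均为假设),实例启动时先被暂停,待自定义脚本把审计数据发送完毕后再继续;终止阶段只需把 LifecycleTransition 换成 autoscaling:EC2_INSTANCE_TERMINATING:

```json
{
  "Resources": {
    "LaunchReportHook": {
      "Type": "AWS::AutoScaling::LifecycleHook",
      "Properties": {
        "AutoScalingGroupName": { "Ref": "MyASG" },
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
        "NotificationTargetARN": { "Ref": "AuditTopic" },
        "RoleARN": { "Fn::GetAtt": ["HookRole", "Arn"] },
        "HeartbeatTimeout": 300,
        "DefaultResult": "CONTINUE"
      }
    }
  }
}
```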

QUESTION 231
A company requires a durable backup storage solution for its on-premises database servers while
ensuring on-premises applications maintain access to these backups for quick recovery.
The company will use AWS storage services as the destination for these backups.
A solutions architect is designing a solution with minimal operational overhead.
Which solution should the solutions architect implement?
A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
B. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
C. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
D. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.
Answer: A
一家公司需要为其本地数据库服务器提供持久的备份存储解决方案,同时还要确保本地应用程序保持对这些备份的访问权限以实现快速恢复。
该公司将使用AWS存储服务作为这些备份的目标。解决方案架构师正在设计具有最小运营开销的解决方案。解决方案架构师应实施哪种解决方案? 
A. 在本地部署AWS Storage Gateway文件网关,并将其与Amazon S3存储桶相关联。
B. 将数据库备份到AWS Storage Gateway卷网关,并使用Amazon S3 API访问它。
C. 将数据库备份文件传输到附加在Amazon EC2实例上的Amazon Elastic Block Store(Amazon EBS)卷。
D. 将数据库直接备份到AWS Snowball设备,并使用生命周期规则将数据移至Amazon S3 Glacier Deep Archive。

AWS Storage Gateway的典型应用

QUESTION 232
A company has a web server running on an Amazon EC2 instance in a public subnet with an
Elastic IP address.
The default security group is assigned to the EC2 instance.
The default network ACL has been modified to block all traffic.
A solutions architect needs to make the web server accessible from everywhere on port 443.
Which combination of steps will accomplish this task? (Select TWO.)
A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.
公司的Web服务器在具有弹性IP地址的公共子网中的Amazon EC2实例上运行。默认安全组已分配给EC2实例。
默认网络ACL已修改为阻止所有流量。解决方案架构师需要使Web服务器可以从端口443上的任何位置访问。哪种步骤组合可以完成此任务? (选择两个。)
A. 创建一个安全组,添加规则允许来自源0.0.0.0/0的TCP端口443。
B. 创建一个安全组,添加规则允许到目标0.0.0.0/0的TCP端口443。
C. 更新网络ACL,允许来自源0.0.0.0/0的TCP端口443。
D. 更新网络ACL,允许从源0.0.0.0/0到目标0.0.0.0/0的入站/出站TCP端口443。
E. 更新网络ACL,允许来自源0.0.0.0/0的入站TCP端口443,以及到目标0.0.0.0/0的出站TCP端口32768-65535。

Answer: AE

A&E https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Ensure that you place the deny rules earlier in the table than the allow rules that open the wide range of ephemeral ports.

QUESTION 233
A company hosts its website on AWS. To address the highly variable demand, the company has
implemented Amazon EC2 Auto Scaling.
Management is concerned that the company is over- provisioning its infrastructure, especially at
the front end of the three-tier application.
A solutions architect needs to ensure costs are optimized without impacting performance.
What should the solutions architect do to accomplish this?
A. Use Auto Scaling with Reserved Instances.
B. Use Auto Scaling with a scheduled scaling policy.
C. Use Auto Scaling with the suspend-resume feature
D. Use Auto Scaling with a target tracking scaling policy.
Answer: D
一家公司在AWS上托管其网站。为了满足高度变化的需求,该公司实施了Amazon EC2 Auto Scaling。
管理层担心该公司过度配置了基础架构,尤其是在三层应用程序的前端。
解决方案架构师需要确保在不影响性能的情况下优化成本。解决方案架构师应该怎么做才能做到这一点? 
A. 将Auto Scaling与预留实例一起使用。
B. 将Auto Scaling与计划扩展策略一起使用。
C. 将Auto Scaling与暂停-恢复功能一起使用。
D. 将Auto Scaling与目标跟踪扩展策略一起使用。

Explanation: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html

使用目标跟踪缩放策略,您可以选择缩放指标并设置目标值。Amazon EC2 Auto Scaling创建和管理CloudWatch警报,这些警报触发扩展策略并根据指标和目标值计算扩展调整。缩放策略可根据需要添加或删除容量,以将指标保持在指定的目标值或接近指定的目标值。除了使度量接近目标值外,目标跟踪缩放策略还根据负载模式的变化来调整度量的变化。

例如,您可以使用目标跟踪缩放比例来:

  • 配置目标跟踪扩展策略,以使Auto Scaling组的平均总CPU利用率保持在40%。
  • 配置目标跟踪扩展策略,以将Auto Scaling组的Application Load Balancer目标组的每个目标的请求计数保持在1000。

根据您的应用程序需求,您可能会发现这些流行的扩展指标之一在使用目标跟踪时最适合您,或者您可能发现这些指标的组合或其他指标可以更好地满足您的需求。
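作为参考,下面是把Auto Scaling组平均CPU利用率维持在40%的目标跟踪配置示例(即传给 put-scaling-policy 的 TargetTrackingConfiguration,数值仅为示例):

```json
{
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  },
  "TargetValue": 40.0
}
```

把 PredefinedMetricType 换成 ALBRequestCountPerTarget(并指定 ResourceLabel)即可实现上文第二个例子。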

QUESTION 234
A company is concerned that two NAT instances in use will no longer be able to support the
traffic needed for the company's application.
A solutions architect wants to implement a solution that is highly available, fault tolerant, and
automatically scalable.
What should the solutions architect recommend?
一家公司担心两个正在使用的NAT实例将不再能够支持
公司应用所需的流量。
解决方案架构师希望实施高度可用的容错解决方案,并且
自动扩展。
解决方案架构师应该建议什么?

A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability
Zone.
B.Use Auto Scaling groups with Network Load Balancers for the NAT instances in different
Availability Zones.
C. Remove the two NAT instances and replace them with two NAT gateways in different Availability
Zones.
D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a
Network Load Balancer.

A. 删除两个NAT实例,并在同一可用区中用两个NAT网关替换它们。
B. 为不同可用区中的NAT实例使用带网络负载均衡器(Network Load Balancer)的Auto Scaling组。
C. 删除两个NAT实例,并在不同可用区中用两个NAT网关替换它们。
D. 用不同可用区中的竞价型实例替换这两个NAT实例,并部署一个网络负载均衡器。

Answer: C

C is correct. One NAT gateway is required in each Availability Zone so that the loss of a single AZ does not break outbound connectivity. 每个可用区中都需要一个NAT网关;NAT网关是AWS托管的按可用区部署的服务,相比NAT实例无需自行维护可用性和扩展。

QUESTION 235
A solutions architect is working on optimizing a legacy document management application
running on Microsoft Windows Server in an on-premises data center.
The application stores a large number of files on a network file share.
The chief information officer wants to reduce the on-premises data center footprint and minimize
storage costs by moving on-premises storage to AWS.
What should the solutions architect do to meet these requirements?
A. Set up an AWS Storage Gateway file gateway.
B. Set up Amazon Elastic File System (Amazon EFS).
C. Set up AWS Storage Gateway as a volume gateway.
D. Set up an Amazon Elastic Block Store (Amazon EBS) volume.
Answer: A
解决方案架构师正在优化本地数据中心中运行在Microsoft Windows Server上的旧版文档管理应用程序。
该应用程序将大量文件存储在网络文件共享上。首席信息官希望通过将本地存储移至AWS来减少本地数据中心的占地面积并最大程度降低存储成本。
解决方案架构师应怎么做才能满足这些要求? 
A. 设置AWS Storage Gateway文件网关。
B. 设置Amazon Elastic File System(Amazon EFS)。
C. 将AWS Storage Gateway设置为卷网关。
D. 设置Amazon Elastic Block Store(Amazon EBS)卷。
QUESTION 236
A company is processing data on a daily basis.
The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week,
and then must remain immediately accessible for occasional analysis
What is the MOST cost- effective storage solution alternative to the current configuration?
A. Configure a lifecycle policy to delete the objects after 30 days
B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Answer: C
公司每天都在处理数据。操作结果存储在Amazon S3存储桶中,每天分析、持续一周,之后仍需保持可立即访问以便偶尔分析。
相对于当前配置,哪种存储方案最具成本效益?
A. 配置生命周期策略以在30天后删除对象。
B. 配置生命周期策略以在30天后将对象转换到Amazon S3 Glacier。
C. 配置生命周期策略以在30天后将对象过渡到Amazon S3标准-不频繁访问(S3 Standard-IA)。
D. 配置生命周期策略以在30天后将对象过渡到Amazon S3单区-不频繁访问(S3 One Zone-IA)。

现有方案是保留结果文件以供偶尔分析,推荐替代方案时要保持这一能力不受影响。答案应为C“S3 Standard-IA”。“S3 One Zone-IA”的问题在于,一旦该可用区发生故障,结果文件就有不可用的风险。
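选项C对应的生命周期配置大致如下(可通过 put-bucket-lifecycle-configuration 应用;规则ID和前缀仅为假设的示例):

```json
{
  "Rules": [
    {
      "ID": "MoveResultsToStandardIA",
      "Filter": { "Prefix": "results/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
```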

QUESTION 237
A recent analysis of a company's IT expenses highlights the need to reduce backup costs.
The company's chief information officer wants to simplify the on-premises backup infrastructure
and reduce costs by eliminating the use of physical backup tapes.
The company must preserve the existing investment in the on-premises backup applications and
workflows.
What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS
interface
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI
interface
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual
tape library (VTL) interface.
Answer: D
最近对公司的IT支出进行的分析表明,需要降低备份成本。该公司的首席信息官希望通过消除使用物理备份磁带来简化本地备份基础架构并降低成本。
公司必须保留在本地备份应用程序和工作流程中的现有投资。解决方案架构师应该建议什么?
A. 设置AWS Storage Gateway,通过NFS接口与备份应用程序连接。
B. 设置Amazon EFS文件系统,通过NFS接口与备份应用程序连接。
C. 设置Amazon EFS文件系统,通过iSCSI接口与备份应用程序连接。
D. 设置AWS Storage Gateway,通过iSCSI虚拟磁带库(VTL)接口与备份应用程序连接。

Explanation: Tape Gateway提供iSCSI虚拟磁带库(VTL)接口,可直接对接现有备份软件,在淘汰物理磁带的同时保留原有的备份应用程序和工作流。

QUESTION 238
A company wants to replicate its data to AWS to recover in the event of a disaster.
Today, a system administrator has scripts that copy data to an NFS share.
Individual backup files need to be accessed with low latency by application administrators to deal with errors in processing.
What should a solutions architect recommend to meet these requirements?
A. Modify the script to copy data to an Amazon S3 bucket instead of the on-premises NFS share.
B. Modify the script to copy data to an Amazon S3 Glacier archive instead of the on-premises NFS share.
C. Modify the script to copy data to an Amazon Elastic File System (Amazon EFS) volume instead of the on-premises NFS share.
D. Modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share.
Answer: D
一家公司希望将其数据复制到AWS,以便在发生灾难时进行恢复。
目前,系统管理员使用脚本将数据复制到NFS共享。应用程序管理员需要以低延迟访问单个备份文件,以处理数据处理过程中的错误。
解决方案架构师应建议什么以满足这些要求?
A. 修改脚本,将数据复制到Amazon S3存储桶,而不是本地NFS共享。
B. 修改脚本,将数据复制到Amazon S3 Glacier存档,而不是本地NFS共享。
C. 修改脚本,将数据复制到Amazon Elastic File System(Amazon EFS)卷,而不是本地NFS共享。
D. 修改脚本,将数据复制到AWS Storage Gateway文件网关虚拟设备,而不是本地NFS共享。
QUESTION 239
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings.
All application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for
the engineering drawings to load.
The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
A. Amazon S3 with Amazon CloudFront
B. Amazon S3 Glacier with Amazon ElastiCache
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
D. AWS Storage Gateway with Amazon ElastiCache
Answer: A
解决方案架构师正在为一个新的Web应用程序设计存储架构,该应用程序用于存储和查看工程图。
所有应用程序组件都将部署在AWS基础设施上。应用程序设计必须支持缓存,以最大程度地减少用户等待工程图加载的时间。
该应用程序必须能够存储PB级的数据。解决方案架构师应使用哪种存储和缓存组合?
A. Amazon S3配合Amazon CloudFront
B. Amazon S3 Glacier配合Amazon ElastiCache
C. Amazon Elastic Block Store(Amazon EBS)卷配合Amazon CloudFront
D. AWS Storage Gateway配合Amazon ElastiCache

Explanation: CloudFront for caching and S3 as the origin. Glacier is used for archiving which is not the case for this scenario.

QUESTION 240
A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions.
The company wants to create an allow list for the IPs of all the load balancers on its firewall device.
A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall.
What should the solutions architect recommend to meet these requirements?
A. Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions. Keep refreshing this list.
B. Set up a Network Load Balancer (NLB) with Elastic IPs.
Register the private IPs of all the ALBs as targets to this NLB.
C. Launch AWS Global Accelerator and create endpoints for all the Regions.
Register all the ALBs in different Regions to the corresponding endpoints.
D. Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.
Answer: C
一家开发Web应用程序的公司在多个区域中部署了数百个应用程序负载均衡器(ALB)。
该公司希望在其防火墙设备上为所有负载均衡器的IP创建一个允许列表。
解决方案架构师正在寻找一种一次性的、高可用的解决方案来满足这一要求,同时帮助减少防火墙需要放行的IP数量。
解决方案架构师应建议什么以满足这些要求?
A. 创建一个AWS Lambda函数来跟踪不同区域中所有ALB的IP,并不断刷新此列表。
B. 设置带弹性IP的网络负载均衡器(NLB),将所有ALB的私有IP注册为该NLB的目标。
C. 启用AWS Global Accelerator并为所有区域创建端点,将不同区域中的所有ALB注册到相应的端点。
D. 设置一个Amazon EC2实例,为其分配弹性IP,并将该实例配置为代理,把流量转发到所有ALB。

ELB在单个区域内提供负载均衡,而AWS Global Accelerator在多个区域之间提供流量管理。AWS Global Accelerator是对ELB的补充,它把这些能力扩展到单个AWS区域之外,让您可以为任意数量区域中的应用程序提供一个全局入口。如果您的工作负载面向全球客户,建议使用AWS Global Accelerator;如果工作负载只托管在单个AWS区域中、并且主要由该区域内及周边的客户端使用,则可以直接使用Application Load Balancer或Network Load Balancer来管理资源。

为端点组注册端点:在每个端点组中注册一个或多个区域资源,例如应用程序负载平衡器,网络负载平衡器,EC2实例或弹性IP地址。然后,您可以设置权重以选择路由到每个端点的流量。

QUESTION 241
A company recently implemented hybrid cloud connectivity using AWS Direct Connect and is
migrating data to Amazon S3.
The company is looking for a fully managed solution that will automate and accelerate the
replication of data between the on-premises storage systems and AWS storage services.
Which solution should a solutions architect recommend to keep the data private?
A. Deploy an AWS DataSync agent for the on-premises environment.
Configure a sync job to replicate the data and connect it with an AWS service endpoint.
B. Deploy an AWS DataSync agent for the on-premises environment.
Schedule a batch job to replicate point-in-time snapshots to AWS.
C. Deploy an AWS Storage Gateway volume gateway for the on-premises environment.
Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.
D. Deploy an AWS Storage Gateway file gateway for the on-premises environment.
Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.
Answer: A
一家公司最近使用AWS Direct Connect实现了混合云连接,并且正在
将数据迁移到Amazon S3。
该公司正在寻找一种完全托管的解决方案,该解决方案可以自动化并加速
在本地存储系统和AWS存储服务之间复制数据。
解决方案架构师应建议哪种解决方案来保持数据私密?
A. 为本地环境部署一个AWS DataSync代理。
配置同步作业以复制数据,并将其连接到AWS服务终端节点。
B. 为本地环境部署一个AWS DataSync代理。
计划批处理作业,将时间点快照复制到AWS。
C. 为本地环境部署一个AWS Storage Gateway卷网关。
将其配置为在本地存储数据,并将时间点快照异步备份到AWS。
D. 为本地环境部署一个AWS Storage Gateway文件网关。
将其配置为在本地存储数据,并将时间点快照异步备份到AWS。

Explanation: You can use AWS DataSync with your Direct Connect link to access public service endpoints or private VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and AWS services does not traverse the public internet or need public IP addresses, increasing the security of data as it is copied over the network.

您可以将AWS DataSync与Direct Connect链接一起使用以访问公共服务终端节点或私有VPC终端节点。使用VPC终端节点时,在DataSync代理和AWS服务之间传输的数据不会遍历公共互联网或需要公共IP地址,从而提高了数据在网络上复制时的安全性。

AWS DataSync使您可以轻松地通过网络在本地存储和AWS存储服务之间传输数据。DataSync自动执行数据传输过程和高性能,安全数据传输所需的基础结构的管理。DataSync还包括加密和完整性验证,因此您的数据可以安全,完整地传输并可以使用。所有这些都最大限度地减少了快速,可靠和安全的传输所需的内部开发和管理。

QUESTION 242
A company has an on-premises data center that is running out of storage capacity.
The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth
costs,
The solution must allow for immediate retrieval of data at no additional cost.

How can these requirements be met?
A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval.
Enable provisioned retrieval capacity for the workload
B. Deploy AWS Storage Gateway using cached volumes.
Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed
data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally.
Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon
S3
D. Deploy AWS Direct Connect to connect with the on-premises data center.
Configure AWS Storage Gateway to store data locally.
Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
公司的本地数据中心存储容量不足。该公司希望将其存储基础架构迁移到AWS,同时最大程度地减少带宽费用。
该解决方案必须允许立即检索数据,且不产生额外费用。如何满足这些要求?
A. 部署Amazon S3 Glacier Vault并启用快速检索,为工作负载启用预置检索容量。
B. 使用缓存卷(cached volumes)部署AWS Storage Gateway,在Amazon S3中存储数据,同时在本地保留经常访问的数据子集的副本。
C. 使用存储卷(stored volumes)部署AWS Storage Gateway以在本地存储数据,并使用Storage Gateway将数据的时间点快照异步备份到Amazon S3。
D. 部署AWS Direct Connect连接本地数据中心,配置AWS Storage Gateway在本地存储数据,并使用Storage Gateway将数据的时间点快照异步备份到Amazon S3。

关键点:最大程度地减少带宽费用。

在存储模式下,您的主要数据存储在本地,并且整个数据集可用于低延迟访问,同时可以异步备份到AWS

Answer: C Explanation: Volume Gateway provides an iSCSI target, which enables you to create block storage volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The Volume Gateway runs in either a cached or stored mode. In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access. In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.

QUESTION 243
A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization.
A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.
What should the solutions architect do to accomplish this?
A. Enable AWS Config service with the appropriate rules
B. Enable AWS Trusted Advisor with the appropriate checks.
C. Write a script using an AWS SDK to generate a bucket report
D. Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.
Answer: A
一家公司正在审查其AWS云部署,以确保数据不会被未经授权的人访问。
解决方案架构师的任务是识别所有公开(open)的Amazon S3存储桶,并记录所有S3存储桶配置更改。
解决方案架构师应该怎么做才能做到这一点?
A.使用适当的规则启用AWS Config服务
B.通过适当的检查启用AWS Trusted Advisor。
C.使用AWS开发工具包编写脚本以生成存储桶报告
D.启用Amazon S3服务器访问日志记录并配置Amazon CloudWatch Events。

Explanation:

AWS Config :

- Helps with auditing and recording compliance of your AWS resources
- Helps record configurations and changes over time
- Possibility of storing the configuration data into S3 (analyzed by Athena)
- Questions that can be solved by AWS Config:
  - Is there unrestricted SSH access to my security groups?
  - Do my buckets have any public access?
  - How has my ALB configuration changed over time?
- You can receive alerts (SNS notifications) for any changes
- AWS Config is a per-region service
- Can be aggregated across regions and accounts
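例如,检查“存储桶是否公开读取”可以直接用AWS Config的托管规则,下面是一个CloudFormation示意片段(规则名可自定义,前提是账户中已启用Config的configuration recorder;存储桶配置变更历史由Config自动记录):

```json
{
  "Resources": {
    "S3PublicReadCheck": {
      "Type": "AWS::Config::ConfigRule",
      "Properties": {
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
          "Owner": "AWS",
          "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
        }
      }
    }
  }
}
```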

QUESTION 244
A company built an application that lets users check in to places they visit, rank the places, and add reviews about their experiences.
The application is successful, with a rapid increase in the number of users every month.
The chief technology officer fears the database supporting the current infrastructure may not handle the new load the following month, because the single Amazon RDS for MySQL instance has triggered alarms related to resource exhaustion due to read requests.
What can a solutions architect recommend to prevent service interruptions at the database layer with minimal changes to code?
A. Create RDS read replicas and redirect read-only traffic to the read replica endpoints.
Enable a Multi-AZ deployment.
B. Create an Amazon EMR cluster and migrate the data to a Hadoop Distributed File System
(HDFS) with a replication factor of 3.
C. Create an Amazon ElastiCache cluster and redirect all read-only traffic to the cluster.
Set up the cluster to be deployed in three Availability Zones.
D. Create an Amazon DynamoDB table to replace the RDS instance and redirect all read-only traffic
to the DynamoDB table.
Enable DynamoDB Accelerator to offload traffic from the main table.
Answer: A
一家公司构建了一个应用程序,该应用程序使用户可以签入他们访问过的地方,对地方进行排名并
添加有关他们的经历的评论。
该应用程序成功,每月用户数量迅速增加。
首席技术官担心支持当前基础架构的数据库下个月可能无法承受新的负载,因为这台单一的Amazon RDS for MySQL实例已经因读取请求触发了与资源耗尽相关的告警。
解决方案架构师可以建议什么来防止数据库层的服务中断
用最少的代码更改?
A.创建RDS只读副本,并将只读流量重定向到只读副本端点。
启用多可用区部署。
B.创建一个Amazon EMR集群并将数据迁移到Hadoop分布式文件系统
(HDFS),复制因子为3。
C.创建一个Amazon ElastiCache集群,并将所有只读流量重定向到该集群。
设置要在三个可用区中部署的群集。
D.创建一个Amazon DynamoDB表来替换RDS实例并重定向所有只读流量
到DynamoDB表。
启用DynamoDB Accelerator以减轻主表的流量。
QUESTION 245
A company runs an application on Amazon EC2 Instances.
The application is deployed in private subnets in three Availability Zones of the us-east-1 Region.
The instances must be able to connect to the internet to download files.
The company wants a design that is highly available across the Region.
Which solution should be implemented to ensure that there are no disruptions to Internet
connectivity?
A. Deploy a NAT Instance In a private subnet of each Availability Zone.
B. Deploy a NAT gateway in a public subnet of each Availability Zone.

C. Deploy a transit gateway in a private subnet of each Availability Zone.
D. Deploy an internet gateway in a public subnet of each Availability Zone.
Answer: B
公司在Amazon EC2实例上运行应用程序。
该应用程序部署在us-east-1地区三个可用区中的专用子网中。
实例必须能够连接到互联网以下载文件。
该公司希望该设计在整个区域内具有高可用性。
应该实施哪种解决方案以确保不会中断Internet
连接性?
A.在每个可用区的专用子网中部署NAT实例。
B.在每个可用区的公共子网中部署NAT网关。
C.在每个可用区的专用子网中部署一个传输网关。
D.在每个可用区的公共子网中部署Internet网关。
QUESTION 246
A company has migrated an on-premises Oracle database to an Amazon RDS for Oracle Multi-AZ DB instance in the us-east-1 Region.
A solutions architect is designing a disaster recovery strategy to have the database provisioned in the us-west-2 Region in case the database becomes unavailable in the us-east-1 Region.
The design must ensure the database is provisioned in the us-west-2 Region in a maximum of 2 hours, with a data loss window of no more than 3 hours.
How can these requirements be met?
A. Edit the DB instance and create a read replica in us-west-2.
Promote the read replica to master In us- west-2 in case the disaster recovery environment needs
to be activated.
B. Select the multi-Region option to provision a standby instance in us-west-2.
The standby Instance will be automatically promoted to master In us-west-2 in case the disaster
recovery environment needs to be created.
C. Take automated snapshots of the database instance and copy them to us-west-2 every 3 hours.
Restore the latest snapshot to provision another database instance in us-west-2 in case the
disaster recovery environment needs to be activated.
D. Create a multimaster read/write instances across multiple AWS Regions Select VPCs in us-east-
1 and us-west-2 lo make that deployment.
Keep the master read/write instance in us-west-2 available to avoid having to activate a disaster
recovery environment,
Answer: A
一家公司已将本地Oracle数据库迁移到us-east-1区域的Amazon RDS for Oracle多可用区数据库实例。
解决方案架构师正在设计灾难恢复策略,以便在us-east-1区域的数据库不可用时,在us-west-2区域中预置该数据库。
设计必须确保在最多2小时内在us-west-2区域完成数据库预置,且数据丢失窗口不超过3小时。
如何满足这些要求?
A. 编辑数据库实例并在us-west-2中创建一个只读副本。当需要启用灾难恢复环境时,将只读副本提升为us-west-2中的主实例。
B. 选择多区域选项以在us-west-2中预置一个备用实例。当需要创建灾难恢复环境时,备用实例将自动提升为us-west-2中的主实例。
C. 为数据库实例创建自动快照,并每3小时将其复制到us-west-2。当需要启用灾难恢复环境时,恢复最新快照以在us-west-2中预置另一个数据库实例。
D. 跨多个AWS区域创建多主读/写实例,在us-east-1和us-west-2中选择VPC进行部署。保持us-west-2中的主读/写实例可用,以避免需要启用灾难恢复环境。

Amazon RDS for Oracle现在通过跨区域只读副本支持托管灾难恢复和数据就近访问。如果主数据库实例发生故障,可以将只读副本提升为独立实例,作为灾难恢复解决方案。

QUESTION 247
A company has an application with a REST-based Interface that allows data to be received in
near-real time from a third-party vendor.
Once received, the application processes and stores the data for further analysis.
The application is running on Amazon EC2 instances.
The third-party vendor has received many 503 Service Unavailable Errors when sending data to
the application.
When the data volume spikes, the compute capacity reaches its maximum limit and the
application is unable to process all requests.
Which design should a solutions architect recommend to provide a more scalable solution?
A. Use Amazon Kinesis Data Streams to ingest the data.
Process the data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application.
Create a usage plan with a quota limit for the third-party vendor.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data.
Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.
D. Repackage the application as a container.
Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2
launch type with an Auto Scaling group.
Answer: A
一家公司的应用程序带有基于REST的接口,可以近实时地接收第三方供应商发送的数据。
接收后,应用程序会处理并存储数据以进行进一步分析。该应用程序在Amazon EC2实例上运行。
第三方供应商在向该应用程序发送数据时收到了大量503 Service Unavailable错误。
当数据量激增时,计算容量达到上限,应用程序无法处理所有请求。
解决方案架构师应建议哪种设计来提供更具扩展性的解决方案?
A. 使用Amazon Kinesis Data Streams摄取数据,使用AWS Lambda函数处理数据。
B. 在现有应用程序之上使用Amazon API Gateway,并为第三方供应商创建带配额限制的使用计划。
C. 使用Amazon Simple Notification Service(Amazon SNS)摄取数据,并将EC2实例放入应用程序负载均衡器后面的Auto Scaling组中。
D. 将应用程序重新打包为容器,使用Amazon Elastic Container Service(Amazon ECS)的EC2启动类型和Auto Scaling组部署该应用程序。
QUESTION 248
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days.
The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization.
What should a solutions architect do to meet these requirements?
A. Use AWS Snowball.
B. Use AWS DataSync.
C. Use a secure VPN connection.
D. Use Amazon S3 Transfer Acceleration.
Answer: A
公司必须在30天内将20 TB的数据从数据中心迁移到AWS云。
该公司的网络带宽限制为15 Mbps,利用率不能超过70%。
解决方案架构师应该怎么做才能满足这些要求?
A.使用AWS Snowball。
B.使用AWS DataSync。
C.使用安全的VPN连接。
D.使用Amazon S3传输加速,
答:A
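可以粗略估算一下为什么只能选Snowball(假设30天内带宽全部可用于迁移):15 Mbps × 70% ≈ 10.5 Mbps ≈ 1.3 MB/s,30天最多传输约 1.3 MB/s × 86400 s × 30 ≈ 3.4 TB,远小于20 TB,因此在线传输方案(DataSync、VPN、Transfer Acceleration)都无法按期完成。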
QUESTION 249
A company recently deployed a two-tier application in two Availability Zones in the us-east-1
Region.
The databases are deployed in a private subnet while the web servers are deployed in a public
subnet.
An internet gateway is attached to the VPC. The application and database run on Amazon EC2
instances.
The database servers are unable to access patches on the internet.
A solutions architect needs to design a solution that maintains database security with the least
operational overhead.
Which solution meets these requirements?
A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an
Elastic IP address.
Update the routing table of the private subnet to use it as the default route.
B. Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with
an Elastic IP address.
Update the routing table of the private subnet to use it as the default route.
C. Deploy two NAT instances inside the public subnet for each Availability Zone and associate them
with Elastic IP addresses.
Update the routing table of the private subnet to use it as the default route.
D. Deploy two NAT instances inside the private subnet for each Availability Zone and associate them
with Elastic IP addresses.
Update the routing table of the private subnet to use it as the default route.
Answer: A
一家公司最近在us-east-1的两个可用区中部署了两层应用程序
地区。
数据库部署在专用子网中,而Web服务器部署在公共子网中
子网。
Internet网关已连接到VPC。该应用程序和数据库在Amazon EC2上运行
实例。
数据库服务器无法访问Internet上的补丁。
解决方案架构师需要设计一种以最少运营开销维护数据库安全性的解决方案。
哪种解决方案满足这些要求?
A.在每个可用区的公共子网内部署NAT网关,并将其与
弹性IP地址。
更新专用子网的路由表以将其用作默认路由。
B.在每个可用区的专用子网内部署一个NAT网关,并将其与
弹性IP地址。
更新专用子网的路由表以将其用作默认路由。
C.在每个可用区的公共子网内部署两个NAT实例,并将它们关联
具有弹性IP地址。
更新专用子网的路由表以将其用作默认路由。
D.在每个可用区的专用子网内部署两个NAT实例,并将它们关联
具有弹性IP地址。
更新专用子网的路由表以将其用作默认路由。

Explanation:

NAT Gateway

- AWS managed NAT: higher bandwidth, better availability, no administration
- Pay by the hour for usage and bandwidth
- NAT gateway is created in a specific AZ and uses an Elastic IP
- Cannot be used by an instance in the same subnet (only from other subnets)
- Requires an IGW (private subnet => NAT gateway => IGW)
- 5 Gbps of bandwidth with automatic scaling up to 45 Gbps
- No security groups to manage or required

如果是从私有子网中的机器连接到 Internet,可以使用 NAT(Network Address Translation)网关,NAT 网关是 AWS 的一项服务,其需要被放置在公有子网中。创建出 NAT 网关后,我们需要把 NAT 网关配置到这个私有子网所关联的路由表中:将所有 Internet 的网络请求路由给了 NAT 网关,之后 NAT 网关会紧接着转发这个请求。NAT 网关转发的网络请求又根据公有子网的路由表规则路由给 Internet 网关,由 Internet 网关转发向网络目标。由此便实现了私有子网向 Internet 的通信。

和 NAT 网关类似的还有 NAT 实例,它们不同的地方在于,NAT 实例创建后其背后的机器在 EC2(AWS 虚拟机服务)列表中是可见的,你甚至可以同时将它作为堡垒机来用,缺点在于该实例是单点的,无法保证高可用。NAT 网关作为 AWS 服务,其背后实例不可见,但 AWS 会为此保证可用性。

QUESTION 250
A solutions architect must design a solution for a persistent database that is being migrated from
on- premises to AWS,
The database requires 64,000 IOPS according to the database administrator.
If possible, the database administrator wants to use a single Amazon Elastic Block Store
(Amazon EBS) volume to host the database instance.
Which solution effectively meets the database administrator's criteria?
A. Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement.
B. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS.
C. Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance
and use the volume to achieve the required IOPS for the database.
D. Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating
system level that aggregates both volumes to achieve the IOPS requirements.
Answer: B
解决方案架构师必须为正在从本地迁移到AWS的持久数据库设计解决方案,
根据数据库管理员的说法,该数据库需要64,000 lOPS。
如果可能,数据库管理员希望使用一个Amazon Elastic Block Store
(Amazon EBS)卷来托管数据库实例。
哪种解决方案有效满足数据库管理员的条件?
A. 使用I3 I/O优化系列的实例,并利用本地临时存储来满足IOPS要求。
B. 创建基于Nitro的Amazon EC2实例,并附加Amazon EBS预置IOPS SSD(io1)卷,将卷配置为64,000 IOPS。
C. 创建一个Amazon Elastic File System(Amazon EFS)卷并将其映射到数据库实例,使用该卷来满足数据库所需的IOPS。
D. 预置两个卷,每个卷分配32,000 IOPS,并在操作系统级别创建逻辑卷,将两个卷聚合以满足IOPS要求。

Explanation:

EBS - Volume Types Summary:
- gp2: General Purpose SSD (cheap), 3 IOPS/GiB, minimum 100 IOPS, burst to 3,000 IOPS, max 16,000 IOPS, 1 GiB-16 TiB (+1 TB = +3,000 IOPS)
- io1: Provisioned IOPS SSD (expensive), min 100 IOPS, max 64,000 IOPS (Nitro) or 32,000 (other), 4 GiB-16 TiB, size of volume and IOPS are independent
- st1: Throughput Optimized HDD, 500 GiB-16 TiB, 500 MiB/s throughput
- sc1: Cold HDD for infrequently accessed data, 500 GiB-16 TiB, 250 MiB/s throughput
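选项B对应的卷定义大致如下(CloudFormation示意草稿,可用区与容量仅为示例;io1要求容量与IOPS满足1:50的比例,64,000 IOPS也需要把卷挂到基于Nitro的实例上才能达到):

```json
{
  "Resources": {
    "DatabaseVolume": {
      "Type": "AWS::EC2::Volume",
      "Properties": {
        "AvailabilityZone": "us-east-1a",
        "Size": 1300,
        "VolumeType": "io1",
        "Iops": 64000
      }
    }
  }
}
```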

QUESTION 251
A company recently launched its website to serve content to its global user base.
The company wants to store and accelerate the delivery of static content to its users by
leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin.
How should a solutions architect optimize high availability for the application?
A. Use Lambda@Edge for CloudFront.
B. Use Amazon S3 Transfer Acceleration for CloudFront.
C. Configure another EC2 instance in a different Availability Zone as part of the origin group.
D. Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.
Answer: A
一家公司最近上线了网站,向其全球用户群提供内容。该公司希望利用Amazon CloudFront(以Amazon EC2实例作为源站)来存储静态内容并加速向用户的分发。
解决方案架构师应如何优化应用程序的高可用性?
A. 将Lambda@Edge用于CloudFront。
B. 将Amazon S3 Transfer Acceleration用于CloudFront。
C. 在另一个可用区中配置另一个EC2实例,作为源站组(origin group)的一部分。
D. 在同一可用区中配置另一个EC2实例,作为源站服务器集群的一部分。
答案:A
QUESTION 252
A company is planning to build a new web application on AWS.
The company expects predictable traffic most of the year and very high traffic on occasion.
The web application needs to be highly available and fault tolerant with minimal latency.
What should a solutions architect recommend to meet these requirements?
A. Use an Amazon Route 53 routing policy to distribute requests to two AWS Regions, each with
one Amazon EC2 instance.
B. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across
multiple Availability Zones.
C. Use Amazon EC2 instances in a cluster placement group with an Application Load Balancer
across multiple Availability Zones.
D. Use Amazon EC2 instances in a cluster placement group and include the cluster placement
group within a new Auto Scaling group.
Answer: B
一家公司计划在AWS上构建新的Web应用程序。该公司预计一年中大部分时间流量可预测,偶尔会出现非常高的流量。
Web应用程序必须具有高可用性和容错能力,并且延迟最小。
解决方案架构师应建议什么以满足这些要求?
A. 使用Amazon Route 53路由策略将请求分发到两个AWS区域,每个区域一个Amazon EC2实例。
B. 在跨多个可用区的Application Load Balancer后面,使用Auto Scaling组中的Amazon EC2实例。
C. 在跨多个可用区的Application Load Balancer后面,使用集群放置组中的Amazon EC2实例。
D. 在集群放置组中使用Amazon EC2实例,并将该集群放置组纳入新的Auto Scaling组。
QUESTION 253
A company wants to migrate a workload to AWS.

The chief information security officer requires that all data be encrypted at rest when stored in the
cloud.
The company wants complete control of encryption key lifecycle management.
The company must be able to immediately remove the key material and audit key usage
independently of AWS CloudTrail.
The chosen services should integrate with other storage services that will be used on AWS.
Which services satisfies these security requirements?
A. AWS CloudHSM with the CloudHSM client
B. AWS Key Management Service (AWS KMS) with AWS CloudHSM
C. AWS Key Management Service (AWS KMS) with an external key material origin
D. AWS Key Management Service (AWS KMS) with AWS managed customer master keys (CMKs)
Answer: B
一家公司希望将工作负载迁移到AWS。首席信息安全官要求所有数据存储在云中时必须进行静态加密。
该公司希望完全控制加密密钥的生命周期管理,必须能够立即删除密钥材料,并能独立于AWS CloudTrail审计密钥的使用情况。
所选服务应能与将在AWS上使用的其他存储服务集成。哪种服务满足这些安全要求?
A. AWS CloudHSM与CloudHSM客户端
B. 与AWS CloudHSM配合使用的AWS Key Management Service(AWS KMS)
C. 使用外部密钥材料来源的AWS Key Management Service(AWS KMS)
D. 使用AWS托管客户主密钥(CMK)的AWS Key Management Service(AWS KMS)

Explanation: Took a bit of reading. Key points in question: "The company must be able to immediately remove the key material and audit key usage independently" and "The chosen services should integrate with other storage services that will be used on AWS". Point 1: Q: Can I use CloudHSM to store keys or encrypt data used by other AWS services? Ans: Yes. You can do all encryption in your CloudHSM-integrated application. In this case, AWS services such as Amazon S3 or Amazon Elastic Block Store (EBS) would only see your data encrypted. Point 2: AWS manages the hardware security module (HSM) appliance, but does not have access to your keys. You control and manage your own keys. Ref: https://aws.amazon.com/cloudhsm/features/ Ref: https://aws.amazon.com/cloudhsm/faqs/

QUESTION 254
A company is looking for a solution that can store video archives in AWS from old news footage.
The company needs to minimize costs and will rarely need to restore these files.
When the files are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Answer: A
一家公司正在寻找解决方案,把旧新闻素材中的视频档案存储到AWS。该公司需要将成本降到最低,并且很少需要取回这些文件。
需要这些文件时,必须能在最多五分钟内取回。什么是最具成本效益的解决方案?
A. 将视频档案存储在Amazon S3 Glacier中,并使用快速(Expedited)检索。
B. 将视频档案存储在Amazon S3 Glacier中,并使用标准(Standard)检索。
C. 将视频档案存储在Amazon S3标准-不频繁访问(S3 Standard-IA)中。
D. 将视频档案存储在Amazon S3单区-不频繁访问(S3 One Zone-IA)中。

you can use Expedited retrievals to access data in 1 – 5 minutes for a flat rate of $0.03 per GB retrieved

QUESTION 255
A company wants to use Amazon S3 for the secondary copy of its on-premises dataset.
The company would rarely need to access this copy,
The storage solution's cost should be minimal.
Which storage solution meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Answer: D
一家公司希望将Amazon S3用作其本地数据集的二级副本。该公司很少需要访问此副本,存储解决方案的成本应尽量低。
哪种存储解决方案满足这些要求?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3标准-不频繁访问(S3 Standard-IA)
D. S3单区-不频繁访问(S3 One Zone-IA)
QUESTION 256
A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for
each of its developer accounts.
The company has created a central AWS account for streamlining management and audit
reviews.
An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all
developer account users.
The solution must be secure and optimized. How should a solutions architect meet these
requirements?
A. Configure an AWS Lambda function in each developer account to copy the log files to the central
account.
Create an IAM role in the central account for the auditor.
Attach an IAM policy providing read- only permissions to the bucket.
B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the
central account.
Create an IAM user in the central account for the auditor.
Attach an IAM policy providing full permissions to the bucket.
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the
central account.
Create an IAM role in the central account for the auditor.
Attach an IAM policy providing read- only permissions to the bucket.
D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account.
Create an IAM user in the central account for the auditor.
Attach an IAM policy providing full permissions to the bucket.
Answer: C
一家公司为其每个开发者账户启用了AWS CloudTrail日志,将日志文件传送到Amazon S3存储桶。该公司创建了一个中央AWS账户,用于简化管理和审计评审。
内部审计员需要访问CloudTrail日志,但必须对所有开发者账户用户限制访问。解决方案必须安全且经过优化。解决方案架构师应如何满足这些要求?
A. 在每个开发者账户中配置一个AWS Lambda函数,将日志文件复制到中央账户。在中央账户中为审计员创建IAM角色,并附加提供存储桶只读权限的IAM策略。
B. 配置每个开发者账户的CloudTrail,将日志文件传送到中央账户的S3存储桶。在中央账户中为审计员创建IAM用户,并附加提供存储桶完全权限的IAM策略。
C. 配置每个开发者账户的CloudTrail,将日志文件传送到中央账户的S3存储桶。在中央账户中为审计员创建IAM角色,并附加提供存储桶只读权限的IAM策略。
D. 在中央账户中配置AWS Lambda函数,从每个开发者账户的S3存储桶复制日志文件。在中央账户中为审计员创建IAM用户,并附加提供存储桶完全权限的IAM策略。
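选项C中,中央账户的S3存储桶需要允许各开发者账户的CloudTrail写入日志,存储桶策略的常见写法大致如下(存储桶名 central-trail-logs 与 DEVELOPER-ACCOUNT-ID 均为占位符):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::central-trail-logs"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::central-trail-logs/AWSLogs/DEVELOPER-ACCOUNT-ID/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```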
QUESTION 257
A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation.
The company has a service level agreement (SLA) that specifies the maximum amount of time
that can elapse between receiving the messages and responding to the users.
Due to an increase in the number of messages the company has difficulty meeting its SLA
consistently.
What should a solutions architect do to help improve the application's processing time and ensure
it can handle the load at any level?
A. Create an Amazon Machine Image (AMI) from the instance used for processing,
Terminate the instance and replace it with a larger size.
B. Create an Amazon Machine Image (AMI) from the instance used for processing.
Terminate the instance and replace it with an Amazon EC2 Dedicated Instance
C. Create an Amazon Machine image (AMI) from the instance used for processing,
Create an Auto Scaling group using this image in its launch configuration.
Configure the group with a target tracking policy to keep the aggregate CPU utilization below 70%.
D. Create an Amazon Machine Image (AMI) from the instance used for processing.
Create an Auto Scaling group using this image in its launch configuration.
Configure the group with a target tracking policy based on the age of the oldest message in the
SQS queue.
一家公司有一个将消息发布到Amazon SQS的应用程序,另一个应用程序轮询该队列并以I/O密集型操作处理消息。
该公司有一份服务级别协议(SLA),规定了从接收消息到响应用户之间允许的最长时间。
由于消息数量增加,公司难以持续满足其SLA。解决方案架构师应采取什么措施来帮助缩短应用程序的处理时间,并确保它可以应对任何水平的负载?
A. 从用于处理的实例创建Amazon Machine Image(AMI),终止该实例并用更大规格的实例替换。
B. 从用于处理的实例创建Amazon Machine Image(AMI),终止该实例并用Amazon EC2专用实例替换。
C. 从用于处理的实例创建Amazon Machine Image(AMI),在启动配置中使用该映像创建一个Auto Scaling组,并用目标跟踪策略配置该组,使总CPU使用率保持在70%以下。
D. 从用于处理的实例创建Amazon Machine Image(AMI),在启动配置中使用该映像创建一个Auto Scaling组,并基于SQS队列中最旧消息的年龄为该组配置目标跟踪策略。

Answer: C

QUESTION 258
A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora.
The company has a backup retention policy requirement of 90 days.
Which solution should a solutions architect recommend?
A. Set the backup retention period to 90 days when creating the RDS DB instance
B. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a
lifecycle policy set to delete after 90 days.
C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention
set to 90 days.
Create an AWS Backup job to schedule the execution of the backup plan daily
D. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
Answer: B
一家公司计划部署运行Amazon Aurora的Amazon RDS数据库实例。
该公司的备份保留政策要求为90天。
解决方案架构师应建议哪种解决方案?
A. 在创建RDS数据库实例时,将备份保留期设置为90天。
B. 配置RDS将自动快照复制到用户管理的Amazon S3存储桶,并设置90天后删除的生命周期策略。
C. 创建一个AWS Backup计划,每天对RDS数据库做快照并将保留期设置为90天;创建AWS Backup作业以安排每天执行该备份计划。
D. 使用Amazon CloudWatch Events的每日计划事件执行自定义AWS Lambda函数,该函数复制RDS自动快照并清除超过90天的快照。
QUESTION 259
A company is using a tape backup solution to store its key application data offsite.
The daily data volume is around 50 TB.
The company needs to retain the backups for 7 years for regulatory purposes.
The backups are rarely accessed, and a week's notice is typically given if a backup needs to be restored.
The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes.
The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions.
Which storage solution is MOST cost-effective?
A. Use Amazon Storage Gateway to back up to Amazon Glacier Deep Archive
B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.
C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3
Glacier
D. Use Amazon Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the
backup to Amazon S3 Glacier
Answer: A
一家公司正在使用磁带备份解决方案将其关键应用程序数据存储在异地。每日数据量约为50 TB。
出于监管目的,该公司需要将备份保留7年。备份很少被访问,如果需要恢复备份,通常会提前一周通知。
该公司现在正在考虑基于云的方案,以降低存储成本和管理磁带的运营负担,同时希望从磁带备份到云的过渡尽量不造成中断。
哪种存储解决方案最具成本效益?
A.使用Amazon Storage Gateway备份到Amazon Glacier Deep Archive
B.使用AWS Snowball Edge直接将备份与Amazon S3 Glacier集成。
C.将备份数据复制到Amazon S3并创建生命周期策略以将数据移动到Amazon S3
冰川
D.使用Amazon Storage Gateway备份到Amazon S3并创建生命周期策略以移动
备份到Amazon S3 Glacier

AWS Storage Gateway服务现已将Tape Gateway与Amazon S3 Glacier Deep Archive存储类集成,使您可以将虚拟磁带存储在成本最低的Amazon S3存储类中,从而将长期数据存储在云中的每月成本最多降低75%。S3 Glacier Deep Archive是一种新的S3存储类,为长期数据保留和数字保存提供安全、持久的对象存储。借助此功能,Tape Gateway支持将新的虚拟磁带直接存档到S3 Glacier和S3 Glacier Deep Archive,帮助您满足备份、存档和恢复要求。

QUESTION 260
A company relies on an application that needs at least 4 Amazon EC2 instances during regular
traffic and must scale up to 12 EC2 instances during peak loads.
The application is critical to the business and must be highly available
Which solution will meet these requirements?
A. Deploy the EC2 instances in an Auto Scaling group.
Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability
Zone B
B. Deploy the EC2 instances in an Auto Scaling group.
Set the minimum to 4 and the maximum to 12, with all 4 in Availability Zone A
C. Deploy the EC2 instances in an Auto Scaling group.
Set the minimum to 8 and the maximum to 12, with 4 in Availability Zone A and 4 in Availability Zone B.
D. Deploy the EC2 instances in an Auto Scaling group.
Set the minimum to 8 and the maximum to 12 with all 8 in Availability Zone A
Answer: C

一家公司所依赖的应用程序在常规流量期间至少需要4个Amazon EC2实例,并且在高峰负载期间必须能扩展到12个EC2实例。
该应用程序对业务至关重要,必须具有高可用性。哪种解决方案可以满足这些要求?
A. 在Auto Scaling组中部署EC2实例。将最小值设置为4、最大值设置为12,可用区A中2个、可用区B中2个。
B. 在Auto Scaling组中部署EC2实例。将最小值设置为4、最大值设置为12,全部4个都在可用区A中。
C. 在Auto Scaling组中部署EC2实例。将最小值设置为8、最大值设置为12,可用区A中4个、可用区B中4个。
D. 在Auto Scaling组中部署EC2实例。将最小值设置为8、最大值设置为12,全部8个都在可用区A中。

Explanation: It requires HA and if one AZ is down then at least 4 instances will be active in another AZ which is key for this question.

QUESTION 261
A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing load balancers backed by application servers.
The application servers rely on patches from an internet-hosted repository.
Which services should a solutions architect recommend be hosted on the public subnet? (Select TWO.)
A. NAT gateway
B. Amazon RDS DB instances
C. Application Load Balancers
D. Amazon EC2 application servers
E. Amazon Elastic File System (Amazon EFS) volumes
Answer: AC
一家公司计划将其基于虚拟服务器的工作负载迁移到AWS。该公司有由应用程序服务器支持的面向Internet的负载均衡器,应用程序服务器依赖Internet托管的存储库来获取补丁。
解决方案架构师应建议在公共子网上托管哪些服务?(选择两项。)
A. NAT网关
B. Amazon RDS数据库实例
C. 应用程序负载均衡器(Application Load Balancer)
D. Amazon EC2应用程序服务器
E. Amazon Elastic File System(Amazon EFS)卷

Amazon已经发布了其新的负载均衡器产品,Application Load Balancer(ALB)。ALB是一种新型智能负载均衡器,对于那些运行基于HTTP的服务的用户来说,它可以显著地降低负载均衡的成本。

ALB 是位于OSI模型第七层的负载均衡器,因此它能根据网络包的内容将该网络包路由到不同的后端服务。现有的负载均衡器多是位于OSI模型第四层的 TCP/UDP均衡器。与这些均衡器不同的是,ALB将检查网络包的内容,并将该网络包发送给适当的服务。当前,ALB支持基于URL对路由流量定义多至 十条的独立规则。

ALB (Application Load Balancer) is a load-balancing service in AWS that distributes the load placed on web services. In recent years, traffic can suddenly concentrate on a web application, for example after something spreads on social media; such sudden spikes can slow page loads or cause errors. A load balancer like ALB spreads that load and improves the stability and availability of the web service, so its features help you operate web services continuously and effectively.

Among the benefits of ALB are:

- Support for achieving high availability
- Security features such as certificate management and user authentication
- Flexible handling of application load at various levels
- Fine-grained monitoring and auditing of applications

These benefits improve efficiency and practicality when operating complex web services.

QUESTION 262
An application is running on Amazon EC2 instances. Sensitive information required for the application is stored in an Amazon S3 bucket.
The bucket needs to be protected from internet access while only allowing services within the VPC access to the bucket.
Which combination of actions should a solutions architect take to accomplish this? (Select TWO.)
A. Create a VPC endpoint for Amazon S3.
B. Enable server access logging on the bucket
C. Apply a bucket policy to restrict access to the S3 endpoint.
D. Add an S3 ACL to the bucket that has sensitive information
E. Restrict users using the IAM policy to use the specific bucket
Answer: AC

一个应用程序正在Amazon EC2实例上运行,应用程序所需的敏感信息存储在Amazon S3存储桶中。
需要保护该存储桶不被互联网访问,同时只允许VPC内的服务访问该存储桶。
解决方案架构师应采取哪种操作组合来完成此任务?(选择两项。)
A. 为Amazon S3创建VPC终端节点。
B. 在存储桶上启用服务器访问日志记录。
C. 应用存储桶策略,限制只能通过该S3终端节点访问。
D. 为包含敏感信息的存储桶添加S3 ACL。
E. 使用IAM策略限制用户只能使用特定存储桶。

Explanation: An ACL is a property at the object level, not the bucket level. Also, just adding an ACL does not restrict bucket access to services within the VPC.
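A+C组合中的存储桶策略可以用 aws:SourceVpce 条件限制只能通过指定的VPC终端节点访问,大致写法如下(存储桶名与终端节点ID均为占位符;注意这种Deny对所有主体生效,控制台直接访问也会被拒绝):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessExceptFromVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::sensitive-data-bucket",
        "arn:aws:s3:::sensitive-data-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:SourceVpce": "vpce-1234567890abcdef0" }
      }
    }
  ]
}
```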

QUESTION 263
A solutions architect is designing a multi-Region disaster recovery solution for an application that
will provide public API access.
The application will use Amazon EC2 instances with a userdata script to load application code
and an Amazon RDS for MySQL database.
The Recovery Time Objective (RTO) is 3 hours and the Recovery Point Objective (RPO) is 24
hours.
Which architecture would meet these requirements at the LOWEST cost?
解决方案架构师正在为以下应用程序设计多区域灾难恢复解决方案:
将提供公共API访问。
该应用程序将使用带用户数据(userdata)脚本的Amazon EC2实例来加载应用程序代码,以及一个Amazon RDS for MySQL数据库。
恢复时间目标(RTO)为3小时,恢复点目标(RPO)为24
小时。
哪种架构能够以最低的成本满足这些要求?

A. Use an Application Load Balancer for Region failover.
Deploy new EC2 instances with the userdata script.
Deploy separate RDS instances in each Region
B. Use Amazon Route 53 for Region failover.
Deploy new EC2 instances with the userdata script.
Create a read replica of the RDS instance in a backup Region
C. Use Amazon API Gateway for the public APls and Region failover.
Deploy new EC2 instances with the userdata script.
Create a MySQL read replica of the RDS instance in a backup Region
D. Use Amazon Route 53 for Region failover.
Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup.
Replicate the snapshot to a backup Region.
Answer: D

A. 使用应用程序负载均衡器进行区域故障转移。
使用userdata脚本部署新的EC2实例。
在每个区域中部署单独的RDS实例
B.使用Amazon Route 53进行区域故障转移。
使用userdata脚本部署新的EC2实例。
在备份区域中创建RDS实例的只读副本
C.使用Amazon API Gateway进行公共APls和区域故障转移。
使用userdata脚本部署新的EC2实例。
在备份区域中创建RDS实例的MySQL只读副本
D. 使用Amazon Route 53进行区域故障转移。使用带userdata脚本的新EC2实例部署API,每天创建一次RDS实例的快照作为备份,并将快照复制到备份区域。
答案:D
QUESTION 264
A solutions architect is designing a new API using Amazon API Gateway that will receive
requests from users,
The volume of requests is highly variable, several hours can pass without receiving a single
request.
The data processing will take place asynchronously but should be completed within a few
seconds after a request is made
Which compute service should the solutions architect have the API invoke to deliver the
requirements at the lowest cost?
A. An AWS Glue job
B. An AWS Lambda function
C. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)
D. A containerized service hosted in Amazon ECS with Amazon EC2
Answer: B

解决方案架构师正在使用Amazon API Gateway设计一个新的API,用于接收来自用户的请求。
请求量变化很大,有时几个小时都收不到一个请求。
数据处理将异步进行,但应在请求发出后的几秒钟内完成。
为了以最低的成本满足要求,解决方案架构师应让API调用哪种计算服务?
A.AWS Glue作业
B.AWS Lambda函数
C.托管在Amazon Elastic Kubernetes服务(Amazon EKS)中的容器化服务
D.使用Amazon EC2托管在Amazon ECS中的容器化服务
QUESTION 265
A development team needs to host a website that will be accessed by other teams.
The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?
A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Answer: B
开发团队需要托管一个供其他团队访问的网站。网站内容由HTML、CSS、客户端JavaScript和图像组成。
托管该网站最具成本效益的方法是哪种?
A. 将网站容器化并托管在AWS Fargate中。
B. 创建一个Amazon S3存储桶并在其中托管网站。
C. 在Amazon EC2实例上部署Web服务器以托管网站。
D. 配置应用程序负载均衡器,使用基于Express.js框架的AWS Lambda目标。
QUESTION 266
A company has media and application files that need to be shared internally.
Users currently are authenticated using Active Directory and access files from a Microsoft
Windows platform.

The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit.
What should a solutions architect recommend?
A. Set up a corporate Amazon S3 bucket and move all media and application files.
B. Configure Amazon FSx for Windows File Server and move all the media and application files.
C. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files.
D. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS)
volumes and, and move all media and application files.
Answer: B
公司的媒体和应用程序文件需要在内部共享。
当前使用Active Directory对用户进行身份验证,并可以从Microsoft访问文件
Windows平台。

首席执行官希望保留相同的用户权限,但希望公司
在公司达到其存储容量极限时改进流程。
解决方案架构师应该建议什么?
A. 设置公司的Amazon S3存储桶并迁移媒体和应用程序文件。
B. 配置Amazon FSx for Windows File Server,并迁移所有媒体和应用程序文件。
C.配置Amazon Elastic File System(Amazon EFS)并移动所有媒体和应用程序文件。
D.在Windows上设置Amazon EC2,附加多个Amazon Elastic Block Store(Amazon EBS)
卷,然后移动所有媒体和应用程序文件。
QUESTION 267
A company is moving its legacy workload to the AWS Cloud.
The workload files will be shared, appended, and frequently accessed through Amazon EC2
instances when they are first created.
The files will be accessed occasionally as they age.
What should a solutions architect recommend?
A. Store the data using Amazon EC2 instances with attached Amazon Elastic Block Store (Amazon
EBS) data volumes
B. Store the data using AWS Storage Gateway volume gateway and export rarely accessed data to
Amazon S3 storage
C. Store the data using Amazon Elastic File System (Amazon EFS) with lifecycle management
enabled for rarely accessed data
D. Store the data using Amazon S3 with an S3 lifecycle policy enabled to move data to S3 Standard-Infrequent Access (S3 Standard-IA).
Answer: D
一家公司正在将其遗留工作负载迁移到AWS云。
工作负载文件在刚创建时会通过Amazon EC2实例被共享、追加和频繁访问,随着时间推移只会被偶尔访问。
解决方案架构师应该建议什么?
A. 使用附加了Amazon Elastic Block Store(Amazon EBS)数据卷的Amazon EC2实例存储数据。
B. 使用AWS Storage Gateway卷网关存储数据,并将很少访问的数据导出到Amazon S3存储。
C. 使用启用了生命周期管理(针对很少访问的数据)的Amazon Elastic File System(Amazon EFS)存储数据。
D. 使用Amazon S3存储数据,并启用S3生命周期策略将数据移动到S3标准-不频繁访问(S3 Standard-IA)。
QUESTION 268
A company is deploying a multi-instance application within AWS that requires minimal latency
between the instances.
What should a solutions architect recommend?
A. Use an Auto Scaling group with a cluster placement group.
B. Use an Auto Scaling group with single Availability Zone in the same AWS Region.
C. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region.
D. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets
Answer: A

一家公司正在AWS中部署需要最少延迟的多实例应用程序
在实例之间。
解决方案架构师应该建议什么?
A.将Auto Scaling组与群集放置组一起使用。
B.在同一AWS区域中的单个可用区中使用Auto Scaling组。
C.将Auto Scaling组与同一AWS区域中的多个可用区配合使用。
D.使用具有多个Amazon EC2专用主机的网络负载均衡器作为目标
QUESTION 269
A company receives structured and semi-structured data from various sources once every day,
A solutions architect needs to design a solution that leverages big data processing frameworks.
The data should be accessible using SQL queries and business intelligence tools.
What should the solutions architect recommend to build the MOST high-performing solution?
A. Use AWS Glue to process data and Amazon S3 to store data
B. Use Amazon EMR to process data and Amazon Redshift to store data
C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data
D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
Answer: B

公司每天一次从各种来源接收结构化和半结构化数据,
解决方案架构师需要设计一种利用大数据处理框架的解决方案。
可以使用SQL查询和商业智能工具访问数据。
解决方案架构师应建议什么来构建MOST高性能解决方案?
A.使用AWS Glue处理数据并使用Amazon S3存储数据
B.使用Amazon EMR处理数据并使用Amazon Redshift存储数据
C.使用Amazon EC2处理数据并使用Amazon Elastic Block Store(Amazon EBS)存储数据
D. 使用Amazon Kinesis Data Analytics处理数据,并使用Amazon Elastic File System(Amazon EFS)存储数据。

由于存在大数据问题,因此EMR将成为处理大数据的完美服务。

QUESTION 270
A company is designing a website that uses an Amazon S3 bucket to store static images.
The company wants all future requests to have faster response times while reducing both latency and cost.
Which service configuration should a solutions architect recommend?
A. Deploy a NAT server in front of Amazon S3.
B. Deploy Amazon CloudFront in front of Amazon S3.
C. Deploy a Network Load Balancer in front of Amazon S3.
D. Configure Auto Scaling to automatically adjust the capacity of the website.
Answer: B
公司正在设计一个使用Amazon S3存储桶存储静态图像的网站。
该公司希望所有未来的请求都具有品尝者的响应时间,同时减少两个延迟
和成本。
解决方案架构师应建议哪种服务配置?
A.在Amazon S3前面部署NAT服务器。
B.在Amazon S3前面部署Amazon CloudFront。
C.在Amazon S3前面部署网络负载均衡器。
D.配置自动缩放以自动调整网站的容量。
QUESTION 271
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket
are encrypted?
A. Update the bucket policy to deny if the PutObject request does not have an s3:x-amz-acl header set
B. Update the bucket policy to deny if the PutObject request does not have an s3:x-amz-acl header set to
private
C. Update the bucket policy to deny if the PutObject request does not have an aws:SecureTransport condition
set to true
D. Update the bucket policy to deny if the PutObject request does not have an x-amz-server-side-encryption
header set
Answer: D

解决方案架构师应该怎么做才能确保上传到Amazon S3存储桶的所有对象都经过加密?
A.更新存储桶策略,拒绝未设置s3:x-amz-acl标头的PutObject请求
B.更新存储桶策略,拒绝未将s3:x-amz-acl标头设置为private的PutObject请求
C.更新存储桶策略,拒绝未将aws:SecureTransport条件设置为true的PutObject请求
D.更新存储桶策略,拒绝未设置x-amz-server-side-encryption标头的PutObject请求

使用Amazon S3默认加密上传的没有加密标头(例如x-amz-server-side-encryption或x-amz-server-side-encryption-aws-kms-key-id)的对象确保使用AWS KMS对其进行加密(在将其存储在S3存储桶中之前)。然后使用存储桶策略来防止使用其他加密设置(AES-256)上传对象,并且使用AWS KMS加密上传的对象将包含您的AWS账户的密钥ID。
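下面是一个最简示例(假设存储桶名为 EXAMPLE-BUCKET,仅作示意,并非原题给出的完整策略),演示如何在存储桶策略中拒绝未携带 x-amz-server-side-encryption 标头的 PutObject 请求:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/*",
            "Condition": {
                "Null": { "s3:x-amz-server-side-encryption": "true" }
            }
        }
    ]
}
```

Null 条件为 true 表示请求中不存在该加密标头,因此会被拒绝;再配合 S3 默认加密,即可保证所有对象在落盘前都已加密。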

QUESTION 272
A company runs a high performance computing (HPC) workload on AWS.
The workload requires low-latency network performance and high network throughput with tightly
coupled node-to-node communication.
The Amazon EC2 instances are properly sized for compute and storage capacity, and are
launched using default options.
What should a solutions architect propose to improve the performance of the workload?
A. Choose a cluster placement group while launching Amazon EC2 instances
B. Choose dedicated instance tenancy while launching Amazon EC2 instances
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances
D. Choose the required capacity reservation while launching Amazon EC2 instances.
Answer: A
一家公司在AWS上运行高性能计算(HPC)工作负载。
工作负载要求低延迟的网络性能和高网络吞吐量,以及紧密耦合的节点间通信。
Amazon EC2实例的大小适合计算和存储容量,并且具有
使用默认选项启动。
解决方案架构师应提出什么建议来改善工作负载的性能?
A.启动Amazon EC2实例时选择一个集群放置组
B.启动Amazon EC2实例时选择专用实例租赁
C.在启动Amazon EC2实例时选择Elastic Inference加速器
D.在启动Amazon EC2实例时选择所需的容量预留。

工作负载要求低延迟的网络性能、高网络吞吐量以及紧密耦合的节点间通信,参照 cluster placement group(集群置放群组)。
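下面是一个 CloudFormation 片段示意(资源名与 AMI ID 均为假设值),展示如何创建 cluster 策略的置放群组,并在启动 EC2 实例时引用它:

```json
{
    "Resources": {
        "HpcPlacementGroup": {
            "Type": "AWS::EC2::PlacementGroup",
            "Properties": { "Strategy": "cluster" }
        },
        "HpcInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",
                "InstanceType": "c5n.18xlarge",
                "PlacementGroupName": { "Ref": "HpcPlacementGroup" }
            }
        }
    }
}
```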

QUESTION 273
A company's dynamic website is hosted using on-premises servers in the United States.
The company is launching its product in Europe and it wants to optimize site loading times for
new European users.
The site's backend must remain in the United States. The product is being launched in a few
days, and an immediate solution is needed
What should the solutions architect recommend?
A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it
B. Move the website to Amazon S3 Use cross-Region replication between Regions.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers
D. Use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers
Answer: C
公司的动态网站使用美国的本地服务器托管。
该公司正在欧洲推出其产品,并希望优化网站的加载时间,
新的欧洲用户。
该网站的后端必须保留在美国。该产品将在几天后发布,因此需要一个可立即使用的解决方案。
解决方案架构师应该建议什么?
A.在us-east-1中启动Amazon EC2实例并将网站迁移到该实例
B.将网站移至Amazon S3。在区域之间使用跨区域复制。
C.将Amazon CloudFront与指向本地服务器的自定义源(custom origin)一起使用
D.使用指向本地服务器的Amazon Route 53地理接近路由策略
QUESTION 274
A company is building a media-sharing application and decides to use Amazon S3 for storage.
When a media file is uploaded, the company starts a multi-step process to create thumbnails,
identify objects in the images, transcode videos into standard formats and resolutions and extract
and store the metadata to an Amazon DynamoDB table.
The metadata is used for searching and navigation. The amount of traffic is variable. The solution
must be able to scale to handle spikes in load without unnecessary expenses.
What should a solutions architect recommend to support this workload?
A. Build the processing into the website or mobile app used to upload the content to Amazon S3.
Save the required data to the DynamoDB table when the objects are uploaded
B. Trigger AWS Step Functions when an object is stored in the S3 bucket.
Have the Step Functions perform the steps needed to process the object and then write the
metadata to the DynamoDB table
C. Trigger an AWS Lambda function when an object is stored in the S3 bucket.
Have the Lambda function start AWS Batch to perform the steps to process the object.
Place the object data in the DynamoDB table when complete
D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object
is uploaded to Amazon S3.
Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for
unprocessed items, and use the program to perform the processing
Answer: C
一家公司正在构建媒体共享应用程序,并决定使用Amazon S3进行存储。
上载媒体文件后,公司将开始一步一步的过程来创建缩略图,
识别图像中的对象,将视频转码为标准格式和分辨率,然后提取
并将元数据存储到Amazon DynamoDB表。
元数据用于搜索和导航。流量是可变的,解决方案必须能够扩展以应对负载峰值,而不会产生不必要的支出。
解决方案架构师应建议什么来支持此工作负载?
A.将处理内置到用于将内容上传到Amazon S3的网站或移动应用程序中。
上载对象时,将所需数据保存到DynamoDB表中
B.当对象存储在S3存储桶中时,触发AWS Step Functions。
让“步骤功能”执行处理对象所需的步骤,然后编写
元数据到DynamoDB表
C.当对象存储在S3存储桶中时,触发AWS Lambda函数。
让Lambda函数启动AWS Batch以执行处理对象的步骤。
完成后将对象数据放置在DynamoDB表中
D.触发AWS Lambda函数以在对象存在时将初始条目存储在DynamoDB表中
已上传到Amazon S3。
使用在Auto Scaling组中的Amazon EC2实例上运行的程序轮询索引中未处理的条目,并使用该程序执行处理

without unnecessary expenses 所以选Lambda
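一个可能的落地方式(仅作示意,函数 ARN 为假设值)是为 S3 存储桶配置事件通知,在对象创建时触发 Lambda,例如通过 `aws s3api put-bucket-notification-configuration` 传入如下 JSON;Lambda 再提交 AWS Batch 作业执行处理,并在完成后写入 DynamoDB:

```json
{
    "LambdaFunctionConfigurations": [
        {
            "Id": "StartMediaProcessing",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:start-media-pipeline",
            "Events": [ "s3:ObjectCreated:*" ]
        }
    ]
}
```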

QUESTION 275
A company has recently updated its internal security standards.
The company must now ensure all Amazon S3 buckets and Amazon Elastic Block Store (Amazon
EBS) volumes are encrypted with keys created and periodically rotated by internal security
specialists.
The company is looking for a native, software-based AWS service to accomplish this goal.
What should a solutions architect recommend as a solution?
A. Use AWS Secrets Manager with customer master keys (CMKs) to store master key material and
apply a routine to create a new CMK periodically and replace it in AWS Secrets Manager.
B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store
master key material and apply a routine to re-create a new key periodically and replace it in AWS
KMS.
C. Use an AWS CloudHSM cluster with customer master keys (CMKs) to store master key material
and apply a routine to re-create a new key periodically and replace it in the CloudHSM cluster nodes.
D. Use AWS Systems Manager Parameter Store with customer master keys (CMKs) to store
master key material and apply a routine to re-create a new key periodically and replace it in the
Parameter Store.
Answer: A
一家公司最近更新了其内部安全标准。
公司现在必须确保使用内部安全专家创建并定期轮换的密钥对所有Amazon S3存储桶和Amazon Elastic Block Store(Amazon EBS)卷进行加密。
该公司正在寻找基于软件的本机AWS服务来实现此目标。
解决方案架构师应建议什么作为解决方案?
A.使用带有客户主密钥(CMK)的AWS Secrets Manager来存储主密钥材料,并应用例程来定期创建新的CMK,并在AWS Secrets Manager中将其替换。
B.将AWS Key Management Service(AWS KMS)与客户主密钥(CMK)一起使用以存储主密钥材料,并应用路由以定期重新创建新密钥并将其替换在AWS KMS中。
C.使用带有客户主密钥(CMK)的AWS CloudHSM集群来存储主密钥材料,并应用例程并定期重新创建新密钥,并将其替换到CloudHSM集群节点中。
D.使用带有客户主密钥(CMK)密钥的AWS Systems Manager参数存储来存储主密钥材料,并应用例程以定期重新创建新的并将其替换在参数存储中。

Explanation: AWS Secrets Manager provides full lifecycle management for secrets within your environment. In this post, Maitreya and I will show you how to use Secrets Manager to store, deliver, and rotate SSH keypairs used for communication within compute clusters. Rotation of these keypairs is a security best practice, and sometimes a regulatory requirement. Traditionally, these keypairs have been associated with a number of tough challenges. For example, synchronizing key rotation across all compute nodes, enable detailed logging and auditing, and manage access to users in order to modify secrets.

QUESTION 276
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 bucket to
store a static website.
The company's security policy requires that all website traffic be inspected by AWS WAF.
How should the solutions architect comply with these requirements?
A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource
Name (ARN) only
B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting
content from the S3 origin,
C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3
only Associate AWS WAF to CloudFront.
D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict
access to the S3 bucket. Enable AWS WAF on the distribution.
Answer: D
解决方案架构师必须设计一个使用Amazon CloudFront和Amazon S3来存储静态网站的解决方案。
公司安全政策要求AWS WAF检查所有网站流量。
解决方案架构师应如何满足这些要求?
A.配置一个S3存储桶策略以仅接受来自AWS WAF Amazon Resource Name(ARN)的请求
B.将Amazon CloudFront配置为将所有传入请求转发到AWS WAF,然后再从S3来源请求内容,
C.配置一个安全组,该安全组允许Amazon CloudFront IP地址访问Amazon S3
仅将AWS WAF与CloudFront相关联。
D.将Amazon CloudFront和Amazon S3配置为使用源访问身份(OAI)来限制对S3存储桶的访问。 在分发上启用AWS WAF。

要仅允许从CloudFront分配访问您的Amazon S3存储桶,请首先将原始访问身份(OAI)添加到您的分配中。然后,查看您的存储桶策略和Amazon S3访问控制列表(ACL)以确保: ✑只有OAI可以访问您的存储桶。 ✑CloudFront可以代表请求者访问存储桶。 ✑用户无法以其他方式(例如,使用Amazon S3 URL)访问对象。

要限制对您从Amazon S3存储桶中提供的内容的访问,请创建CloudFront签名URL或签名Cookie以限制对Amazon S3存储桶中文件的访问,然后创建一个特殊的CloudFront用户,称为原始访问身份(OAI)并关联它与您的分布。然后,您配置权限,以便CloudFront可以使用OAI来访问文件并向用户提供文件,但是用户不能使用指向S3存储桶的直接URL来访问那里的文件。采取这些步骤可帮助您维护对通过CloudFront服务的文件的安全访问。
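下面是一个最简的存储桶策略示意(存储桶名与 OAI ID 均为占位符),只允许 CloudFront OAI 读取对象,从而配合答案 D 中"限制对S3存储桶的访问":

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/*"
        }
    ]
}
```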

QUESTION 277

A company has copied 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-
east-1 Region using an AWS Direct Connect link.
The company now wants to copy the data to another S3 bucket in the us-west-2 Region.
The colocation facility does not allow the use of AWS Snowball.
What should a solutions architect recommend to accomplish this?
A. Order a Snowball Edge device to copy the data from one Region to another Region.
B. Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console.
C. Use the aws S3 sync command to copy data from the source bucket to the destination bucket.
D. Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
Answer: D
一家公司已使用AWS Direct Connect链接将1 PB数据从托管设施复制到us-east-1 Region中的Amazon S3存储桶。
该公司现在希望将数据复制到us-west-2 Region中的另一个S3存储桶。
托管服务不允许使用AWS Snowball,
解决方案架构师应该推荐什么来实现这一目标?
A.订购Snowball Edge设备将数据从一个地区复制到另一地区。
B.使用S3控制台将内容从源S3存储桶传输到目标S3存储桶。
C.使用aws S3 sync命令将数据从源存储桶复制到目标存储桶。
D.添加跨区域复制配置以跨不同Reg中的S3存储桶复制对象。
QUESTION 278

A company has hired a new cloud engineer who should not have access to an Amazon S3 bucket named Company Confidential. The cloud engineer must be able to read from and write to an S3 bucket called AdminTools. Which IAM policy will meet these requirements?

![image-20200923002914182](/Users/gaoyunhu/Library/Application Support/typora-user-images/image-20200923002914182.png)

一家公司雇用了一位新的云工程师,该工程师不应该访问名为Company Confidential的Amazon S3存储桶。云工程师必须能够读取和写入名为AdminTools的S3存储桶。哪种IAM策略将满足这些要求?
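原题的策略选项以图片形式给出,此处无法展示;下面给出一个符合题意的策略草稿(存储桶名按题意假设为 admintools 与 companyconfidential,仅作参考,不代表原题选项):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAdminToolsReadWrite",
            "Effect": "Allow",
            "Action": [ "s3:ListBucket", "s3:GetObject", "s3:PutObject" ],
            "Resource": [
                "arn:aws:s3:::admintools",
                "arn:aws:s3:::admintools/*"
            ]
        },
        {
            "Sid": "DenyCompanyConfidential",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::companyconfidential",
                "arn:aws:s3:::companyconfidential/*"
            ]
        }
    ]
}
```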

QUESTION 279
An engineering team is developing and deploying AWS Lambda functions.
The team needs to create roles and manage policies in AWS IAM to configure the permissions of
the Lambda functions.
How should the permissions for the team be configured so they also adhere to the concept of
least privilege?
A. Create an IAM role with a managed policy attached.
Allow the engineering team and the Lambda functions to assume this role
B. Create an IAM group for the engineering team with an IAMFullAccess policy attached.
Add all the users from the team to this IAM group
C. Create an execution role for the Lambda functions.
Attach a managed policy that has permission boundaries specific to these Lambda functions
D. Create an IAM role with a managed policy attached that has permission boundaries specific to the
Lambda functions.
Allow the engineering team to assume this role.
Answer: D
一个工程团队正在开发和部署AWS Lambda函数,该团队需要在AWS IAM中创建角色并管理策略以配置Lambda函数的权限。
应该如何配置团队的权限,以使他们也遵守最小特权的概念?
A.创建一个IAM角色,并附加一个托管策略。
允许工程团队和Lambda功能担当此角色
B.为工程团队创建一个IAM组,并附加一个IAMFullAccess策略。
将团队中的所有用户添加到该IAM组
C.为Lambda函数创建执行角色。
附加具有特定于这些Lambda函数的权限边界的托管策略
D.创建一个附加了托管策略的IAM角色,该托管策略具有特定于
Lambda函数。
让工程团队承担这个角色。
答案:D
QUESTION 280
A company needs a secure connection between its on-premises environment and AWS.
This connection does not need high bandwidth and will handle a small amount of traffic.
The connection should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?
A. Implement a client VPN
B. Implement AWS Direct Connect
C. Implement a bastion host on Amazon EC2.
D. Implement an AWS Site-to-Site VPN connection.
Answer: D
公司需要在其本地环境和AWS之间建立安全连接。
此连接不需要高带宽,并且可以处理少量流量。
连接应该很快建立,
建立这种连接的最经济有效的方法是什么?
A.实施客户端VPN
B.实施AWS Direct Connect
C.在Amazon EC2上实施堡垒主机。
D.实施一个AWS Site-to-Site VPN连接。
默认情况下,您在 Amazon VPC 中启动的实例无法与您自己的(远程)网络进行通信。您可以通过创建 AWS Site-to-Site VPN(Site-to-Site VPN)连接并将路由配置为通过该连接传输流量,从 VPC 启用对远程网络的访问。
QUESTION 281
A company is building a payment application that must be highly available even during regional
service disruptions.
A solutions architect must design a data storage solution that can be easily replicated and used in
other AWS Regions.
The application also requires low-latency atomicity, consistency, isolation, and durability (ACID)
transactions that need to be immediately available to generate reports.
The development team also needs to use SQL.
Which data storage solution meets these requirements?
A. Amazon Aurora Global Database
B. Amazon DynamoDB global tables
C. Amazon S3 with cross-Region replication and Amazon Athena
D. MySQL on Amazon EC2 instances with Amazon Elastic Block Store (Amazon EBS) snapshot
replication
Answer: A

公司正在构建付款应用程序,即使在区域服务中断期间,该应用程序也必须高度可用。
解决方案架构师必须设计一个易于在其他AWS区域中复制和使用的数据存储解决方案。
该应用程序还需要低延迟原子性,一致性,隔离性和持久性(ACID)事务,这些事务必须立即可用以生成报告。
开发团队还需要使用SQL。
哪种数据存储解决方案可以满足这些要求?
A.Amazon Aurora全球数据库
B.Amazon DynamoDB全局表
C.具有跨区域复制和Amazon Athena的Amazon S3
D.具有Amazon Elastic Block Store(Amazon EBS)快照复制的Amazon EC2实例上的MySQL

Amazon Aurora是一种兼容MySQL和PostgreSQL的商用级别关系数据库,它既有商用数据库的性能和可用性(比如Oracle数据库),又具有开源数据库的成本效益(比如MySQL数据库)。

Aurora的速度可以达到MySQL数据库的5倍,同时它的成本只是商用数据库的1/10

Aurora和其他RDS服务类似,AWS会负责各种管理任务,例如硬件、数据库补丁和数据库备份等。

另外,Aurora还有以下这些特点:

  • 10GB的起始存储空间,可以增加到最大64TB的容量
  • 计算资源可以提升到最多32vCPU和244GB的内存
  • Aurora会将你的数据复制2份到每一个可用区内,并且复制到最少3个可用区,因此你会有6份数据库备份
  • 2份及以下的数据备份丢失,不影响Aurora的写入功能
  • 3份及以下的数据备份丢失,不影响Aurora的读取功能
  • Aurora有自动修复的功能,AWS会自动检查磁盘错误和数据块问题并且自动进行修复
  • 有两种数据库只读副本
    • Aurora Replicas(最多支持15个)
    • MySQL Replica(最多支持5个)
    • 两者的区别是Aurora主数据库出现故障的时候,Aurora Replicas可以自动变成主数据库,而MySQL Replica不可以
QUESTION 282
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media
application.
The media files must be resilient to the loss of an Availability Zone. Some files are accessed
frequently while other files are rarely accessed in an unpredictable pattern.
The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B

解决方案架构师正在使用Amazon S3来设计新的数字媒体应用程序的存储架构,
媒体文件必须具有适应性,以防止丢失可用区。某些文件经常被访问,而其他文件则很少以不可预测的方式被访问。
解决方案架构师必须将存储和检索媒体文件的成本降至最低。
哪个存储选项符合这些要求?
A.S3标准
B. S3智能分层
C.S3标准-不频繁访问(S3 Standard-IA)
D.S3单区-不频繁访问(S3 One Zone-IA)
QUESTION 283
A company uses a legacy on-premises analytics application that operates on gigabytes of CSV
files and represents months of data.
The legacy application cannot handle the growing size of the CSV files. New CSV files are added daily
from various data sources to a central on-premises storage location.
The company wants to continue to support the legacy application while users learn AWS
analytics services.
To achieve this, a solutions architect wants to maintain two synchronized copies of all the CSV
files on-premises and in Amazon S3.
Which solution should the solutions architect recommend?

A. Deploy AWS DataSync on-premises.
Configure DataSync to continuously replicate the CSV files between the company's on-premises
storage and the company's S3 bucket.

B. Deploy an on-premises file gateway.
Configure data sources to write the CSV files to the file gateway.
Point the legacy analytics application to the file gateway.
The file gateway should replicate the CSV files to Amazon S3.

C. Deploy an on-premises volume gateway.
Configure data sources to write the CSV files to the volume gateway.
Point the legacy analytics application to the volume gateway.
The volume gateway should replicate data to Amazon S3.

D. Deploy AWS DataSync on-premises.
Configure DataSync to continuously replicate the CSV files between on-premises and Amazon
Elastic File System (Amazon EFS).
Enable replication from Amazon EFS to the company's S3 bucket.
Answer: A
一家公司使用旧式本地分析应用程序,处理数GB的CSV文件,这些文件代表数月的数据。
旧版应用程序无法处理不断增长的CSV文件。每天都会从各种数据源向中央本地存储位置添加新的CSV文件。
该公司希望在用户学习AWS分析服务的同时继续支持旧版应用程序。
为此,解决方案架构师希望在本地和Amazon S3中维护所有csv文件的两个同步副本。
解决方案架构师应建议哪种解决方案?

A.在本地部署AWS DataSync。
配置DataSync以在公司的本地存储和公司的S3存储桶之间连续复制CSV文件。
B.部署本地文件网关。
配置数据源以将CSV文件写入文件网关。
将旧版分析应用程序指向文件网关。
文件网关将CSV文件复制到Amazon S3。
C.部署本地卷网关。
配置数据源以将CSV文件写入卷网关。
将旧版分析应用程序指向卷网关。
卷网关将数据复制到Amazon S3。
D.在本地部署AWS DataSync。
配置DataSync以在本地和Amazon Elastic File System(Amazon EFS)之间连续复制CSV文件。
启用从Amazon EFS到公司S3存储桶的复制。
QUESTION 284
An application allows users at a company's headquarters to access product data.
The product data is stored in an Amazon RDS MySQL DB instance.
The operations team has isolated an application performance slowdown and wants to separate
read traffic from write traffic.
A solutions architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?
A. Change the existing database to a Multi-AZ deployment.
Serve the read requests from the primary Availability Zone,
B. Change the existing database to a Multi-AZ deployment.
Serve the read requests from the secondary Availability Zone.
C. Create read replicas for the database.
Configure the read replicas with half of the compute and storage resources as the source
database.
D. Create read replicas for the database.
Configure the read replicas with the same compute and storage resources as the source
database.
Answer: D
应用程序允许公司总部的用户访问产品数据。
产品数据存储在Amazon RDS MySQL数据库实例中。
运营团队已隔离了应用程序性能下降的问题,并希望将读取流量与写入流量分开。
解决方案架构师需要快速优化应用程序的性能。
解决方案架构师应该建议什么?
A.将现有数据库更改为多可用区部署。
服务来自主要可用区的读取请求,
B.将现有数据库更改为多可用区部署。
服务来自辅助可用区的读取请求。
C.为数据库创建只读副本。
使用一半的计算和存储资源作为源数据库配置只读副本。
D.为数据库创建只读副本。
使用与源数据库相同的计算和存储资源来配置只读副本。

Explanation: RDS read replicas

You have a production database that is taking on normal load. You want to run a reporting application to run some analytics, so you create a read replica to run the new workload there; the production application is unaffected. Read replicas are used for SELECT (read-only) statements, not INSERT, UPDATE, or DELETE.
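作为示意(实例标识符与实例类型均为假设),可以用 CloudFormation 声明一个与源库同等规格的只读副本,随后把 SELECT 流量指向副本的终端节点,写流量仍走主库:

```json
{
    "Resources": {
        "ProductDbReadReplica": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "SourceDBInstanceIdentifier": "product-db",
                "DBInstanceClass": "db.m5.large"
            }
        }
    }
}
```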

QUESTION 285
A company wants to optimize the cost of its data storage for data that is accessed quarterly.
The company requires high throughput, low latency, and rapid access, when needed.
Which Amazon S3 storage class should a solutions architect recommend?
A. Amazon S3 Glacier (S3 Glacier)
B. Amazon S3 Standard (S3 Standard)
C. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
D. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Answer: B
一家公司希望针对按季度访问的数据优化其数据存储成本。该公司需要在需要时具有高吞吐量、低延迟和快速访问。
解决方案架构师应推荐哪种Amazon S3存储类?
A.Amazon S3 Glacier(S3 Glacier)
B.Amazon S3标准(S3 Standard)
C.Amazon S3智能分层(S3 Intelligent-Tiering)
D.Amazon S3标准-不频繁访问(S3 Standard-IA)
QUESTION 286
A company requires that all versions of objects in its Amazon S3 bucket be retained.
Current object versions will be frequently accessed during the first 30 days, after which they will
be rarely accessed and must be retrievable within 5 minutes.
Previous object versions need to be kept forever, will be rarely accessed, and can be retrieved
within 1 week. All storage solutions must be highly available and highly durable.
What should a solutions architect recommend to meet these requirements in the MOST cost-effective manner?
A. Create an S3 lifecycle policy for the bucket that moves current object versions from S3 Standard
storage to S3 Glacier after 30 days and moves previous object versions to S3 Glacier after 1 day.
B. Create an S3 lifecycle policy for the bucket that moves current object versions from S3 Standard
storage to S3 Glacier after 30 days and moves previous object versions to S3 Glacier Deep
Archive after 1 day.
C. Create an S3 lifecycle policy for the bucket that moves current object versions from S3 Standard
storage to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days and moves previous
object versions to S3 Glacier Deep Archive after 1 day.
D. Create an S3 lifecycle policy for the bucket that moves current object versions from S3 Standard
storage to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days and moves previous
object versions to S3 Glacier Deep Archive after 1 day.
Answer: B
公司要求保留其Amazon S3存储桶中对象的所有版本。
当前对象版本将在前30天内被频繁访问,之后很少被访问,且必须能在5分钟内检索到。
以前的对象版本需要永久保存,很少被访问,可以在1周内检索到。所有存储解决方案都必须具有高可用性和高持久性。
解决方案架构师应建议什么,才能以最具成本效益的方式满足这些要求?
A.为该存储桶创建S3生命周期策略,在30天后将当前对象版本从S3 Standard移至S3 Glacier,并在1天后将以前的对象版本移至S3 Glacier。
B.为该存储桶创建S3生命周期策略,在30天后将当前对象版本从S3 Standard移至S3 Glacier,并在1天后将以前的对象版本移至S3 Glacier Deep Archive。
C.为该存储桶创建S3生命周期策略,在30天后将当前对象版本从S3 Standard移至S3 Standard-IA,并在1天后将以前的对象版本移至S3 Glacier Deep Archive。
D.为该存储桶创建S3生命周期策略,在30天后将当前对象版本从S3 Standard移至S3 One Zone-IA,并在1天后将以前的对象版本移至S3 Glacier Deep Archive。

Explanation:

![image-20200923003539177](/Users/gaoyunhu/Library/Application Support/typora-user-images/image-20200923003539177.png)
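对应答案 B 的一个生命周期配置草稿(可通过 `aws s3api put-bucket-lifecycle-configuration` 应用,规则 ID 为假设值):

```json
{
    "Rules": [
        {
            "ID": "ArchiveCurrentAndPreviousVersions",
            "Status": "Enabled",
            "Filter": { "Prefix": "" },
            "Transitions": [
                { "Days": 30, "StorageClass": "GLACIER" }
            ],
            "NoncurrentVersionTransitions": [
                { "NoncurrentDays": 1, "StorageClass": "DEEP_ARCHIVE" }
            ]
        }
    ]
}
```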

QUESTION 287
A company hosts its core network services, including directory services and DNS, in its on-
premises data center.
The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS
accounts are planned that will require quick, cost-effective, and consistent access to these
network services.
What should a solutions architect implement to meet these requirements with the LEAST amount
of operational overhead?
A. Create a DX connection in each new account.
Route the network traffic to the on-premises servers
B. Configure VPC endpoints in the DX VPC for all required services.
Route the network traffic to the on- premises servers
C. Create a VPN connection between each new account and the DX VPC.
Route the network traffic to the on-premises servers
D. Configure AWS Transit Gateway between the accounts.
Assign DX to the transit gateway and route network traffic to the on-premises servers
Answer: D
公司在其本地数据中心托管其核心网络服务,包括目录服务和DNS。
数据中心使用AWS Direct Connect(DX)连接到AWS云,计划中的其他AWS账户将需要快速,经济高效且一致地访问这些网络服务
解决方案架构师应以最低的运营开销实施哪些措施来满足这些要求?
A.在每个新帐户中创建一个DX连接。
将网络流量路由到本地服务器
B.在DX VPC中为所有必需的服务配置VPC端点。
将网络流量路由到本地服务器
C.在每个新帐户和DX VPC之间创建VPN连接
将网络流量路由到本地服务器
D.在账户之间配置AWS Transit Gateway。
将DX分配给传输网关,并将网络流量路由到本地服务器

公司 数百个VPC

AWS Transit Gateway通过中央集线器连接VPC和本地网络。这简化了您的网络,并结束了复杂的对等关系。它充当云路由器–每个新连接仅建立一次。 当您进行全球扩展时,区域间对等使用AWS全球网络将AWS Transit网关连接在一起。您的数据将自动加密,并且永远不会通过公共互联网传输。而且,由于其居中地位,AWS Transit Gateway Network Manager在整个网络上都具有独特的视图,甚至可以连接到软件定义的广域网(SD-WAN)设备

QUESTION 288
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances,
Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags.
The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources
manually.
C. Write API calls to check all resources for proper tag allocation.
Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation.
Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code
Answer: A
一家在AWS上托管其Web应用程序的公司希望确保所有Amazon EC2实例。
Amazon RDS数据库实例和Amazon Redshift集群配置有标签。
该公司希望最大程度地减少配置和操作此检查的工作量。
解决方案架构师应该怎么做才能做到这一点?
A.使用AWS Config规则来定义和检测未正确标记的资源
B.使用Cost Explorer显示未正确标记的资源手动标记这些资源,
C.编写API调用以检查所有资源是否正确分配了标签。
定期在EC2实例上运行代码。
D.编写API调用以检查所有资源是否正确分配了标签。
通过Amazon CloudWatch安排AWS Lambda函数以定期运行代码

Explanation:

AWS Config Rules: you can use AWS managed config rules (over 75) or make custom config rules (must be defined in AWS Lambda), for example:

  • Evaluate if each EBS disk is of type gp2
  • Evaluate if each EC2 instance is t2.micro

Rules can be evaluated / triggered:

  • For each config change
  • And/or at regular time intervals
  • Can trigger CloudWatch Events if the rule is non-compliant (and chain with Lambda)

Rules can have auto remediations:

  • If a resource is not compliant, you can trigger an auto remediation (e.g. stop instances with non-approved tags)
  • AWS Config Rules does not prevent actions from happening (no deny)
  • Pricing: no free tier, $2 per active rule per region per month
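下面用 CloudFormation 片段示意托管规则 REQUIRED_TAGS 的用法(标签键 CostCenter 为假设值),它会把未按要求打标签的 EC2、RDS、Redshift 资源标记为不合规:

```json
{
    "Resources": {
        "RequiredTagsRule": {
            "Type": "AWS::Config::ConfigRule",
            "Properties": {
                "ConfigRuleName": "required-tags",
                "InputParameters": { "tag1Key": "CostCenter" },
                "Scope": {
                    "ComplianceResourceTypes": [
                        "AWS::EC2::Instance",
                        "AWS::RDS::DBInstance",
                        "AWS::Redshift::Cluster"
                    ]
                },
                "Source": {
                    "Owner": "AWS",
                    "SourceIdentifier": "REQUIRED_TAGS"
                }
            }
        }
    }
}
```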

QUESTION 289
An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB
table.
Both the EC2 instance and the DynamoDB table are in the same AWS account.
A solutions architect must configure the necessary permissions.
Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?
A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table.
Create an instance profile to assign this IAM role to the EC2 instance
B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table.
Add the EC2 instance to the trust relationship policy document to allow it to assume the role
C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table.
Store the credentials in an Amazon S3 bucket and read them from within the application code
directly.
D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table.
Ensure that the application stores the IAM credentials securely on local storage and uses them to
make the DynamoDB calls
Answer: A
在Amazon EC2实例上运行的应用程序需要访问Amazon DynamoDB表。
EC2实例和DynamoDB表都在同一个AWS账户中。
解决方案架构师必须配置必要的权限。
哪种解决方案将允许从EC2实例对DynamoDB表的最小特权访问?
A.使用适当的策略创建一个IAM角色,以允许访问DynamoDB表。
创建实例配置文件以将此IAM角色分配给EC2实例
B.使用适当的策略创建一个IAM角色,以允许访问DynamoDB表。
将EC2实例添加到信任关系策略文档中,以使其担当角色
C.使用适当的策略创建一个IAM用户,以允许访问DynamoDB表。
将凭证存储在Amazon S3存储桶中,并直接从应用程序代码中读取它们。
D.使用适当的策略创建一个IAM用户,以允许访问DynamoDB表。
确保应用程序将IAM凭据安全地存储在本地存储上,并使用它们进行DynamoDB调用

an instance profile to assign this IAM role to the EC2 instance
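对应答案 A 的一个最小示意:角色的信任策略允许 ec2.amazonaws.com 代入,权限策略只授予对单个表的必要操作(区域、账号、表名均为占位符):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query" ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTable"
        }
    ]
}
```

将该角色放入实例配置文件(instance profile)并附加到 EC2 实例后,应用程序即可通过实例元数据获得临时凭证,无需在本地保存任何长期密钥。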

QUESTION 290
An application uses an Amazon RDS MySQL DB instance.
The RDS database is becoming low on disk space.
A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
A. Enable storage auto scaling in RDS.
B. Increase the RDS database instance size
C. Change the RDS database instance storage type to Provisioned IOPS.
D. Back up the RDS database, increase the storage capacity, restore the database and stop the
previous instance

应用程序使用Amazon RDS MySQL数据库实例。
RDS数据库的磁盘空间不足。
解决方案架构师希望在不停机的情况下增加磁盘空间。
哪种解决方案能以最少的工作量满足这些要求?
A.在RDS中启用存储自动缩放。
B.增加RDS数据库实例大小
C.将RDS数据库实例存储类型更改为Provisioned IOPS。
D.备份RDS数据库,增加存储容量,还原数据库并停止先前的实例

Answer: A
Explanation: Advantages of using RDS versus deploying a database on EC2: RDS is a managed service with automated provisioning and OS patching.

  • Continuous backups and restore to a specific timestamp (Point in Time Restore)
  • Monitoring dashboards
  • Read replicas for improved read performance
  • Multi-AZ setup for DR (Disaster Recovery)
  • Maintenance windows for upgrades
  • Scaling capability (vertical and horizontal)
  • Storage backed by EBS (gp2 or io1)
  • BUT you can't SSH into your instances
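对应答案 A 的一个 CloudFormation 示意(实例规格、存储大小与密码引用均为假设),关键是 MaxAllocatedStorage 属性,它允许 RDS 在磁盘接近用满时自动扩容且无需停机:

```json
{
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.m5.large",
                "AllocatedStorage": "100",
                "MaxAllocatedStorage": 500,
                "MasterUsername": "admin",
                "MasterUserPassword": "{{resolve:secretsmanager:app-db-secret:SecretString:password}}"
            }
        }
    }
}
```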

QUESTION 291
A company uses an Amazon S3 bucket to store static images for its website. The company
configured permissions to allow access to Amazon S3 objects by privileged users only.
What should a solutions architect do to protect against data loss? (Choose two.)
A. Enable versioning on the S3 bucket.
B. Enable access logging on the S3 bucket.
C. Enable server-side encryption on the S3 bucket.
D. Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.
E. Use MFA Delete to require multi-factor authentication to delete an object.
Answer: AE
一家公司使用Amazon S3存储桶为其网站存储静态图像。 该公司将权限配置为仅允许特权用户访问Amazon S3对象。
解决方案架构师应该怎么做才能防止数据丢失? (选择两个。)
A.在S3存储桶上启用版本控制。
B.在S3存储桶上启用访问日志记录。
C.在S3存储桶上启用服务器端加密。
D.配置S3生命周期规则以将对象转换到Amazon S3 Glacier。
E.使用“ MFA删除”要求进行多重身份验证才能删除对象。
QUESTION 292
A company has an application that runs on Amazon EC2 instances within a private subnet in a
VPC.
The instances access data in an Amazon S3 bucket in the same AWS Region.
The VPC contains a NAT gateway in a public subnet to access the S3 bucket.
The company wants to reduce costs by replacing the NAT gateway without compromising
security or redundancy
Which solution meets these requirements?
A. Replace the NAT gateway with a NAT instance
B. Replace the NAT gateway with an internet gateway.
C. Replace the NAT gateway with a gateway VPC endpoint
D. Replace the NAT gateway with an AWS Direct Connect connection
公司拥有一个运行在VPC私有子网内的Amazon EC2实例上的应用程序。
实例访问同一AWS区域中Amazon S3存储桶中的数据。
VPC在公共子网中包含一个NAT网关,以访问S3存储桶。
该公司希望通过更换NAT网关来降低成本,同时又不影响安全性或冗余性
哪种解决方案满足这些要求?
A.用NAT实例替换NAT网关
B.用Internet网关替换NAT网关。
C.用网关VPC端点替换NAT网关
D.用AWS Direct Connect连接替换NAT网关

Answer: C
Explanation: VPC Endpoints

  • Endpoints allow you to connect to AWS services using a private network instead of the public internet. They scale horizontally and are redundant, and they remove the need for an IGW, NAT gateway, etc. to access AWS services.
  • Interface endpoint: provisions an ENI (private IP address) as an entry point (must attach a security group); supports most AWS services.
  • Gateway endpoint: provisions a target that must be used in a route table; supports S3 and DynamoDB.
  • In case of issues: check the DNS setting resolution in your VPC and check the route tables.
  • 端点允许您使用专用网络而不是公共互联网连接到AWS服务。它们可水平扩展且高度冗余,使用后访问AWS服务不再需要IGW、NAT等。接口终端节点:将ENI(专用IP地址)作为入口点(必须附加安全组),支持大多数AWS服务;网关终端节点:提供一个必须写入路由表的目标,支持S3和DynamoDB。如有问题:检查VPC中的DNS设置解析,并检查路由表。
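下面是一个创建 S3 网关终端节点的 CloudFormation 示意(VPC ID、路由表 ID 与区域均为占位符),创建后私有子网内的 EC2 无需经过 NAT 网关即可访问 S3:

```json
{
    "Resources": {
        "S3GatewayEndpoint": {
            "Type": "AWS::EC2::VPCEndpoint",
            "Properties": {
                "VpcEndpointType": "Gateway",
                "ServiceName": "com.amazonaws.us-east-1.s3",
                "VpcId": "vpc-0123456789abcdef0",
                "RouteTableIds": [ "rtb-0123456789abcdef0" ]
            }
        }
    }
}
```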

QUESTION 293
A company is designing a message-driven order processing application on AWS.
The application consists of many services and needs to communicate the results of its processing
to multiple consuming services.
Each of the consuming services may take up to 5 days to receive the messages.
Which process will meet these requirements?
A. The application sends the results of its processing to an Amazon Simple Notification Service
(Amazon SNS) topic.
Each consuming service subscribes to this SNS topic and consumes the results
B. The application sends the results of its processing to an Amazon Simple Notification Service
(Amazon SNS) topic.
Each consuming service consumes the messages directly from its corresponding SNS topic.
C. The application sends the results of its processing to an Amazon Simple Queue Service (Amazon
SQS) queue.
Each consuming service runs as an AWS Lambda function that consumes this single SQS queue.
D. The application sends the results of its processing to an Amazon Simple Notification Service
(Amazon SNS) topic.
An Amazon Simple Queue Service (Amazon SQS) queue is created for each service and each
queue is configured to be a subscriber of the SNS topic.
Answer: C

一家公司正在AWS上设计消息驱动的订单处理应用程序。
该应用程序包含许多服务,并且需要将其处理结果传达给多个使用服务。
每个使用服务可能最多需要5天才能收到消息。
哪个过程可以满足这些要求?
A.应用程序将其处理结果发送到Amazon Simple Notification Service(Amazon SNS)主题。
每个使用服务都订阅此SNS主题并使用结果
B.应用程序将其处理结果发送到Amazon Simple Notification Service(Amazon SNS)主题。
每个使用服务都直接从其相应的SNS主题使用消息。
C.应用程序将其处理结果发送到Amazon Simple Queue Service(AmazonSQS)队列。
每个使用服务均作为使用该单个SQS队列的AWS Lambda函数运行。
D.应用程序将其处理结果发送到Amazon Simple Notification Service(Amazon SNS)主题。
为每个服务创建一个Amazon Simple Queue Service(Amazon SQS)队列,并将每个队列配置为SNS主题的订阅者。
QUESTION 294
A company stores call recordings on a monthly basis. Statistically, the recorded data may be
referenced randomly within a year but accessed rarely after 1 year.
Files that are newer than 1 year old must be queried and retrieved as quickly as possible.
A delay in retrieving older files is acceptable. A solutions architect needs to store the recorded
data at a minimal cost.
Which solution is MOST cost-effective?

A. Store individual files in Amazon S3 Glacier and store search metadata in object tags created in
S3 Glacier.
Query S3 Glacier tags and retrieve the files from S3 Glacier.
B. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier
after 1 year.
Query and retrieve the files from Amazon S3 or S3 Glacier.
C. Archive individual files and store search metadata for each archive in Amazon S3.
Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year.
Query and retrieve the files by searching for metadata from Amazon S3.
D. Archive individual files in Amazon S3.
Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year.
Store search metadata in Amazon DynamoDB. Query the files from DynamoDB and retrieve them
from Amazon S3 or S3 Glacier.
Answer: B
一家公司每月存储一次通话录音。据统计,录音数据可能会在一年之内被随机引用,但一年后很少被访问。
必须能够尽快查询和检索不满1年的文件,检索较旧文件时的延迟是可以接受的。解决方案架构师需要以最低的成本存储录音数据。
哪种解决方案最具成本效益?
A.将单个文件存储在Amazon S3 Glacier中,并将搜索元数据存储在S3 Glacier中创建的对象标签中。查询S3 Glacier标签并从S3 Glacier检索文件。
B.将单个文件存储在Amazon S3中。1年后,使用生命周期策略将文件移至Amazon S3 Glacier。从Amazon S3或S3 Glacier查询和检索文件。
C.归档单个文件,并将每个归档的搜索元数据存储在Amazon S3中。1年后,使用生命周期策略将文件移至Amazon S3 Glacier。通过在Amazon S3中搜索元数据来查询和检索文件。
D.在Amazon S3中归档单个文件。1年后,使用生命周期策略将文件移至Amazon S3 Glacier。将搜索元数据存储在Amazon DynamoDB中,从DynamoDB查询文件,再从Amazon S3或S3 Glacier检索它们。

QUESTION 295

A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it.
The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete.
The company has asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?
A. Implement EC2 Spot Instances
B. Purchase EC2 Reserved Instances
C. Implement EC2 On-Demand Instances
D. Implement the processing on AWS Lambda
Answer: A
一家公司拥有高度动态的批处理作业,该作业使用许多Amazon EC2实例来完成它。
该工作本质上是无状态的,可以在任何给定时间启动和停止而不会产生负面影响,并且通常最多需要60分钟才能完成。
该公司已要求解决方案架构师设计出可满足工作要求的可扩展且具有成本效益的解决方案。
解决方案架构师应该建议什么?
A.实施EC2竞价型实例
B.购买EC2预留实例
C.实施EC2按需实例
D.在AWS Lambda上实施处理

Explanation:

EC2 Spot Instances

  • Can get a discount of up to 90% compared to On-Demand
  • Instances that you can "lose" at any point in time if your max price is less than the current spot price
  • The MOST cost-efficient instances in AWS; useful for workloads that are resilient to failure
  • Batch jobs
  • Data analysis
  • Image processing
  • Not great for critical jobs or databases
  • Great combo: Reserved Instances for baseline + On-Demand & Spot for peaks

EC2竞价型实例:与按需实例相比可享受高达90%的折扣;如果您的最高出价低于当前现货价格,实例随时可能被回收;是AWS中最具成本效益的实例,适用于能够容忍中断的工作负载,例如批处理作业、数据分析、图像处理;不适用于关键工作或数据库;出色的组合:预留实例承担基线负载,按需与竞价实例应对峰值。

QUESTION 296
An online photo application lets users upload photos and perform image editing operations.
The application offers two classes of service: free and paid. Photos submitted by paid users are
processed before those submitted by free users.
Photos are uploaded to Amazon S3 and the job information is sent to Amazon SQS.
Which configuration should a solutions architect recommend?
A. Use one SQS FIFO queue.
Assign a higher priority to the paid photos so they are processed first
B. Use two SQS FIFO queues: one for paid and one for free.
Set the free queue to use short polling and the paid queue to use long polling
C. Use two SQS standard queues: one for paid and one for free.
Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero.
Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first
Answer: C
在线照片应用程序让用户可以上传照片并执行图像编辑操作。该应用程序提供免费和付费两种服务,付费用户提交的照片会先于免费用户提交的照片被处理。
照片上传到Amazon S3,作业信息发送到Amazon SQS。解决方案架构师应建议哪种配置?
A.使用一个SQS FIFO队列。为付费照片分配更高的优先级,以便首先处理它们。
B.使用两个SQS FIFO队列:一个用于付费,一个用于免费。将免费队列设置为使用短轮询,将付费队列设置为使用长轮询。
C.使用两个SQS标准队列:一个用于付费,一个用于免费。配置Amazon EC2实例,使其优先轮询付费队列,再轮询免费队列。
D.使用一个SQS标准队列。将付费照片的可见性超时设置为零。配置Amazon EC2实例以区分可见性设置的优先级,以便首先处理付费照片。
QUESTION 297
A company has an application hosted on Amazon EC2 instances in two VPCs across different
AWS Regions,
To communicate with each other, the instances use the internet for connectivity.
The security team wants to ensure that no communication between the instances happens over
the internet.
What should a solutions architect do to accomplish this?
A. Create a NAT gateway and update the route table of the EC2 instances' subnet
B. Create a VPC endpoint and update the route table of the EC2 instances' subnet
C. Create a VPN connection and update the route table of the EC2 instances' subnet
D. Create a VPC peering connection and update the route table of the EC2 instances' subnet
Answer: D
一家公司在跨不同AWS区域的两个VPC中的Amazon EC2实例上托管了一个应用程序,
为了彼此通信,实例使用互联网进行连接。
安全团队希望确保实例之间的通信不会通过互联网发生。
解决方案架构师应该怎么做才能做到这一点?
A.创建一个NAT网关并更新EC2实例的子网的路由表
B.创建一个VPC端点并更新EC2实例的子网的路由表
C.创建一个VPN连接并更新EC2实例的子网的路由表
D.创建一个VPC对等连接并更新EC2实例的子网的路由表
QUESTION 298
A company runs a production application on a fleet of Amazon EC2 instances.
The application reads the data from an Amazon SQS queue and processes the messages in
parallel.
The message volume is unpredictable and often has intermittent traffic.
This application should continually process messages without any downtime.
Which solution meets these requirements MOST cost-effectively?
A. Use Spot Instances exclusively to handle the maximum capacity required
B. Use Reserved Instances exclusively to handle the maximum capacity required
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional
capacity
D. Use Reserved instances for the baseline capacity and use On-Demand Instances to handle
additional capacity

一家公司在一系列Amazon EC2实例上运行生产应用程序。
该应用程序从Amazon SQS队列读取数据并并行处理消息。
消息量是无法预测的,并且经常具有间歇性流量,
此应用程序应持续处理消息,而不会造成任何停机
哪种解决方案可以最经济地满足这些要求?
A.仅使用竞价型实例来处理所需的最大容量
B.仅使用预留实例来处理所需的最大容量
C.使用预留实例作为基准容量,并使用竞价型实例处理额外的容量
D.使用预留实例作为基准容量,并使用按需实例处理额外的容量

Answer: D
Explanation: EC2 Spot Instances

  • Can get a discount of up to 90% compared to On-Demand
  • Instances that you can "lose" at any point in time if your max price is less than the current spot price
  • The MOST cost-efficient instances in AWS; useful for workloads that are resilient to failure (batch jobs, data analysis, image processing)
  • Not great for critical jobs or databases
  • Great combo: Reserved Instances for baseline + On-Demand & Spot for peaks
QUESTION 299
A company with facilities in North America, Europe, and Asia is designing a new distributed
application to optimize its global supply chain and manufacturing process.
The orders booked on one continent should be visible to all Regions in a second or less. The
database should be able to support failover with a short Recovery Time Objective (RTO).
The uptime of the application is important to ensure that manufacturing is not impacted.
What should a solutions architect recommend?
A. Use Amazon DynamoDB global tables
B. Use Amazon Aurora Global Database
C. Use Amazon RDS for MySQL with a cross-Region read replica
D. Use Amazon RDS for PostgreSQL with a cross-Region read replica
Answer: B
一家在北美、欧洲和亚洲设有工厂的公司正在设计一个新的分布式应用程序,以优化其全球供应链和制造流程。
在一个大洲预订的订单应在一秒或更短时间内对所有区域可见。数据库应能够以较短的恢复时间目标(RTO)支持故障转移。
应用程序的正常运行时间对于确保生产不受影响非常重要。解决方案架构师应该建议什么?
A.使用Amazon DynamoDB全局表
B.使用Amazon Aurora全局数据库
C.将Amazon RDS for MySQL与跨区域只读副本一起使用
D.将Amazon RDS for PostgreSQL与跨区域只读副本一起使用

Explanation: Cross-Region Disaster Recovery If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.

跨区域灾难恢复如果您的主要区域性能下降或中断,则可以提升其中一个辅助区域来承担读/写职责。即使发生完全区域性故障,Aurora群集也可以在不到1分钟的时间内恢复。这为您的应用程序提供了1秒的有效恢复点目标(RPO)和不到1分钟的恢复时间目标(RTO),为全球业务连续性计划奠定了坚实的基础。

QUESTION 300
A company has several Amazon EC2 instances set up in a private subnet for security reasons.
These instances host applications that read and write large amounts of data to and from Amazon
S3 regularly.
Currently, subnet routing directs all the traffic destined for the internet through a NAT gateway.
The company wants to optimize the overall cost without impacting the ability of the application to
communicate with Amazon S3 or the outside internet.
What should a solutions architect do to optimize costs?
A. Create an additional NAT gateway Update the route table to route to the NAT gateway.
Update the network ACL to allow S3 traffic
B. Create an internet gateway Update the route table to route traffic to the internet gateway.
Update the network ACL to allow S3 traffic.
C. Create a VPC endpoint for Amazon S3 Attach an endpoint policy to the endpoint.
Update the route table to direct traffic to the VPC endpoint
D. Create an AWS Lambda function outside of the VPC to handle S3 requests.
Attach an IAM policy to the EC2 instances, allowing them to invoke the Lambda function.
Answer: C
一家公司出于安全原因在私有子网中设置了多个Amazon EC2实例。
这些实例托管着定期在AmazonS3上读取和写入大量数据的应用程序。
当前,子网路由通过NAT网关定向发往Internet的所有流量。
该公司希望在不影响应用程序与Amazon S3或外部Internet通信的能力的情况下优化总体成本。
解决方案架构师应采取什么措施来优化成本?
A.创建另一个NAT网关更新路由表以路由到NAT网关。
更新网络ACL以允许S3流量
B.创建一个Internet网关更新路由表以将流量路由到Internet网关。
更新网络ACL以允许S3通信。
C.为Amazon S3创建VPC终端节点将终端节点策略附加到终端节点。
更新路由表以将流量定向到VPC端点
D.在VPC外部创建一个AWS Lambda函数以处理S3请求。
将IAM策略附加到EC2实例,允许它们调用Lambda函数。

VPC终端节点能建立VPC和一些AWS服务之间的高速、私密的“专线”。这个专线叫做PrivateLink,使用了这个技术,你无需再使用Internet网关、NAT网关、VPN或AWS Direct Connect连接就可以访问到一些AWS资源了!

知识点

VPC内的服务(比如EC2)需要访问S3的资源,只需要通过VPC终端节点和更改路由表,就可以通过AWS内网访问到这些服务。在这个情况下,VPC内的服务(EC2)甚至不需要连接任何外网。

**终端节点(Endpoints)**是虚拟设备,它是以能够自动水平扩展、高度冗余、高度可用的VPC组件设计而成,你也不需要为它的带宽限制和故障而有任何担忧。

QUESTION 301
A company hosts a training site on a fleet of Amazon EC2 instances.
The company anticipates that its new course, which consists of dozens of training videos on the
site, will be extremely popular when it is released in 1 week.
What should a solutions architect do to minimize the anticipated server load?
A. Store the videos in Amazon ElastiCache for Redis.
Update the web servers to serve the videos using the Elastic ache API
B. Store the videos in Amazon Elastic File System (Amazon EFS).
Create a user data script for the web servers to mount the EFS volume.
C. Store the videos in an Amazon S3 bucket.
Create an Amazon CloudFront distribution with an origin access identity (OAI) for that S3 bucket.
Restrict Amazon S3 access to the OAI.
D. Store the videos in an Amazon S3 bucket.
Create an AWS Storage Gateway file gateway to access the S3 bucket.
Create a user data script for the web servers to mount the file gateway
Answer: C

一家公司在一系列Amazon EC2实例上托管一个培训站点。
该公司预计其新课程(包含该网站上的数十个培训视频)在1周后发布时将非常受欢迎。
解决方案架构师应该怎么做才能最大程度地减少预期的服务器负载?
A.将视频存储在Amazon ElastiCache for Redis中。
使用Elastic ache API更新Web服务器以提供视频
B.将视频存储在Amazon Elastic File System(Amazon EFS)中。
为Web服务器创建用户数据脚本以挂载EFS卷。
C.将视频存储在Amazon S3存储桶中。
创建一个以该S3存储桶的源访问身份(OAI)为源的Amazon CloudFront分配。
将Amazon S3的访问限制为仅允许该OAI。
D.将视频存储在Amazon S3存储桶中。
创建一个AWS Storage Gateway文件网关以访问S3存储桶。
为Web服务器创建用户数据脚本以安装文件网关
QUESTION 302
A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume.
A certain video file has become popular and a large number of users across the world are
accessing this content.
This has resulted in a cost increase.
Which action will DECREASE cost without compromising user accessibility?
A. Change the EBS volume to Provisioned IOPS (PIOPS).
B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
C. Split the video into multiple, smaller segments so users are routed to the requested video
segments only.
D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the
nearest S3 bucket.
Answer: B

一家媒体公司将视频内容存储在Amazon Elastic Block Store(Amazon EBS)卷中。
某个视频文件变得很流行,世界各地的大量用户
访问此内容。
这导致成本增加。
在不影响用户可访问性的情况下,减少费用的措施是什么?
A.将EBS卷更改为Provisioned IOPS(PIOPS)。
B.将视频存储在Amazon S3存储桶中并创建Amazon CloudFront发行版。
C.将视频分成多个较小的段,以便将用户路由到请求的视频
仅细分。
D.在每个区域中创建一个Amazon S3存储桶并上传视频,以便将用户路由到
最近的S3存储桶。
QUESTION 303
A solutions architect is designing the cloud architecture for a new application being deployed to
AWS. The application allows users to interactively download and upload files. Files older than 2
years will be accessed less frequently. The solutions architect needs to ensure that the
application can scale to any number of files while maintaining high availability and durability,
Which scalable solutions should the solutions architect recommend? (Choose two.)
A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3
Glacier.
B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3
Standard-Infrequent Access (S3 Standard-IA).
C. Store the files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves
objects older than 2 years to EFS Infrequent Access (EFS IA),
D. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the
volumes. Use the snapshots to archive data older than 2 years.
E. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule
snapshots of the volumes. Use the snapshots to archive data older than 2 years.
Answer: AB
解决方案架构师正在为即将部署到AWS的新应用程序设计云架构。该应用程序允许用户以交互方式下载和上传文件。超过2年的文件将被较少访问。解决方案架构师需要确保应用程序可以扩展到任意数量的文件,同时保持高可用性和持久性。
解决方案架构师应推荐哪些可扩展解决方案? (选择两个。)
A.使用生命周期策略将文件存储在Amazon S3上,该策略将2年以上的对象移动到S3Glacier。
B.使用生命周期策略将文件存储在Amazon S3上,该策略将2年以上的对象移动到S3Standard-Infrequent Access(S3 Standard-IA)
C.使用生命周期策略将文件存储在Amazon Elastic File System(Amazon EFS)上,该策略将2年以上的对象移动到EFS不频繁访问(EFS IA),
D.将文件存储在Amazon Elastic Block Store(Amazon EBS)卷中。计划卷的快照。使用快照可存档2年以上的数据。
E.将文件存储在RAID分割的Amazon Elastic Block Store(Amazon EBS)卷中。预定卷快照。使用快照可存档2年以上的数据。

Explanation: https://docs.aws.amazon.com/efs/latest/ug/enable-lifecycle-management.html https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

QUESTION 304
A company is hosting multiple websites for several lines of business under its registered parent
domain. Users accessing these websites will be routed to appropriate backend Amazon EC2
instances based on the subdomain. The websites host static webpages, images, and server-side
scripts like PHP and JavaScript.

Some of the websites experience peak access during the first two hours of business with
constant usage throughout the rest of the day. A solutions architect needs to design a solution
that will automatically adjust capacity to these traffic patterns while keeping costs low.
Which combination of AWS services or features will meet these requirements? (Choose two.)
A. AWS Batch
B. Network Load Balancer
C. Application Load Balancer
D. Amazon EC2 Auto Scaling
E. Amazon S3 website hosting
Answer: CD
一家公司在其注册的父域下为多条业务线托管多个网站。访问这些网站的用户将根据子域被路由到相应的后端Amazon EC2实例。这些网站托管静态网页、图像以及PHP和JavaScript等服务器端脚本。
一些网站在营业的前两个小时内访问量达到峰值,在一天的其余时间内使用量保持稳定。解决方案架构师需要设计一个在保持低成本的同时自动根据这些流量模式调整容量的解决方案。
哪种AWS服务或功能组合可以满足这些要求?(选择两个。)
A.AWS Batch
B.网络负载均衡器
C.应用程序负载均衡器
D.Amazon EC2 Auto Scaling
E.Amazon S3网站托管

Explanation: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html https://medium.com/awesome-cloud/aws-difference-between-application-load-balancer-and-network-load-balancer-cb8b6cd296a4

S3 cannot handle server-side scripting like PHP.

“网站托管静态网页、图像和服务器端脚本,例如PHP和JavaScript。” E.(不正确)Amazon S3不支持服务器端脚本。A.(不正确)B.(不正确)NLB工作在第4层,不能确保应用程序的可用性:网络负载均衡器在检查可用性时不会区分应用程序A和应用程序B(实际上除非端口不同,否则它无法区分),而应用程序负载均衡器会通过检查应用层数据在两个应用程序之间进行区分。这意味着网络负载均衡器最终可能会向已崩溃或脱机的应用程序发送请求,而应用程序负载均衡器不会犯同样的错误。

QUESTION 305
A solutions architect is creating an application that will handle batch processing of large amounts
of data. The input data will be held in Amazon S3 and the output data will be stored in a different
S3 bucket. For processing, the application will transfer the data over the network between
multiple Amazon EC2 instances.
What should the solutions architect do to reduce the overall data transfer costs?
A. Place all the EC2 instances in an Auto Scaling group.
B. Place all the EC2 instances in the same AWS Region.
C. Place all the EC2 instances in the same Availability Zone.
D. Place all the EC2 instances in private subnets in multiple Availability Zones.
Answer: B
解决方案架构师正在创建一个将处理批处理大量数据的应用程序。 输入数据将保存在Amazon S3中,输出数据将存储在别的S3存储桶中。
为了进行处理,该应用程序将通过网络在多个Amazon EC2实例之间传输数据。
解决方案架构师应采取什么措施来降低总体数据传输成本?
A.将所有EC2实例放入Auto Scaling组。
B.将所有EC2实例放置在同一AWS区域中。
C.将所有EC2实例放置在同一可用区中。
D.将所有EC2实例放置在多个可用区中的专用子网中。

Explanation: There is no data transfer cost between EC2 and S3 within the same Region.

QUESTION 306
A company is hosting an election reporting website on AWS for users around the world. The
website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group
with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database.
The website is updated with election results once an hour and has historically observed hundreds
of users accessing the reports.
The company is expecting a significant increase in demand because of upcoming elections in
different countries. A solutions architect must improve the website's ability to handle additional
demand while minimizing the need for additional EC2 instances.
Which solution will meet these requirements?
A. Launch an Amazon ElastiCache cluster to cache common database queries.
B. Launch an Amazon CloudFront web distribution to cache commonly requested website content.
C. Enable disk-based caching on the EC2 instances to cache commonly requested website content.

D. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly
requested website content.
Answer: B
一家公司正在AWS上为全球用户托管选举报告网站。该网站使用带有应用程序负载平衡器的Auto Scaling组中的Web和应用程序层使用Amazon EC2实例。数据库层使用Amazon RDS for MySQL数据库。
该网站每小时都会更新一次选举结果,并且在历史上已经观察到数百名用户在访问报告。
由于不同国家即将举行的选举,该公司预计需求将大幅增加。解决方案架构师必须提高网站处理额外需求的能力,同时最大程度地减少对额外EC2实例的需求。
哪种解决方案可以满足这些要求?
A.启动Amazon ElastiCache集群以缓存常见的数据库查询。
B.启动Amazon CloudFront Web分发以缓存常用的网站内容。
C.在EC2实例上启用基于磁盘的缓存,以缓存通常请求的网站内容。
D.使用EC2实例将反向代理部署到设计中,并为常规启用缓存
要求的网站内容。
QUESTION 307
A company is running a three-tier web application to process credit card payments. The front-end
user interface consists of static webpages. The application tier can have long-running processes.
The database tier uses MySQL.
The application is currently running on a single, general purpose large Amazon EC2 instance. A
solutions architect needs to decouple the services to make the web application highly available.
Which solution would provide the HIGHEST availability?
A. Move static assets to Amazon CloudFront.
Leave the application in EC2 in an Auto Scaling group.
Move the database to Amazon RDS to deploy Multi-AZ.
B. Move static assets and the application into a medium EC2 instance.
Leave the database on the large instance.
Place both instances in an Auto Scaling group.
C. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency
limit set.
Move the database to Amazon DynamoDB with on-demand enabled.
D. Move static assets to Amazon S3.
Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto
Scaling enabled.
Move the database to Amazon RDS to deploy Multi-AZ.
Answer: D
一家公司正在运行一个三层Web应用程序来处理信用卡付款。前端用户界面由静态网页组成。应用程序层可以具有长时间运行的进程。
数据库层使用MySQL。
该应用程序当前在单个通用大型Amazon EC2实例上运行。解决方案架构师需要解耦服务以使Web应用程序高度可用。
哪种解决方案将提供最高的可用性?
A.将静态资产移动到Amazon CloudFront。
将应用程序保留在Auto Scaling组中的EC2中。
将数据库移至Amazon RDS以部署多可用区。
B.将静态资产和应用程序移到中等EC2实例中。
将数据库保留在大型实例上。
将两个实例都放在一个Auto Scaling组中。
C.将静态资产移动到Amazon S3,将应用程序移动到设置了并发限制的AWS Lambda。将数据库移至启用了按需模式的Amazon DynamoDB。
D.将静态资产移动到Amazon S3。
将应用程序移至启用了Auto Scaling的Amazon Elastic Container Service(Amazon ECS)容器。
将数据库移至Amazon RDS以部署多可用区。
QUESTION 308
A company operates an ecommerce website on Amazon EC2 instances behind an Application
Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues
related to a high request rate from illegitimate external systems with changing IP addresses. The
security team is worried about potential DDoS attacks against the website. The company must
block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
A. Deploy Amazon Inspector and associate it with the ALB,
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Answer: B
一家公司在Auto Scaling组中的Application Load Balancer(ALB)后面的Amazon EC2实例上运营一个电子商务网站。该网站遇到与IP地址不断变化的非法外部系统发出的高请求率相关的性能问题。
安全团队担心针对该网站的潜在DDoS攻击。公司必须以对合法用户影响最小的方式阻止非法传入请求。
解决方案架构师应该建议什么?
A.部署Amazon Inspector并将其与ALB关联,
B.部署AWS WAF,将其与ALB关联,然后配置速率限制规则。
C.将规则部署到与ALB关联的网络ACL以阻止传入流量。
D.部署Amazon GuardDuty并在配置GuardDuty时启用速率限制保护。

Explanation: Rate limit For a rate-based rule, enter the maximum number of requests to allow in any five-minute period from an IP address that matches the rule’s conditions. The rate limit must be at least 100.

You can specify a rate limit alone, or a rate limit and conditions. If you specify only a rate limit, AWS WAF places the limit on all IP addresses. If you specify a rate limit and conditions, AWS WAF places the limit on IP addresses that match the conditions. When an IP address reaches the rate limit threshold, AWS WAF applies the assigned action (block or count) as quickly as possible, usually within 30 seconds. Once the action is in place, if five minutes pass with no requests from the IP address, AWS WAF resets the counter to zero.
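下面是一个 AWS WAFv2 速率限制规则的 JSON 草稿(规则名与阈值 2000 均为示例),将其加入与 ALB 关联的 Web ACL 后,超过阈值的源 IP 会被暂时阻断:

```json
{
    "Name": "limit-requests-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,
            "AggregateKeyType": "IP"
        }
    },
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "limit-requests-per-ip"
    }
}
```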

QUESTION 309
A company is creating an architecture for a mobile app that requires minimal latency for its users.
The company's architecture consists of Amazon EC2 instances behind an Application Load
Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS.
Application beta testing showed there was a slowdown when reading the data. However, the
metrics indicate that the EC2 instances do not cross any CPU utilization thresholds.
How can this issue be addressed?
A. Reduce the threshold for CPU utilization in the Auto Scaling Group
B. Replace the Application Load Balancer with a Network Load Balancer
C. Add read replica for the RDS instances and direct read traffic to the replica
D. Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance
Answer: C
一家公司正在为移动应用程序创建一种架构,该架构需要为其用户提供最小的延迟。 
该公司的架构由在Auto Scaling组中运行、位于Application Load Balancer之后的Amazon EC2实例组成。
EC2实例连接到Amazon RDS。应用程序Beta测试表明，读取数据时速度变慢。但是，指标表明EC2实例未超过任何CPU使用率阈值。
如何解决这个问题?
A.降低Auto Scaling组中CPU利用率的阈值
B.用Network Load Balancer替换Application Load Balancer
C.为RDS实例添加只读副本，并将读取流量定向到该副本
D.向RDS实例添加多可用区支持，并将读取流量定向到新的EC2实例

说明数据读取是瓶颈,添加只读副本

QUESTION 310
A company is hosting its static website in an Amazon S3 bucket, which is the origin for Amazon
CloudFront. The company has users in the United States, Canada, and Europe and wants to
reduce costs.
What should a solutions architect recommend?
A. Adjust the CloudFront caching time to live (TTL) from the default to a longer timeframe
B. Implement CloudFront events with Lambda@edge to run the website's data processing
C. Modify the CloudFront price class to include only the locations of the countries that are served
D. Implement a CloudFront Secure Socket Layer (SSL) certificate to push security closer to the
locations of the countries that are served
Answer: C
一家公司将其静态网站托管在Amazon S3存储桶中，该存储桶是Amazon CloudFront的源站。该公司在美国、加拿大和欧洲都有用户，并且希望降低成本。
解决方案架构师应该建议什么?
A.将CloudFront缓存生存时间(TTL)从默认值调整为更长的时间范围
B.使用Lambda @ edge实施CloudFront事件以运行网站的数据处理
C.修改CloudFront价格类别以仅包括所服务国家/地区的位置
D.实施CloudFront安全套接字层（SSL）证书，将安全性推向更靠近所服务国家/地区的位置

将CloudFront的价格类别（Price Class）限制为仅覆盖所服务的区域，可以降低分发成本。

QUESTION 311
A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume.
A certain video file has become popular and a large number of users across the world are
accessing this content.
This has resulted in a cost increase.
Which action will DECREASE cost without compromising user accessibility?
A. Change the EBS volume to provisioned IOPS (PIOPS)
B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution
C. Split the video into multiple, smaller segments So users are routed to the requested video
segments only
D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the
nearest S3 bucket
Answer: B
一家媒体公司将视频内容存储在Amazon Elastic Block Store(Amazon EBS)卷中。
某些视频文件已变得很流行,世界各地的大量用户正在访问此内容。
这导致成本增加。
在不影响用户可访问性的情况下,减少费用的措施是什么?
A.将EBS卷更改为预配置IOPS(PIOPS)
B.将视频存储在Amazon S3存储桶中并创建和Amazon CloudFront发行版
C.将视频分成多个较小的段,因此用户仅被路由到请求的视频段
D.在每个区域中创建一个Amazon S3存储桶并上传视频,以便将用户路由到最近的S3存储桶
QUESTION 312
A company built a new VPC with the intention of hosting Amazon EC2 based workloads on
AWS. A solutions architect specified that an Amazon S3 gateway endpoint be created and
attached to this new VPC. Once the first application server is built, developers report that the server
times out when accessing data stored in the S3 bucket.
Which scenario could be causing this issue? (Select TWO)
A. The S3 bucket is in a region other than the VPC
B. The endpoint has a policy that blocks the CIDR of the VPC
C. The route to the S3 endpoint is not configured in the route table
D. The access is routed through an internet gateway rather than the endpoint
E. The S3 bucket has a bucket policy that does not allow access to the CIDR of the VPC
Answer: CE
一家公司构建了一个新的VPC,旨在在AWS上托管基于Amazon EC2的工作负载。 
解决方案架构师指定创建Amazon S3网关终端节点并将其附加到此新VPC。一旦构建了第一个应用程序服务器,
开发人员将在访问存储在S3存储桶中的数据时报告服务器超时。
哪种情况可能导致此问题? (选择两个)
A. S3存储桶位于VPC以外的区域
B.端点具有阻止VPC的CIDR的策略
C.到S3端点的路由未在路由表中配置
D.访问通过Internet网关而不是端点进行路由
E. S3存储桶的存储桶策略不允许访问VPC的CIDR
QUESTION 313
A solutions architect is designing a shared storage solution for an Auto Scaling web application.
The company anticipates making frequent changes to the content, so the solution must have
strong consistency.
Which solution requires the LEAST amount of effort?
A. Create an Amazon S3 bucket to store the web content and use Amazon Cloudfront to deliver the
content
B. Create an Amazon Elastic File system ( Amazon EFS ) file system and mount it on the individual
Amazon EC2 instance
C. Create a shared Amazon Elastic Block store (Amazon EBS) volume and mount it on the individual
Amazon EC2 instance
D. Use AWS DataSync to perform continuous synchronization of data between Amazon EC2 hosts in
the Auto scaling group.
Answer: B
解决方案架构师正在为Auto Scaling Web应用程序设计共享存储解决方案,
公司期望对内容进行频繁的更改,因此解决方案必须具有很强的一致性。
哪种解决方案需要最少的努力?
A.创建一个Amazon S3存储桶以存储Web内容,并使用Amazon Cloudfront交付内容
B.创建一个Amazon Elastic File System(Amazon EFS)文件系统并将其安装在单个Amazon EC2实例上
C.创建一个共享的Amazon Elastic Block Store(Amazon EBS)卷并将其安装在单个Amazon EC2实例上
D. 使用AWS DataSync在Auto Scaling组中的Amazon EC2主机之间执行数据的连续同步。

Amazon EFS 提供可供多个实例同时挂载的共享文件系统，具有强一致性，实现起来也最省力。

QUESTION 314
A solutions architect is creating an application that will handle batch processing of a large amount of
data. The input data will be held in Amazon S3 and the output data will be stored in a different S3
bucket. For processing the application will transfer the data over the network between multiple
Amazon EC2 instances.
What should the solution architect do to reduce the overall data transfer costs ?
A. Place all the EC2 instances in an Auto Scaling group
B. Place all the EC2 instances in the same AWS Region
C. Place all the EC2 instances in the same Availability Zone
D. Place all the EC2 instances in private subnets in multiple Availability zones
解决方案架构师创建一个将处理批处理大量数据的应用程序。 输入数据将保存在Amazon S3中,输出数据将存储在其他S3bucket中。 为了进行处理,应用程序将在多个Amazon EC2实例之间通过网络传输数据。
解决方案架构师应采取什么措施来降低总体数据传输成本?
A.将所有EC2实例放置在自动伸缩组中,
B.将所有EC2实例放置在同一AWS区域中
C.将所有EC2实例放置在同一可用区中
D.将所有EC2实例放置在多个可用区中的专用子网中

Answer: B

QUESTION 315
A company previously migrated its data warehouse solution to AWS. The company also has an
AWS Direct Connect connection. Corporate office users query the data warehouse using a
visualization tool. The average size of a query returned by the data warehouse is 50 MB and each
webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data
warehouse are not cached.
Which solution provides the LOWEST data transfer egress cost for the company?
A. Host the visualization tool on-premises and query the data warehouse directly over the internet.
B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the
internet,
C. Host the visualization tool on-premises and query the data warehouse directly over a Direct
Connect connection at a location in the same AWS Region.
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a
Direct Connect connection at a location in the same AWS Region.
Answer: D
一家公司以前将其数据仓库解决方案迁移到了AWS。 该公司还拥有一个AWS Direct Connect连接公司办公室用户,
可以使用可视化工具查询数据仓库。 数据仓库返回的查询的平均大小为50 MB,每个大小
可视化工具发送的网页大约为500 KB。 数据仓库返回的结果集不会被缓存,
哪种解决方案为公司提供了最低的数据传输成本?
A.在内部托管可视化工具,并直接通过Internet查询数据仓库。
B.将可视化工具托管在与数据仓库相同的AWS区域中。 通过互联网访问它,
C.在内部托管可视化工具,并通过Direct Connect连接直接在同一AWS区域中的某个位置查询数据仓库。
D.将可视化工具托管在与数据仓库相同的AWS区域中,并通过Direct Connect连接在同一AWS区域中的某个位置进行访问。

将可视化工具托管在与数据仓库相同的AWS区域中，并通过Direct Connect访问，数据传出（egress）成本最低。

QUESTION 316
A company provides an API to its users that automates inquiries for tax computations based on
item prices. The company experiences a large number of inquiries during the holiday season only,
which causes slower response times. A solutions architect needs to design a solution that is scalable
and elastic.
What should the solutions architect do to accomplish this?
A. Provide an API hosted on an Amazon EC2 instance.
The EC2 instance performs the required computations when the API request is made.
B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway
passes the item names to AWS Lambda for tax computations.
C. Create an Application Load Balancer that has two Amazon EC2 instances behind it.
The EC2 instances will compute the tax on the received item names.
D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon
EC2 instance, API Gateway accepts and passes the item names to the EC2 instance for tax
computations.
Answer: B
公司向其用户提供API，该API可根据商品价格自动进行税款查询。该公司仅在假期期间会遇到大量查询，导致响应时间变慢。
解决方案架构师需要设计一个可扩展且具有弹性的解决方案。
解决方案架构师应该怎么做才能做到这一点?
答:提供在Amazon EC2实例上托管的API。
发出API请求时,EC2实例执行所需的计算。
B.使用接受项目名称的Amazon API Gateway设计REST API，API Gateway将项目名称传递给AWS Lambda进行税金计算。
C.创建一个具有两个Amazon EC2实例后面的应用程序负载均衡器。
EC2实例将对收到的商品名称计算税额。
D.使用Amazon API Gateway设计REST API，连接到托管在Amazon EC2实例上的API。API Gateway接受商品名称并将其传递给EC2实例进行税金计算。

API Gateway没有最低使用成本,我们用多少服务内容就花费多少。

比如在最新的A Cloud Guru的serverless 会议上面提到了,他们整个网站都是基于API Gateway和Lambda的,并没有任何计算服务器(EC2,ECS等),永远不用担心性能和扩容的问题。并且他们每个月的花销只是580美金!

API Gateway和Lambda的结合可以构成无服务（Serverless）架构，后文的代码草图给出一个最小示例。

关于API Gateway,我们需要了解这些

  • 理解什么是API Gateway,它能用来做什么
  • API Gateway可以缓存内容,从而更快地将一些常用内容发送给用户
  • API Gateway是一种低成本的无服务(serverless)方案,而且它可以自动弹性伸缩(类似ELB,NAT网关)
  • 可以对API Gateway进行节流,以防止恶意攻击
  • 可以将API Gateway的日志放到CloudWatch中
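作为补充，下面给出一个最小的、示意性的Lambda处理函数草图（函数逻辑与返回内容均为假设），它可以配合API Gateway的Lambda代理集成使用，体现上文所说的无服务架构：

```python
import json

def handler(event, context):
    """API Gateway Lambda 代理集成的最小处理函数示例。

    event 中包含 HTTP 方法、路径、查询参数等；
    返回值必须带 statusCode 和字符串类型的 body。
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```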
QUESTION 317
A company uses a legacy on-premises analytics application that operates on gigabytes of .csv files and
represents months of data. The legacy application cannot handle the growing size of .csv files.
New .csv files are added daily from various data sources to a central on-premises storage location.
The company wants to continue to support the legacy application while users learn AWS
analytics services. To achieve this, a solutions architect wants to maintain two synchronized copies
of all the .csv files on-premises and in Amazon S3.
Which solution should the solution architect recommend?

A. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files
between the on-premises storage and the company's S3 bucket.
B. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file
gateway. Point the legacy analytics application to the file gateway.
The file gateway should replicate the .csv files to Amazon S3.
C. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume
gateway. Point the legacy analytics application to the volume gateway.
The volume gateway should replicate data to Amazon S3.
D. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files
between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from
Amazon EFS to the company's S3 bucket.
Answer: A
公司使用旧式本地分析应用程序，该应用程序处理数GB的.csv文件，代表数月的数据。旧版应用程序无法处理不断增长的.csv文件。
每天都会从各种数据源向中央本地存储位置添加新的.csv文件。
该公司希望在用户学习AWS分析服务时继续支持旧版应用程序。为了实现这一点，解决方案架构师希望在本地和Amazon S3中维护所有.csv文件的两份同步副本。
解决方案架构师应建议哪种解决方案？

A.在本地部署AWS DataSync。配置DataSync以在本地存储和公司的S3存储桶之间连续复制.csv文件。
B.部署本地文件网关，配置数据源以将.csv文件写入文件网关，将旧版分析应用程序指向文件网关。
文件网关应将.csv文件复制到Amazon S3。
C.部署本地卷网关。配置数据源，将.csv文件写入卷网关。将旧版分析应用程序指向卷网关。
卷网关应将数据复制到Amazon S3。
D.在本地部署AWS DataSync。配置DataSync以在本地和Amazon Elastic File System（Amazon EFS）之间连续复制.csv文件，并启用从Amazon EFS到公司S3存储桶的复制。
QUESTION 318
Management has decided to deploy all AWS VPCs with IPv6 enabled. After some time, a
solutions architect tries to launch a new instance and receives an error stating that there is not
enough IP address space available in the subnet.
What should the solutions architect do to fix this?
A. Check to make sure that only IPv6 was used during the VPC creation
B. Create a new IPv4 subnet with a larger range, and then launch the instance
C. Create a new IPv6-only subnet with a larger range, and then launch the instance
D. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the
instance.
Answer: B
管理层已决定部署所有启用了lPv6的AWS VPC。 一段时间后,解决方案架构师尝试启动一个新实例,并收到一条错误消息,指出子网中没有足够的IP地址空间。
解决方案架构师应该怎么做才能解决此问题?
A.检查以确保在VPC创建过程中仅使用了IPv6
B.创建一个更大范围的新IPv4子网，然后启动实例
C.创建一个具有更大范围的仅IPv6的新子网，然后启动实例
D.禁用IPv4子网，并将所有实例仅迁移到IPv6。 完成后，启动实例。
QUESTION 319
A company is developing a new machine learning model solution in AWS. The models are
developed as independent microservices that fetch about 1 GB of model data from Amazon S3 at
startup and load the data into memory. Users access the models through an asynchronous API.
Users can send a request or a batch of requests and specify where the result should be sent. The
company provides models to hundreds of users. The usage patterns for the models are irregular.
Some models could be unused for days or weeks. Other models could receive batches of
thousands of requests at a time.
Which solution meets these requirements?
A. The requests from the API are sent to an Application Load Balancer (ALB). Models are
deployed as AWS Lambda functions invoked by the ALB.
B. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS)
queue.
Models are deployed as AWS Lambda functions triggered by SQS events.
AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS
queue size.
C. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS)
queue.
Models are deployed as an Amazon Elastic Container Service (Amazon ECS) service reading from
the queue.
AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size.

D. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS)
queue.
Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from
the queue.
AWS Auto Scaling is enabled on ECS for both the cluster and the service copies, based on the queue
size.
Answer: D
一家公司正在AWS中开发新的机器学习模型解决方案。这些模型是作为独立的微服务开发的,可在启动时从Amazon S3提取约1 GB的模型数据,
并将数据加载到内存中。用户通过异步API访问模型。
用户可以发送一个请求或一批请求,并指定将结果发送到何处。公司提供了数百个用户的模型。模型的使用模式是不规则的。
有些型号可能会在几天或几周内不使用。其他模型可以一次接收成千上万的请求。
哪种解决方案满足这些要求?
A.来自API的请求被发送到应用程序负载平衡器(ALB)。模型被部署为ALB调用的AWS lambda函数
B.来自API的请求被发送到模型的Amazon Simple Queue Service（Amazon SQS）队列。
模型被部署为由SQS事件触发的AWS Lambda函数。
在Lambda上启用了AWS自动缩放功能，以根据SQS队列大小增加vCPU的数量。
C.来自API的请求被发送到模型的Amazon简单队列服务(Amazon SQS)队列。
模型被部署为从队列读取的Amazon Elastic Container Service(AMAzon ECS)服务。
AWS App Mesh根据SQS队列大小扩展ECS集群的实例。
D.来自API的请求被发送到模型的Amazon简单队列服务(Amazon SQS)队列。
模型被部署为从队列读取的Amazon Elastics容器服务(Amazon ECS)服务。
在ECS上为群集和服务副本数都启用了AWS Auto Scaling，并根据队列大小进行扩缩。
QUESTION 320
A company has a mobile game that reads most of its metadata from an Amazon RDS DB
instance. As the game increased in popularity, developers noticed slowdowns related to the
game's metadata load times. Performance metrics indicate that simply scaling the database will
not help. A solutions architect must explore all options that include capabilities for snapshots,
replication, and sub-millisecond response times.
What should the solutions architect recommend to solve the issues?
A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache for Memcached layer in front of the database.
Answer: C
一家公司拥有一个移动游戏,该游戏从Amazon RDS数据库实例读取其大部分元数据。 
随着游戏受欢迎程度的提高,开发人员注意到与游戏元数据加载时间有关的速度降低。 
性能指标表明仅扩展数据库将无济于事。 解决方案架构师必须探索所有选项,包括快照,复制和亚毫秒级响应时间的功能。
解决方案架构师应建议什么来解决问题?
A.使用Aurora副本将数据库迁移到Amazon Aurora。
B.使用全局表将数据库迁移到Amazon DynamoDB。
C.在数据库前面添加一个Amazon ElastiCache for Redis层。
D.在数据库前面添加一个Amazon ElastiCache for Memcached层。
QUESTION 321
A company runs an application that uses multiple Amazon EC2 instances to gather data from its
users. The data is then processed and transferred to Amazon S3 for long-term storage. A review
of the application shows that there were long periods of time when the EC2 instances were not
being used. A solution architect needs to design a solution that optimizes utilization and reduces
costs.
Which solution meets these requirements?
A. Use Amazon EC2 in an Auto Scaling group with On-Demand instances.
B. Build the application to use Amazon Lightsail with On-Demand instances.
C. Create an Amazon CloudWatch cron job to automatically stop the EC2 instance when there is no
activity.
D. Redesign the application to use an event-driven design with Amazon Simple Queue Service
(Amazon SQS) and AWS Lambda.
Answer: D
一家公司运行一个使用多个Amazon EC2实例从其用户收集数据的应用程序。然后，数据将被处理并传输到Amazon S3进行长期存储。
对该应用程序的审查表明，有很长时间没有使用EC2实例。解决方案架构师需要设计一种解决方案，以优化利用率并降低
成本。哪种解决方案满足这些要求?
A.在按需实例的Auto Scaling组中使用Amazon EC2。
B.构建应用程序以将Amazon Lightsail与按需实例一起使用。
C.创建一个Amazon CloudWatch cron作业以在没有活动时自动停止EC2实例。
D.重新设计应用程序,以将事件驱动的设计与Amazon Simple Queue Service(Amazon SQS)和AWS Lambda一起使用。
QUESTION 322
A solutions architect is designing a VPC with public and private subnets. The VPC and subnets
use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three
Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access
for the public subnets. The private subnets require access to the internet to allow Amazon EC2
instances to download software updates.
What should the solutions architect do to enable internet access for the private subnets?
A. Create three NAT gateways, one for each public subnet in each AZ.
Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its
AZ.
B. Create three NAT gateways, one for each private subnet in each AZ.
Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its
AZ.
C. Create second internet gateway on one of the private subnets.
Update the route table for the private subnets that forwards non-VPC traffic to the private internet
gateway.
D. Create an egress-only internet gateway on one of the public subnets.
Update the route table for the private subnets that forward non-VPC traffic to the egress-only
internet gateway.
Answer: A
解决方案架构师正在设计具有公共和私有子网的VPC。 VPC和子网使用IP 4 CIDR块。
三个可用性区域(AZ)中的每个都有一个公共子网和一个私有子网,以实现高可用性
。 Internet网关用于为公共子网提供Internet访问。专用子网需要访问互联网以允许Amazon EC2
实例以下载软件更新。
解决方案架构师应该怎么做才能为私有子网启用Intrnet访问?
A.创建三个NAT网关,每个AZ中的每个公共子网一个。
为每个将非VPC流量转发到其AZ中的NAT网关的AZ创建一个专用路由表。
B.创建三个NAT网关,每个AZ中的每个专用子网一个。
为每个将非VPC流量转发到其AZ中的NAT网关的AZ创建一个专用路由表。
C.在一个专用子网中创建第二个Internet网关。
更新将非VPC流量转发到专用Internt网关的专用子网的路由表。
D.在一个公共子网中创建仅出口互联网网关,为将非VPC流量转发到仅出口互联网网关的专用子网更新路由表,
QUESTION 323
A solutions architect needs to design a network that will allow multiple Amazon EC2 instances to
access a common data source used for mission-critical data that can be accessed by all the EC2
instances simultaneously. The solution must be highly scalable, easy to implement, and support
the NFS protocol.
Which solution meets these requirements?
A. Create an Amazon EFS file system.
Configure a mount target in each Availability Zone.
Attach each instance to the appropriate mount target.
B. Create an additional EC2 instance and configure it as a file server.
Create a security group that allows communication between the instances and apply that to the
additional instance.
C. Create an Amazon S3 bucket with the appropriate permissions.
Create a role in AWS IAM that grants the correct permissions to the S3 bucket.
Attach the role to the EC2 instances that need access to the data.
D. Create an Amazon EBS volume with the appropriate permissions.
Create a role in AWS IAM that grants the correct permissions to the EBS volume.
Attach the role to the EC2 instances that need access to the data.
Answer: A

解决方案架构师需要设计一个网络,该网络将允许多个Amazon EC2实例访问用于关键任务数据的通用数据源,所有EC2实例都可以同时访问该数据源。该解决方案必须具有高度的可扩展性,易于实现并支持NFS协议。
哪种解决方案满足这些要求?
A.创建一个Amazon EFS文件系统。
在每个可用区中配置安装目标。
将每个实例附加到适当的安装目标。
B.创建另一个EC2实例,并将其配置为文件服务器。
创建允许实例之间进行通信的安全组,并将其应用于其他实例。
C.创建具有适当权限的Amazon S3存储桶。
在AWS IAM中创建一个角色,该角色向S3存储桶授予正确的权限。
将角色附加到需要访问数据的EC2实例。
D.创建具有适当权限的Amazon EBS卷。
在AWS IAM中创建一个角色,该角色向EBS卷授予正确的权限。然后将该角色分配给需要访问数据的EC2实例。
QUESTION 324
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto
Scaling group. An Amazon RDS for Oracle instance is the application's data layer that uses
Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is
causing the EC2 instances to become overloaded and the RDS instance to run out of storage.
The Auto Scaling group does not have any scaling metrics and defines the minimum healthy
instance count only. The company predicts that traffic will continue to increase at a steady but
unpredictable rate before leveling off.
What should a solution architect do to ensure the system can automatically scale for the
increased traffic? (Select TWO.)
A. Configure storage auto scaling on the RDS for Oracle instance.

B. Migrate the database to Amazon Aurora to use Auto Scaling storage.
C. Configure an alarm on the RDS for Oracle instance for low free storage space,
D. Configure the Auto Scaling group to use the average CPU as the scaling metric.
E. Configure the Auto Scaling group to use the average free memory as the scaling metric.
Answer: AD
公司在Auto Scaling组中的多个Amazon EC2实例上部署了多层应用程序。 Amazon RDS for Oracle实例是使用特定于Oracle的PUISQL函数的应用程序数据层,到该应用程序的流量一直在稳定增长,这导致EC2实例过载,并且RDS实例用尽了存储空间。
Auto Scaling组没有任何扩展指标,仅定义了最小运行状况实例数。该公司预测,流量趋于平稳之前将继续以稳定但不可预测的速度增长。
解决方案架构师应采取什么措施来确保系统可以自动扩展以适应
流量增加了吗? (选择两个。)
A.在RDS for Oracle实例上配置存储自动扩展。
B.将数据库迁移到Amazon Aurora以使用Auto Scaling存储。
C.在RDS for Oracle实例上配置警报,指出可用存储空间不足,
D.将Auto Scaling组配置为使用平均CPU作为缩放指标。
E.将Auto Scaling组配置为使用平均可用内存作为缩放指标。
QUESTION 325
A company is preparing to launch a public-facing web application in the AWS Cloud. The
architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer
(ELB). A third-party service is used for the DNS. The company's solutions architect must
recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?
A. Enable Amazon GuardDuty on the account
B. Enable Amazon Inspector on the EC2 instances
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.
Answer: D
一家公司正准备在AWS云中启动面向公众的Web应用程序。 该架构由位于弹性负载均衡器(ELB)后面的VPC中的Amazon EC2实例组成。 DNS使用第三方服务,公司的解决方案架构师必须
建议一种解决方案,以检测和防御大规模DDoS攻击,
哪种解决方案满足这些要求?
A.在账户上启用Amazon Guard Duty
B.在EC2实例上启用Amazon Inspector
C.启用AWS Shield并为其分配Amazon Route 53。
D.启用AWS Shield Advancd并将ELB分配给它。
答案:D
QUESTION 326
A company has a 10 Gbps AWS Direct Connect connection from its on-premises servers to AWS.
The workloads using the connection are critical. The company requires a disaster recovery strategy
with maximum resiliency that maintains the current connection bandwidth at a minimum.
What should a solutions architect recommend?
A. Set up a new Direct Connect connection in another AWS Region.
B. Set up a new AWS managed VPN connection in another AWS Region.
C. Set up two new Direct Connect connections, one in the current AWS Region and one in another
Region.
D. Set up two new AWS managed VPN connections, one in the current AWS Region and one in
another Region.
Answer: A
公司拥有从本地服务器到AWS的10 Gbps AWS Direct Connect连接，使用该连接的工作负载至关重要。该公司需要一个具有最大弹性的灾难恢复策略，并且至少要保持当前的连接带宽。解决方案架构师应该建议什么？
A.在另一个AWS区域中建立新的Direct Connect连接。
B.在另一个AWS区域中建立新的AWS托管VPN连接。
C.在当前的AWS区域中建立两个新的Direct Connect连接,在另一个地区中建立一个,
D.在当前AWS区域中设置两个新的AWS托管VPN连接,在另一个区域中设置一个。
QUESTION 327
A company stores user data in AWS. The data is used continuously with peak usage during
business hours. Access patterns vary, with some data not being used for months at a time. A
solutions architect must choose a cost-effective solution that maintains the highest level of
durability while maintaining high availability.
Which storage solution meets these requirements?
A. Amazon S3
B. Amazon S3 Intelligent Tiering
C. Amazon S3 Glacier Deep Archive
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Answer: B
一家公司在AWS中存储用户数据。数据会被持续使用，并在工作时间达到使用高峰。访问模式各不相同，有些数据可能连续几个月不被使用。解决方案架构师必须选择一种经济高效的解决方案，在保持最高持久性的同时保持高可用性。
哪种存储解决方案满足这些要求？
A.Amazon S3
B.Amazon S3智能分层
C.Amazon S3 Glacier Deep Archive
D.Amazon S3单区不频繁访问（S3 One Zone-IA）
QUESTION 328
A company has no existing file share services. A new project requires access to file storage that
is mountable as a drive for on-premises desktops. The file server must authenticate users to an
Active Directory domain before they are able to access the storage.
Which service will allow Active Directory users to mount storage as a drive on their desktops?
A. AWS S3 Glacier
B. AWS DataSync
C. AWS Snowball Edge
D. AWS Storage Gateway
Answer: D
公司没有现有的文件共享服务。一个新项目需要访问可以作为驱动器挂载到本地桌面的文件存储。文件服务器必须先对用户进行
Active Directory域身份验证，用户才能访问存储。
哪种服务允许Active Directory用户将存储作为驱动器挂载到桌面上？
A.AWS S3 Glacier
B.AWS DataSync
C.AWS Snowball Edge
D.AWS Storage Gateway

Explanation: https://docs.aws.amazon.com/storagegateway/latest/userguide/CreatingAnSMBFileShare.html

AWS DataSync使您可以轻松地通过网络在本地存储和AWS存储服务之间传输数据。DataSync自动执行数据传输过程和高性能,安全数据传输所需的基础结构的管理。DataSync还包括加密和完整性验证,因此您的数据可以安全,完整地传输并可以使用。所有这些都最大限度地减少了快速,可靠和安全的传输所需的内部开发和管理。

QUESTION 329
A company is planning to migrate a legacy application to AWS. The application currently uses
NFS to communicate to an on-premises storage solution to store application data. The application
cannot be modified to use any other communication protocols other than NFS for this purpose.
Which storage solution should a solutions architect recommend for use after the migrations?
A. AWS DataSync
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Elastic File System (Amazon EFS)
D. Amazon EMR File System (Amazon EMRFS)
Answer: C
一家公司计划将旧版应用程序迁移到AWS。 该应用程序当前使用NFS与本地存储解决方案进行通信以存储应用程序数据。 
为此,不能将应用程序修改为使用NFS以外的任何其他通信协议。
解决方案架构师应建议在迁移后使用哪种存储解决方案?
A.AWS DataSync
B.Amazon Elastic Block Store(Amazon EBS)
C.Amazon弹性文件系统(Amazon EFS)
D.Amazon EMR文件系统(Amazon EMRFS)

Explanation: https://aws.amazon.com/efs/

QUESTION 330
A company has a dynamic web application hosted on two Amazon EC2 instances. The company
has its own SSL certificate, which is installed on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL
encryption and decryption is causing the compute capacity of the web servers to reach their
maximum limit.
What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM).
Install the ACM certificate on each instance.
B. Create an Amazon S3 bucket Migrate the SSL certificate to the S3 bucket.
Configure the EC2 instances to reference the bucket for SSL termination.
C. Create another EC2 instance as a proxy server.
Migrate the SSL certificate to the new instance and configure it to direct connections to the existing
EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM).
Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from
ACM.
Answer: D
一家公司在两个Amazon EC2实例上托管了一个动态Web应用程序。该公司拥有自己的SSL证书,该证书在每个实例上执行SSL终止。
最近流量有所增加，运营团队确定SSL加密和解密正在导致Web服务器的计算能力达到其最大限制。
解决方案架构师应该怎么做才能提高应用程序的性能?
A.使用AWS Certificate Manager(ACM)创建新的SSL证书。在每个实例上安装ACM证书。
B.创建一个Amazon S3存储桶将SSL证书迁移到S3存储桶。配置EC2实例以引用存储桶以终止SSL。
C.创建另一个EC2实例作为代理服务器。将SSL证书迁移到新实例,并将其配置为将连接定向到现有EC2实例。
D.将SSL证书导入AWS Certificate Manager（ACM）。
使用使用来自ACM的SSL证书的HTTPS侦听器创建应用程序负载平衡器。
QUESTION 331
A solutions architect is designing a security solution for a company that wants to provide
developers with individual AWS accounts through AWS Organizations, while also maintaining
standard security controls. Because the individual developers will have AWS account root user-
level access to their own account, the solutions architect wants to ensure that the mandatory
AWS CloudTrail configuration that is applied to new developer accounts is not modified.
Which action meets these requirements?
A. Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
B. Create a new trail in CloudTrail from within the developer accounts with the organization trails
option enabled.
C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the
developer accounts.
D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from
an Amazon Resource Name (ARN) in the master account.
Answer: C
解决方案架构师正在为一家公司设计安全解决方案,该公司希望通过AWS Organizations为开发人员提供单独的AWS账户,
同时还要保持标准的安全控制。 由于各个开发人员将对他们自己的帐户具有AWS帐户根用户级别的访问权限,因此解决方案架构师希望确保不会修改应用于新开发人员帐户的AWSA CloudTrail强制配置。
哪些动作符合这些要求?
A.创建一个禁止更改CloudTrail的IAM策略,并将其附加到root用户。
B.在开发人员帐户中启用组织跟踪（organization trails）选项，并在CloudTrail中创建新的跟踪。
C.创建一个禁止对CloudTrail进行更改的服务控制策略（SCP），并将其附加到开发人员帐户。
D.使用策略条件为CloudTrail创建服务链接角色,该策略条件仅允许从主帐户中的Amazon资源名称(ARN)进行更改。

服务控制策略（SCP）对组织中所有账户可用的最大权限提供集中控制，可确保各账户遵循组织的访问控制准则。

仅有SCP并不足以授予对组织中账户的访问权限。附加到AWS Organizations实体（根、OU或账户）的SCP只是定义了委托人可以执行哪些操作的护栏。您仍然需要在组织账户中为委托人或资源附加基于身份或基于资源的策略，才能实际授予权限。

参照Organization部分。
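下面是一个示意性的草图（策略名与账户ID均为占位值），展示可以如何用SCP拒绝对CloudTrail的修改，并通过AWS Organizations API把它附加到开发人员账户：

```python
import json
import boto3

org = boto3.client("organizations")

# 拒绝停止、删除或修改 CloudTrail 跟踪的 SCP（示例策略）
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailChanges",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",   # 假设的策略名
    Description="Prevent developers from modifying the mandatory CloudTrail configuration",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# 附加到某个开发人员账户（或包含这些账户的 OU），账户 ID 为占位值
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```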

QUESTION 332
A company is building a media sharing application and decides to use Amazon S3 for storage.
When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify
objects in the image, transcode videos into standard formats and resolutions, and extract and
store the metadata in an Amazon DynamoDB table. The metadata is used for searching and
navigation. The amount of traffic is variable. The solution must be able to scale to handle spikes in
load without unnecessary expenses.
What should a solution architect recommend to support this workload?
A. Build the processing into the website or mobile app used to upload the content to Amazon S3.
Save the required data to the DynamoDB table when the objects are uploaded.
B. Trigger an AWS Lambda function when an object is stored in the S3 bucket.
Have Step Functions perform the steps needed to process the object and then write the
metadata to the DynamoDB table.
C. Trigger an AWS Lambda function when an object is stored in the S3 bucket.
Have the Lambda function start AWS Batch to perform the steps to process the object. Place the
object data in the DynamoDB table when complete.
D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object
is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling
group to poll the index for unprocessed items, and use the program to perform the processing.
Answer: C
一家公司正在构建媒体共享应用程序,并决定使用Amazon S3进行存储。
上载媒体文件后,公司开始执行多个步骤,以创建缩略图,识别图像中的对象,将视频转码为标准格式和分辨率,以及提取和
将元数据存储到Amazon DynamoDB表中,该元数据用于搜索和导航,流量量是可变的。解决方案必须能够在不产生不必要费用的情况下扩展负载峰值,
解决方案架构师应建议什么来支持此工作负载?
A.将处理过程内置到用于将内容上传到Amazon S3的网站或移动应用程序中,然后在上传对象后将所需数据保存到DynamDB表中
B.当对象存储在S3存储桶中时,触发一个AWS Lambda函数。
让步骤函数执行处理对象所需的步骤,然后将元数据写入DynamoDB表。
C.当对象存储在S3存储桶中时,触发AWS Lambda函数。
让Lambda函数启动AWS批处理以执行处理对象的步骤。完成后,将对象数据放在DynamoDB表中。
D.当将对象上传到Amzon S3时,触发一个AWS Lambda函数在DynamoDB表中存储一个初始条目,使用在Auto Scaling组中的Amazon EC2实例上运行的程序来轮询索引中是否有未处理的项目,然后使用该程序执行处理。
QUESTION 333
A company is preparing to migrate its on-premises application to AWS. The application consists of application servers and a Microsoft SQL Server database. The database cannot be migrated to a different engine because SQL Server features are used in the application's .NET code. The company wants to attain the greatest availability possible while minimizing operational and management overhead.
What should a solutions architect do to accomplish this?
A. Install SQL Server on Amazon EC2 in a Multi-AZ deployment.
B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment.
C. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas.
D. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.
Answer: B

一家公司正准备将其本地应用程序迁移到AWS。该应用程序由应用程序服务器和Microsoft SQL Server数据库组成。无法将数据库迁移到其他引擎，因为在应用程序的.NET代码中使用了SQL Server功能。该公司希望获得最大的可用性，同时最大程度地减少运营和管理开销。
解决方案架构师应该怎么做才能做到这一点?
A.在多可用区部署中的Amazon C2上安装SQL Server。
B.在多可用区部署中,将数据迁移到SQL Server的Amazon RDS。
C.在具有多可用区副本的SQL Server的Amazon RDS上部署数据库。
D,在跨区域多可用区部署中将数据迁移到用于SQL Server的Amazon RDS
QUESTION 334
A company is using a Site-to-Site VPN connection for secure connectivity to its AWS cloud resources
from on-premises. Due to an increase in traffic across the VPN connections to the Amazon EC2
instances, users are experiencing slower VPN connectivity.
Which solution will improve the VPN throughput?
A. Implement multiple customer gateways for the same network to scale the throughput.
B. Use a Transit Gateway with equal cost multipath routing and add additional VPN tunnels.
C. Configure a virtual gateway with equal cost multipath routing and multiple channels.
D. Increase the number of tunnels in the VPN configuration to scale the throughput beyond the
default limit.
Answer: B

一家公司正在使用Site-to-Site VPN连接从本地安全地连接到其AWS云资源。由于与Amazon EC2实例的VPN连接之间的流量增加，用户的VPN连接速度变慢，
哪种解决方案将提高VPN吞吐量？
A.为同一网络实现多个客户网关以扩展吞吐量
B.将Transit Gateway与等价多路径（ECMP）路由一起使用，并添加额外的VPN隧道。
C.用等价的多路径路由和多个通道配置一个虚拟网关。
D.增加VPN配置中的隧道数量,以将吞吐量扩展到超出
默认限制。

AWS Transit Gateway通过中央集线器连接VPC和本地网络。这简化了您的网络,并结束了复杂的对等关系。它充当云路由器–每个新连接仅建立一次。 当您进行全球扩展时,区域间对等使用AWS全球网络将AWS Transit网关连接在一起。您的数据将自动加密,并且永远不会通过公共互联网传输。而且,由于其居中地位,AWS Transit Gateway Network Manager在整个网络上都具有独特的视图,甚至可以连接到软件定义的广域网(SD-WAN)设备。
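下面是一个示意性草图（ASN、客户网关IP等均为假设值），展示创建启用了ECMP的Transit Gateway，并在其上增加VPN连接以叠加隧道带宽的大致做法：

```python
import boto3

ec2 = boto3.client("ec2")

# 创建启用 ECMP（等价多路径）支持的 Transit Gateway
tgw = ec2.create_transit_gateway(
    Description="hybrid-connectivity",
    Options={
        "AmazonSideAsn": 64512,          # 示例 ASN
        "VpnEcmpSupport": "enable",      # 允许多条 VPN 隧道做等价多路径
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# 为本地网络创建客户网关（公网 IP 为占位值）
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,
    PublicIp="203.0.113.10",
    Type="ipsec.1",
)

# 在 Transit Gateway 上创建 VPN 连接；重复创建多条即可叠加吞吐量
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    TransitGatewayId=tgw_id,
    Options={"StaticRoutesOnly": False},   # 使用 BGP 动态路由以配合 ECMP
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```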

QUESTION 335
A mobile gaming company runs application servers on Amazon EC2 instances. The servers receive
updates from players every 15 minutes. The mobile game creates a JSON object of the progress
made in the game since the last update, and sends the JSON object to an Application Load Balancer.
As the mobile game is played, game updates are being lost. The company wants to create a
durable way to get the updates in order.
What should a solutions architect recommend to decouple the system?
A. Use Amazon Kinesis Data Streams to capture the data and store the JSON object in Amazon S3.
B. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3.
C. Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to capture the data and EC2
instances to process the messages in the queue.
D. Use Amazon Simple Notification Service (Amazon SNS) to capture the data and EC2 instances to
process the messages sent to the Application Load Balancer.
Answer: C
一家移动游戏公司在Amazon EC2实例上运行应用程序服务器。 服务器每15分钟从玩家那里收到一次更新。 该移动游戏会创建一个自上次更新以来在游戏中取得的进展的JSON对象，然后将该JSON对象发送给Application Load Balancer。
在玩手机游戏时,游戏更新丢失了。 该公司希望创建一种持久的方式来按顺序获取更新。
解决方案架构师应建议采取什么措施来解耦系统?
A.使用Amazon Kinesis Data Streams捕获数据并将JSON对象存储在Amazon S3中。
B.使用Amazon Kinesis Data Firehose捕获数据并将JSON对象存储在Amazon S3中
C.使用Amazon Simple Queue Service（Amazon SQS）FIFO队列来捕获数据，并使用EC2实例来处理队列中的消息。
D.使用Amazon简单通知服务(Amazon SNS)捕获数据,并使用EC2实例处理发送到应用程序负载均衡器的消息。

QUESTION 336

A recently created startup built a three-tier web application. The front end has static content. The
application layer is based on microservices. User data is stored as JSON documents that need
to be accessed with low latency. The company expects regular traffic to be low during the first year,
with peaks in traffic when it publicizes new features every month. The startup team needs to
minimize operational overhead costs.
What should a solutions architect recommend to accomplish this?
A. Use Amazon S3 static website hosting to store and serve the front end.
Use AWS Elastic Beanstalk for the application layer.
Use Amazon DynamoDB to store user data.
B. Use Amazon S3 static website hosting to store and serve the front end.
Use Amazon Elastic Kubernetes Service (Amazon EKS) for the application layer.
Use Amazon DynamoDB to store user data.
C. Use Amazon S3 static website hosting to store and serve the front end .
Use Amazon API Gateway and Lambda functions for application layer.
Use Amazon DynamoDB to store user data.
D. Use Amazon S3 static website hosting to store and serve the front end.
Use Amazon API Gateway and Lambda functions for application layer.
Use Amazon RDS with read replica to store user data.
Answer: C
最近创建的一家初创公司构建了一个三层Web应用程序。前端具有静态内容。应用层基于微服务。
用户数据存储为JSON文档,需要以低延迟进行访问。该公司预计第一年的常规流量会很低,每个月发布新功能时流量会达到峰值。启动团队需要将运营开销成本降至最低。
解决方案架构师应该推荐什么来实现这一目标?
A.使用Amazon S3静态网站托管来存储和服务前端。
将AWS Elastic Beanstalk用于应用程序层。
使用Amazon DynamoDB存储用户数据。
B.使用Amazon S3静态网站托管来存储和服务前端。
将Amazon Elastic Kubernetes Service（Amazon EKS）用于应用程序层。
使用Amazon DynamoDB存储用户数据。
C.使用Amazon S3静态网站托管来存储和服务前端。
将Amazon API Gateway和Lambda函数用于应用程序层。
使用Amazon DynamoDB存储用户数据。
D.使用Amazon S3静态网站托管来存储和服务前端。
将Amazon API Gateway和Lambda函数用于应用程序层。
将Amazon RDS与只读副本一起使用以存储用户数据。

QUESTION 337

A company needs to comply with a regulatory requirement that states all emails must be stored and
archived externally for 7 years. An administrator has created compressed email files on-
premises and wants a managed service to transfer the files to AWS storage.
Which managed service should a solutions architect recommend?
A. Amazon Elastic File System (Amazon EFS),
B. Amazon S3 Glacier,
C. AWS Backup.
D. AWS Storage Gateway,
Answer: C
公司需要遵守一项法规要求,该要求规定所有电子邮件必须在外部存储和存档7年。 管理员已在本地创建了压缩的电子邮件文件,并希望通过托管服务将文件传输到AWS存储。
解决方案架构师应该推荐哪种托管服务?
A. Amazon弹性文件系统(Amazon EFS),
B.Amazon S3 Glacier,
C. AWS备份。
D.AWS Storage Gateway,

AWS Backup是一项完全托管的备份服务，可促进跨AWS服务的集中式和自动化数据备份。通过AWS Backup，您可以集中设置备份策略并监视AWS资源（例如Amazon EBS卷、Amazon EC2实例、Amazon RDS数据库、Amazon DynamoDB表、Amazon EFS文件系统和AWS Storage Gateway卷）的备份活动。AWS Backup集成并自动执行了以前按服务执行的备份任务，从而无需创建自定义脚本或手动流程。只需在AWS Backup控制台中单击几下进行设置，您就可以创建自动执行备份计划和保留管理的备份策略。AWS Backup提供了一个基于策略的完全托管的备份解决方案，可以简化备份管理，并使您能够满足业务和法规的备份合规性要求。
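下面是一个示意性草图（计划名、保管库名和cron表达式均为假设值），展示如何用AWS Backup创建一个保留约7年（按2555天计）的集中备份计划：

```python
import boto3

backup = boto3.client("backup")

# 创建备份计划：每天备份一次，保留约 7 年
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "email-archive-7y",          # 假设的计划名
        "Rules": [{
            "RuleName": "daily-7-year-retention",
            "TargetBackupVaultName": "Default",        # 假设使用默认保管库
            "ScheduleExpression": "cron(0 3 * * ? *)", # 每天 03:00 UTC
            "Lifecycle": {"DeleteAfterDays": 2555},    # 约 7 年后删除
        }],
    },
)
print(plan["BackupPlanId"])
```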

QUESTION 338

A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete.
The workload frequently experiences high latency due to a large amount of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.
Which combination of steps should the solutions architect take? (Select TWO)
A. Use Amazon Kinesis Data Firehose to ingest the data.
B. Use AWS Lambda with AWS Step Functions to process the data.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Answer: AB
 一家公司的近实时流应用程序正在AWS上运行。提取数据后，将对数据运行一个作业，需要30分钟才能完成。
  由于大量传入数据，工作负载经常遇到高延迟。解决方案架构师需要设计可扩展的无服务器解决方案以提高性能。
  解决方案架构师应采取哪些步骤组合? (选择两个)
  A.使用Amazon Kinesis Data Firehose提取数据。
  B.将AWS Lambda与AWS Step Fucntions结合使用来处理数据。
  C.使用AWS Database Migration Service(AWS DMS)提取数据。
  D.在Auto Scaling组中使用Amazon EC2实例来处理数据。
  E.结合使用AWS Fargate和Amazon Elastic Container Service(Amazon ECS)来处理数据

AWS Step Functions是一个无服务器功能协调器,可让您轻松地在关键业务应用程序中安排AWS Lambda函数和多个AWS服务。您可以创建并运行一系列经过检查点,事件驱动的工作流,这些工作流通过可视界面使应用程序保持最新状态。一步的结果用作下一步的输入。应用程序中的每个步骤都是根据用户定义的业务逻辑顺序执行的。

将各个无服务器应用程序一起协调,管理重试和修复故障可能很麻烦。交付的应用程序越复杂,它们的管理就越复杂。Step Functions自动管理错误处理,重试逻辑和状态。通过内置的操作控制,Step Functions可以管理阵列,错误处理,重试逻辑和状态,从而减少了大量工作量。
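下面是一个示意性草图（状态机名、Lambda ARN与角色ARN均为占位值），展示用Amazon States Language定义一个简单的数据处理工作流并创建状态机的大致方式：

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# 一个最小的状态机定义：先处理数据，再把结果写入存储
definition = {
    "Comment": "Near-real-time data processing sketch",
    "StartAt": "ProcessChunk",
    "States": {
        "ProcessChunk": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessChunk",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "StoreResult",
        },
        "StoreResult": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:StoreResult",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="stream-processing-sketch",   # 假设的名称
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # 占位角色
)
```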

QUESTION 339
A company is planning to transfer multiple terabytes of data to AWS. The data is
collected offline from ships. The company wants to run complex transformations before
transferring the data.
Which AWS service should a solutions architect recommend for this migration?
A. AWS Snowball.
B. AWS Snowmobile.
C. AWS Snowball Edge Storage Optimized.
D. AWS Snowball Edge Compute Optimized.
Answer: D
一家公司计划将多个TB的数据传输到AWS。数据是从船上离线收集的。该公司希望在传输数据之前进行复杂的转换。
解决方案架构师应为此迁移推荐哪种AWS服务?
A. AWS Snowball。
B. AWS Snowmobile。
C. AWS Snowball Edge存储优化型。
D. AWS Snowball Edge计算优化型。

Snowball和Snowball Edge是两个不同的设备。本指南适用于Snowball。有关Snowball Edge文档,请参阅《AWS Snowball Edge开发人员指南》。两种设备都可以与Amazon S3交换大量数据。两者都具有相同的作业管理API,并且都具有相同的控制台使用。但是,这两种设备之间的硬件规格,某些功能,使用的传输工具和费用有所不同。

AWS Snowball用例的差异

下表显示了每个AWS Snowball设备的不同用例。

| 用例 | Snowball | Snowball Edge |
| --- | --- | --- |
| 将数据导入Amazon S3 | ✓ | ✓ |
| 从Amazon S3导出 | ✓ | ✓ |
| 耐用的本地存储 |  | ✓ |
| AWS Lambda上的本地计算 |  | ✓ |
| Amazon EC2计算实例 |  | ✓ |
| 在设备集群中使用 |  | ✓ |
| 与AWS IoT Greengrass（IoT）一起使用 |  | ✓ |
| 通过NFS以GUI方式传输文件 |  | ✓ |
QUESTION 340
A company maintains a searchable repository of items on its website. The data is stored in an
Amazon RDS for MySQL database table that contains over 10 million rows. The database has 2
TB of General Purpose SSD (gp2) storage. There are millions of updates against this data every
day through the company's website. The company has noticed some operations are taking 10
seconds or longer, and has determined that the database storage performance is the bottleneck.
Which solution addresses the performance issues?
A. Change the storage type to Provisioned IOPS SSD (io1).
B. Change the instance to a memory-optimized instance class.
C. Change the instance to a burstable performance DB instance class.
D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
Answer: A
公司在其网站上维护可搜索的项目存储库。 数据存储在包含超过1000万行的Amazon RDS for MySQL数据库表中。 
该数据库具有2 TB的通用SSD(gp2)存储。 每天通过公司网站都会有数百万次针对此数据的更新。 
该公司已注意到某些操作需要10秒钟或更长时间,并且已确定数据库存储性能是瓶颈。
哪种解决方案可以解决性能问题?
A.将存储类型更改为Provisioned IOPS SSD（io1）。
B.将实例更改为内存优化的实例类。
C.将实例更改为可突增性能的数据库实例类。
D.使用MySQL原生异步复制启用Multi-AZ RDS只读副本。
QUESTION 341
A company has a hybrid application hosted on multiple on-premises servers with static IP
addresses. There is already a VPN that provides connectivity between the VPC and the on-
premises network. The company wants to distribute TCP traffic across the on-premises servers
for internet users.
What should a solutions architect recommend to provide a highly available and scalable solution?
A. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses
with the NLB.
B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP
addresses with the ALB.
C. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-
premises servers.
D. Launch Amazon EC2 instances with public IP addresses in an Auto Scaling group and distribute
traffic to the on-premises servers.
Answer: A
一家公司的混合应用程序托管在具有静态IP地址的多个本地服务器上。 已经存在一个VPN,可以在VPC和内部网络之间提供连接。 该公司希望在本地服务器上为Internet用户分配TCP流量。
解决方案架构师应建议什么以提供高度可用且可扩展的解决方案?
A.启动一个面向互联网的网络负载平衡器(NLB),并在NLB中注册本地IP地址。
B.启动一个面向互联网的应用程序负载平衡器(ALB),并向ALB注册本地IP地址。
C.启动一个Amazon EC2实例,附加一个弹性IP地址,并将流量分配到本地服务器。
D.在Auto Scaling组中启动具有公有IP地址的Amazon EC2实例，并将流量分发到本地服务器。
QUESTION 342
A company has an application that generates a large number of files, each approximately 5 MB in
size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4
years before they can be deleted. Immediate accessibility is always required as the files contain
critical business data that is not easy to reproduce. The files are frequently accessed in the first
30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from
object creation.
Delete the files 4 years after the object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent
Access (S3 One Zone-lA) 30 days from object creation.
Delete the files 4 years after the object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-lnfrequent
Access (S3 Standard-lA) 30 days from object creation.
Delete the files 4 years after the object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-lnfrequent
Access (S3 Standard-lA) 30 days from object creation.
Move the file to S3 Glacier 4 years after object creation.
Answer: C
一家公司有一个会生成大量文件的应用程序，每个文件的大小约为5 MB。文件存储在Amazon S3中。
公司政策要求文件必须保存4年,然后才能删除。始终需要立即可访问性,因为文件包含不容易复制的关键业务数据。在创建对象的前30天中经常访问文件,但在前30天后很少访问文件。
哪种存储解决方案最符合成本效益?
A.创建一个S3存储桶生命周期策略,以在对象创建30天后将文件从S3 Standard迁移到S3 Glacier。
创建对象4年后删除文件。
B.创建一个S3存储桶生命周期策略,以在对象创建后30天之内将文件从S3标准移动到S3一区不频繁访问(S3 One Zone-IA)。
创建对象4年后删除文件。
C.创建一个S3存储桶生命周期策略，在对象创建30天后将文件从S3 Standard移到S3标准不频繁访问（S3 Standard-IA）。
创建对象4年后删除文件。
D.创建一个S3存储桶生命周期策略，在对象创建30天后将文件从S3 Standard移到S3标准不频繁访问（S3 Standard-IA）。
创建对象4年后,将文件移动到S3 Glacier。
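作为补充，下面是与选项C对应的生命周期配置的示意性草图（存储桶名为占位值，4年按1460天计）：

```python
import boto3

s3 = boto3.client("s3")

# 30 天后转入 S3 Standard-IA，创建 4 年（约 1460 天）后删除
s3.put_bucket_lifecycle_configuration(
    Bucket="my-critical-files-bucket",      # 占位存储桶名
    LifecycleConfiguration={
        "Rules": [{
            "ID": "standard-ia-then-expire",
            "Filter": {"Prefix": ""},       # 作用于整个存储桶
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 1460},
        }],
    },
)
```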
QUESTION 343
An online shopping application accesses an Amazon RDS Multi-AZ DB instance. Database
performance is slowing down the application. After upgrading to the next generation instance
type, there was no significant performance improvement.
Analysis shows approximately 700 IOPS are sustained, common queries run for long durations,
and memory utilization is high.
Which application change should a solution architect recommend to resolve these issue?
A. Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection.

B. Separate the long-running queries into a new Multi-AZ RDS database and modify the application
to query whichever database is needed.
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cache
first and the database only if needed.
D. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and
query it first and query the database only if needed

Answer: C
在线购物应用程序访问Amazon RDS Multi-AZ数据库实例。 数据库性能正在减慢应用程序的速度。 升级到下一代实例类型后,性能没有明显改善。
分析表明,大约700 IOPS是可持续的,常见查询可以长时间运行,并且内存利用率很高。
解决方案架构师应建议哪种应用程序更改来解决这些问题?
A.将RDS实例迁移到Amazon Redshift集群并启用每周垃圾收集。
B.将长时间运行的查询分离到新的Multi-AZ RDS数据库中，并修改应用程序以按需查询相应的数据库。
C.部署一个双节点的Amazon ElastiCache集群，并修改应用程序先查询缓存，仅在需要时再查询数据库。
D.为常见查询创建Amazon Simple Queue Service(Amazon SQS)FIFO队列并先查询它,然后仅在需要时查询数据库
QUESTION 344
A company hosts its web application on AWS using several Amazon EC2 instances. The company
requires that the IP addresses of all healthy EC2 instances be returned in response to DNS
queries.
Which policy should be used to meet this requirement?
A. Simple routing policy.
B. Latency routing policy.
C. Multivalue routing policy.

D. Geolocation routing policy.
Answer: C
一家公司使用多个Amazon EC2实例在AWS上托管其Web应用程序。该公司要求在响应DNS查询时返回所有运行状况良好的EC2实例的IP地址。
应该使用哪个策略来满足此要求?
A.简单的路由策略。
B.延迟路由策略。
C.多值路由策略。

D.地理位置路由策略。
答案:C

使用多值应答路由策略可帮助跨多个资源分发 DNS 响应。例如,在需要将路由记录与 Route 53 运行状况检查关联时,使用多值应答路由。例如,在需要为 DNS 查询返回多个值并将流量路由到多个 IP 地址时,使用多值应答路由。
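下面是一个示意性草图（托管区ID、域名、IP和运行状况检查ID均为占位值），展示如何为多值应答路由创建带运行状况检查的记录：

```python
import boto3

route53 = boto3.client("route53")

# 每个健康的 EC2 实例对应一条多值应答记录，并关联各自的运行状况检查
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",                     # 占位托管区 ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "web-server-1",        # 每条记录需唯一标识
                "MultiValueAnswer": True,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.11"}],
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # 占位
            },
        }],
    },
)
```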

QUESTION 345
As part of budget planning, management wants a report of AWS billed items listed by user. The
data will be used to create department budgets, A solutions architect needs to determine the
most effective way to obtain this report information.
Which solution meets these requirements?
A. Run a query with Amazon Athena to generate the report.
B. Create a report in Cost Explorer and download the report.
C. Access the bill details from the billing dashboard and download the bill.
D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
Answer: B
作为预算计划的一部分,管理层希望获得一份由用户列出的AWS计费项目的报告。 数据将用于创建部门预算。解决方案架构师需要确定获取此报告信息的最有效方法。
哪种解决方案满足这些要求?
A.使用Amazon Athena运行查询以生成报告。
B.在Cost Explorer中创建报告并下载报告,
C.从账单（Billing）控制面板访问账单详细信息并下载账单。
D.修改AWS预算中的成本预算以通过Amazon Simple Email Service(Amazon SES)发出警报。
QUESTION 346
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the
data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys
must be rotated every year.
Which solution meets these requirements and is the MOST operationally effecient?
A. Server-side encryption with customer-provided keys (SSE-C)
B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual
rotation,
D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic
rotation.
Answer: D
一家公司正在准备将机密数据存储在Amazon S3中。出于合规性原因，必须对静态数据进行加密。必须记录加密密钥的使用情况以进行审核。
密钥必须每年轮换一次。
哪种解决方案符合这些要求,并且在运营上最有效?
A.使用客户提供的密钥的服务器端加密(SSE-C)
B.使用Amazon S3托管密钥(SSE-S3)进行服务器端加密
C.使用手动旋转的AWS KMS(SSE-KMS)客户主密钥(CMK)进行服务器端加密,
D.使用自动旋转的AWS KMS(SSE-KMS)客户主密钥(CMK)进行服务器端加密。
QUESTION 347
A company has 700 TB of backup data stored in network attached storage (NAS) in its data
center. This backup data needs to be accessible for infrequent regulatory requests and must be
retained for 7 years. The company has decided to migrate this backup data from its data center to
AWS. The migration must be complete within 1 month. The company has 500 Mbps of
dedicated bandwidth on its public internet connection available for data transfer.
What should a solutions architect do to migrate and store the data at the LOWEST cost?
A. Order AWS Snowball devices to transfer the data.
Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B. Deploy a VPN connection between the data center and Amazon VPC.
Use the AWS CLI to copy the data from on-premises to Amazon S3 Glacier.
C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3.
Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
D. Use AWS DataSync to transfer the data and deploy a DataSync agent on-premises.
Use the DataSync task to copy files from the on-premises NAS Storage to Amazon S3 Glacier,

Answer: A
一家公司在其数据中心的网络连接存储(NAS)中存储了700 TB的备份数据。对于很少的法规要求,必须可以访问此备份数据,并且必须
保留了7年。该公司已决定将此备份数据从其数据中心迁移到AWS。迁移必须在1个月内完成。该公司在其公共Internet连接上具有500 Mbps的专用带宽,可用于数据传输。
解决方案架构师应该怎么做才能以最低的成本迁移和存储数据?
A.订购AWS Snowball设备以传输数据。
使用生命周期策略将文件过渡到Amazon S3 Glacier Deep Archive。
B.在数据中心和Amazon VPC之间部署VPN连接。
使用AWS CLI将数据从本地复制到Amazon S3 Glacier。
C.提供500 Mbps的AWS Direct Connect连接并将数据传输到Amazon S3。
使用生命周期策略将文件传输到Amazon S3 Glacier Deep Archive。
D.使用AWS DataSync传输数据并在本地部署DataSync代理。
使用DataSync任务将文件从本地NAS存储复制到Amazon S3 Glacier。

说明：700 TB的数据通过500 Mbps的专用带宽传输大约需要130天，无法在1个月内完成，因此应使用Snowball离线迁移，再通过生命周期策略将文件转入S3 Glacier Deep Archive以降低存储成本。
QUESTION 348
A company wants to migrate its MySQL database from on-premises to AWS. The company
recently experienced a database outage that significantly impacted the business. To ensure this
does not happen again, the company wants a reliable database solution on AWS that minimizes
data loss and stores every transaction on at least two nodes. 
Which solution meets these requirements?
A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three
Availability Zones.
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously
replicate the data.
C. Create an Amazon RDS MySQL DB instance with Multi-AZ and the create a read replica in a
separate AWS Region that synchronously replicates the data.
D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda
function to synchronously replicate the data to an Amazon RDS MySQL DB instance.
Answer: B
一家公司希望将其MySQL数据库从本地迁移到AWS。该公司最近经历了数据库中断,这对业务产生了重大影响。
为了确保不再发生这种情况,该公司希望在AWS上使用可靠的数据库解决方案,
以最大程度地减少数据丢失并将每笔交易存储在至少两个节点上。
哪种解决方案满足这些要求?
A.创建一个Amazon RDS数据库实例,并将其同步复制到三个可用区中的三个节点。
B.创建一个具有多可用区功能的Amazon RDS MySQL数据库实例,以同步复制数据。
C.使用Multi-AZ创建一个Amazon RDS MySQL数据库实例,并在一个单独的AWS区域中创建一个只读副本,以同步复制数据。
D.创建一个安装了MySQL引擎的Amazon EC2实例，该实例会触发AWS Lambda函数，将数据同步复制到Amazon RDS MySQL数据库实例。

QUESTION 349
An application running on an Amazon EC2 instance needs to securely access files on an Amazon
Elastic File System (Amazon EFS) file system. The EFS files are stored using encryption at rest.
Which solution for accessing the files is MOST secure?
A. Enable TLS when mounting Amazon EFS.
B. Store the encryption key in the code of the application.
C. Enable AWS Key Management Service (AWS KMS) when mounting Amazon EFS.
D. Store the encryption key in an Amazon S3 bucket and use IAM roles to grant the EC2 instance
access permission.
Answer: A
在Amazon EC2实例上运行的应用程序需要安全地访问Amazon Elastic File System(Amazon EFS)文件系统上的文件。
EFS文件是使用静态加密存储的。
哪种访问文件的方法最安全？
A.挂载Amazon EFS时启用TLS。
B.将加密密钥存储在应用程序的代码中。
C.挂载Amazon EFS时启用AWS Key Management Service（AWS KMS）。
D.将加密密钥存储在Amazon S3存储桶中,并使用IAM角色来授予EC2实例访问权限。
QUESTION 350
An ecommerce website is deploying its web application as Amazon Elastic Container Service
(Amazon ECS) container instance behind an Application Load Balancer (ALB). During periods of
high activity, the website slows down and availability is reduced. A solutions architect uses
Amazon CloudWatch alarms to receive notifications whenever there is an availability issue so
they can scale out resources. Company management wants a solution that automatically responds
to such events.
Which solution meets these requirements?
A. Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set
up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too
high.
B. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high.
Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is

too high.
C. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high.
Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is
too high.
D. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization
is too high, Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory
reservation is too high.
Answer: C

一家电子商务网站正在将其Web应用程序部署为Application Load Balancer(ALB)之后的Amazon Elastic Container Service(Amazon ECS)容器实例。
在活跃期间,网站会变慢,可用性会降低。解决方案架构师会在存在可用性问题时使用Amazon CloudWatch警报来接收通知,
以便他们可以扩展资源。公司管理层需要一种能够自动响应此类事件的解决方案。
哪种解决方案满足这些要求?
A.设置AWS Auto Scaling以在ALB上存在超时时扩展ECS服务。设置AWS Auto Scaling以在CPU或内存预留过高时扩展ECS集群。
B.设置AWS Auto Scaling以在ALB CPU利用率过高时扩展ECS服务。
设置AWS Auto Scaling以在CPU或内存预留过高时扩展ECS集群。
C.设置AWS Auto Scaling以在服务的CPU使用率过高时扩展ECS服务。
设置AWS Auto Scaling以在CPU或内存预留过高时扩展ECS集群。
D.设置AWS Auto Scaling以在ALB目标组CPU利用率过高时扩展ECS服务,设置AWS扩展,以在CPU或内存预留过高时扩展ECS群集。
QUESTION 351
A company is reviewing a recent migration of a three-tier application to a VPC. The security team
discovers that the principle of least privilege is not being applied to Amazon EC2 security group
ingress and egress rules between the application tiers.
What should a solutions architect do to correct this issue?
A. Create security group rules using the instance ID as the source or destination.
B. Create security group rules using the security group ID as the source or destination.
C. Create security group rules using the VPC CIDR block as the source or destination.
D. Create security group rules using the subnet CIDR block as the source or destination.
Answer: B
一家公司正在审查最近将三层应用程序迁移到VPC的情况。 安全团队发现最低特权原则未应用于Amazon EC2安全组
应用程序层之间的入口和出口规则。
解决方案架构师应该怎么做才能解决此问题?
A.使用实例ID作为源或目标创建安全组规则。
B.使用安全组ID作为源或目标创建安全组规则。
C.使用VPC CIDR块作为源或目标创建安全组规则。
D.使用子网CIDR块作为源或目标创建安全组规则。
QUESTION 352
A company is developing a video conversion application hosted on AWS. The application will be available in two tiers: a free tier and a paid tier. Users in the paid tier will have their videos converted first, and then the free tier users will have their videos converted.
Which solution meets these requirements and is MOST cost-effective?
A. One FIFO queue for the paid tier and one standard queue for the free tier.
B. A single FIFO Amazon Simple Queue Service (Amazon SQS) queue for all file types.
C. A single standard Amazon Simple Queue Service (Amazon SQS) queue for all file types.
D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.
Answer: A
一家公司正在开发托管在AWS上的视频转换应用程序。 该应用程序可用于以下各层:免费层和付费层。 
付费层的用户将首先对其视频进行转换,然后免费层的用户将对其视频进行转换。
哪种解决方案符合这些要求,并且最具成本效益?
A.付费层有一个FIFO队列,免费层有一个标准队列
B.针对所有文件类型的单个FIFO Amazon Simple Queue Service(Amazon SQS)队列。
C.适用于所有文件类型的单个标准Amazon Simple Queue Service(Amazon SQS)队列。
D.两个标准的Amazon Simple Queue Service(Amazon SQS)队列,其中一个用于付费层,一个用于免费层
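
Whichever queue types are chosen, the paid tier is prioritized simply by draining its queue before the free-tier queue. A minimal boto3 sketch of such a worker (the queue URLs are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/paid-tier-videos"  # hypothetical
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/free-tier-videos"  # hypothetical

def next_job():
    """Return (queue_url, message) for the highest-priority pending job, or None."""
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None

job = next_job()
if job:
    queue_url, message = job
    # ... convert the video referenced in message["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```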
QUESTION 353
A company is building a website that relies on reading and writing to an Amazon DynamoDB
database. The traffic associated with the website predictably peaks during business hours on
weekdays and declines overnight and during weekends. A solutions architect needs to design a
cost-effective solution that can handle the load.
What should the solutions architect do to meet these requirements?
A. Enable DynamoDB Accelerator (DAX) to cache the data.
B. Enable Multi-AZ replication for the DynamoDB database,
C. Enable DynamoDB auto scaling when creating the tables.
D. Enable DynamoDB On-Demand capacity allocation when creating the tables.
Answer: C
一家公司正在建立一个网站,该网站依赖于对Amazon DynamoDB数据库的读写。 与网站相关的流量可预测在工作日的工作时间内达到峰值,
而在一夜之间和周末则下降。解决方案架构师需要设计一种经济高效的解决方案来处理负载,
解决方案架构师应怎么做才能满足这些要求?
A.启用DynamoDB加速器(DAX)缓存数据。
B.为DynamoDB数据库启用多可用区复制,
C.在创建表时启用DynamoDB自动缩放。
D.在创建表时启用DynamoDB按需容量分配。

从今天开始,当您创建一个新的DynamoDB表时,它将启用Auto Scaling,它将随着动态更改请求量自动扩展读写吞吐量。
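
For tables not created through the console, auto scaling can be attached explicitly via Application Auto Scaling. A hedged boto3 sketch for the read dimension (the table name and capacity limits are hypothetical; the write dimension is configured the same way):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (5-100 RCUs).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Target-tracking policy: keep consumed read capacity near 70% utilization.
autoscaling.put_scaling_policy(
    PolicyName="OrdersReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```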

QUESTION 354
A company is preparing to deploy a data lake on AWS. A solutions architect must define the
encryption strategy for data at rest in Amazon S3. The company's security policy states:
· Keys must be rotated every 90 days.
· Strict separation of duties between key users and key administrators must be implemented.
· Auditing key usage must be possible.
What should the solutions architect recommend?
A. Server-side encryption with AWS KMS managed keys (SSE-KMS) with customer managed
customer master keys (CMKs).
B. Server-side encryption with AWS KMS managed keys (SSE-KMS) with AWS managed customer
master keys (CMKs).
C. Server-side encryption with Amazon S3 managed keys (SSE-S3) with customer managed
customer master keys (CMKs).
D. Server-side encryption with Amazon S3 managed keys (SSE-S3) with AWS managed customer
master keys (CMKs).
Answer: A
一家公司正在准备在AWS上部署数据湖。 解决方案架构师必须定义
Amazon S3中静态数据的加密策略。公司的安全政策规定:
·密钥必须每90天轮换一次。
·必须严格分离密钥用户与密钥管理员之间的职责。
·必须能够审核密钥的使用情况。
解决方案架构师应该建议什么?
A.使用AWS KMS托管密钥(SSE-KMS)和客户托管的客户主密钥(CMK)进行服务器端加密。
B.使用带有AWS托管客户主密钥(CMKS)的AWS KMS托管密钥(SSE-KMS)进行服务器端加密。
C.使用带有客户管理的客户主密钥(CMKS)的Amazon S3托管密钥(SSE-S3)进行服务器端加密。
D.使用Amazon S3托管密钥(SSE-S3)和AWS托管客户主密钥(CMK)进行服务器端加密。

Explanation: SSE-KMS要求AWS管理数据密钥,但您需要管理AWS KMS中的客户主密钥(CMK)。您可以在账户中选择客户托管的CMK或适用于Amazon S3的AWS托管的CMK。客户管理的CMK是您在AWS账户中创建,拥有和管理的CMK。您可以完全控制这些CMK,包括建立和维护其关键策略,IAM策略和授权,启用和禁用它们,旋转其加密材料,添加标签,创建引用CMK的别名以及安排CMK删除。对于这种情况,解决方案架构师应将SSE-KMS与客户管理的CMK结合使用。这样,KMS将管理数据密钥,但是公司可以配置密钥策略,定义谁可以访问密钥

Data lakes built on AWS primarily use two types of encryption: Server-side encryption (SSE) and client-side encryption. SSE provides data-at-rest encryption 在AWS上构建的数据湖主要使用两种加密类型:服务器端加密(SSE)和客户端加密。 SSE提供静态数据加密
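
A minimal boto3 sketch of the SSE-KMS approach with a customer managed CMK (the bucket name is hypothetical; note that enable_key_rotation rotates key material yearly, so a strict 90-day schedule would instead be met by creating new keys and re-pointing an alias):

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed CMK; its key policy (not shown) separates key administrators
# from key users, and CloudTrail records every use of the key for auditing.
key = kms.create_key(Description="Data lake at-rest encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# AWS-managed automatic rotation is yearly; a 90-day policy would require
# issuing new keys on that schedule instead.
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption with SSE-KMS using the customer managed CMK.
s3.put_bucket_encryption(
    Bucket="example-data-lake-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)
```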

QUESTION 355
A company has an on-premises application that generates a large amount of time-sensitive data
that is backed up to Amazon S3. The application has grown and there are user complaints about
internet bandwidth limitations. A solutions architect needs to design a long-term solution that
allows for both timely backups to Amazon S3 and minimal impact on internet connectivity for
internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new
connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the
devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3
service limits from the account.
Answer: B
一家公司拥有一个本地应用程序,该应用程序会生成大量对时间敏感的数据,这些数据将备份到Amazon S3。 
该应用程序已经增长,并且用户抱怨互联网带宽限制。
解决方案架构师需要设计一个长期解决方案,该解决方案既要及时备份到Amazon S3,又要尽量减少对内部用户的互联网连接的影响。
哪种解决方案满足这些要求?
A.建立AWS VPN连接并通过VPC网关终端节点代理所有流量
B.建立一个新的AWS Direct Connect连接,并通过该新连接引导备份流量。
C.每天订购AWS Snowball设备,每天将数据加载到Snowball设备上,然后每天将设备返回给AWS。
D.通过AWS管理控制台提交支持凭单请求从帐户中删除S3服务限制。
QUESTION 356
A company uses Amazon Redshift for its data warehouse. The company wants to ensure high
durability for its data in case of any component failure.
What should a solutions architect recommend?

A. Enable concurrency scaling.
B. Enable cross-Region snapshots.
C. Increase the data retention period.
D. Deploy Amazon Redshift in Multi-AZ.
Answer: B
一家公司将Amazon Redshift用于其数据仓库。 该公司希望确保其数据的高耐久性,以防万一任何组件出现故障。
解决方案架构师应该建议什么?
A.启用并发缩放。
B.启用跨区域快照。
C.增加数据保留期限。
D.在多可用区中部署Amazon Redshift。
QUESTION 357
A company is migrating a Linux-based web server group to AWS. The web servers must access
files in a shared file store for some content. To meet the migration date, minimal changes can be
made.
What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 Standard bucket with access to the web server.
B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin.
C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.
D. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and
mount them on all web servers.
Answer: C
一家公司正在将基于Linux的Web服务器组迁移到AWS。Web服务器必须访问共享文件存储中的文件以获取某些内容。为了赶上迁移日期,只能进行最少的更改。
解决方案架构师应该怎么做才能满足这些要求?
A.创建一个可以访问Web服务器的Amazon S3 Standard存储桶。
B.配置一个以Amazon S3存储桶为源的Amazon CloudFront分发。
C.创建一个Amazon Elastic File System(Amazon EFS)卷并将其安装在所有Web服务器上。
D.配置Amazon Elastic Block Store(Amazon EBS)预置的IOPS SSD(io1)卷,并将其安装在所有Web服务器上。
QUESTION 358
A solutions architect is planning the deployment of a new static website. The solution must
minimize costs and provide at least 99% availability.
Which solution meets these requirements?
A. Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled.
B. Deploy the application to Amazon EC2 instances that run in two AWS Regions and two
Availability Zones.
C. Deploy the application to an Amazon S3 bucket that has versioning and cross-Region replication
enabled.
D. Deploy the application to an Amazon EC2 instance that runs in one AWS Region and one
Availability Zone.
Answer: A C?
解决方案架构师正在计划部署新的静态网站。 该解决方案必须最小化成本,并提供至少99%的可用性。
哪种解决方案满足这些要求?
A.将应用程序部署到一个禁用版本控制的AWS区域中的Amazon S3存储桶。
B.将应用程序部署到在两个AWS区域和两个可用区中运行的Amazon EC2实例。
C.将应用程序部署到已启用版本控制和跨区域复制的Amazon S3存储桶。
D.将应用程序部署到在一个AWS区域和一个可用区中运行的Amazon EC2实例。
QUESTION 359
A company hosts an online shopping application that stores all orders in an Amazon RDS for
PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and
has asked a solutions architect to recommend an approach to minimize database downtime
without requiring any changes to the application code.
Which solution meets these requirements?
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database
instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment.
Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the
snapshot.

C. Create a read-only replica of the PostgreSQL database in another Availability Zone.
Use Amazon Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum
group size of two.
Use Amazon Route 53 weighted record sets to distribute requests across instances.
Answer: A
一家公司托管着一个在线购物应用程序,该应用程序将所有订单存储在Amazon RDS for PostgreSQL Single-AZ数据库实例中。
管理层希望消除单点故障,并已要求解决方案架构师推荐一种在不需更改应用程序代码的情况下最大程度地减少数据库停机时间的方法。
哪种解决方案满足这些要求?
A.通过修改数据库实例并指定Multi-AZ选项,将现有数据库实例转换为Multi-AZ部署。
B.创建一个新的RDS多可用区部署。
拍摄当前RDS实例的快照,并使用快照还原新的多可用区部署。
C.在另一个可用区中创建PostgreSQL数据库的只读副本。
使用Amazon Route 53加权记录集在数据库之间分配请求。
D.将RDS for PostgreSQL数据库放置在Amazon EC2 Auto Scaling组中,组的最小大小为2。
使用Amazon Route 53加权记录集在实例之间分配请求。
QUESTION 360
A company is deploying an application in three AWS Regions using an Application Load
Balancer. Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-
performing experience?
A. Create an A record with a latency policy,
B. Create an A record with a geolocation policy
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.
Answer: A
一家公司正在使用Application Load Balancer在三个AWS区域中部署应用程序。 Amazon Route 53将用于在这些区域之间分配流量。
解决方案架构师应使用哪种Route 53配置来提供MOST高性能体验?
A.创建带有延迟策略的A记录,
B.使用地理位置策略创建A记录
C.使用故障转移策略创建CNAME记录。
D.使用地理邻近策略创建CNAME记录。
QUESTION 361
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded,
the files are processed to extract metadata, which takes less than 5 seconds. The volume and
frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads.
The company has asked a solutions architect to design a cost-effective architecture that will meet
these requirements.
What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls.
Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda
function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3.
Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files
uploaded to Amazon S3.
Invoke an AWS Lambda function to process the files.
Answer: B C?
公司托管用于将文件上传到Amazon S3存储桶的应用程序。 上载后,将对文件进行处理以提取元数据,所需时间不到5秒。
上传的数量和频率从每小时几个文件到数百个并发上传不等。
该公司已要求解决方案架构师设计一种符合这些要求的经济高效的体系结构。
解决方案架构师应该建议什么?
A.配置AWS CloudTrail跟踪以记录S3 API调用。
使用AWS AppSync来处理文件。
B.在S3存储桶中配置一个对象创建的事件通知,以调用AWS Lambda函数来处理文件。
C.配置Amazon Kinesis Data Streams以处理数据并将数据发送到Amazon S3。调用AWS Lambda函数以处理文件。
D.配置一个Amazon Simple Notification Service(Amazon SNS)主题以处理上传到Amazon S3的文件。
调用AWS Lambda函数来处理文件。
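
A hedged boto3 sketch of answer B: an object-created event notification on the bucket that invokes a Lambda function (the bucket name and function ARN are hypothetical, and the function must separately grant s3.amazonaws.com permission to invoke it):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="example-upload-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            # Hypothetical Lambda function that extracts metadata from each object.
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```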
QUESTION 362
A company has data stored in an on-premises data center that is used by several on-premises
applications. The company wants to maintain its existing application environment and be able to
use AWS services for data analytics and future visualizations.
Which storage service should a solutions architect recommend?
A. Amazon Redshift.

B. AWS Storage Gateway for files.
C. Amazon Elastic Block Store (Amazon EBS).
D. Amazon Elastic File System (Amazon EFS).
Answer:B A?
公司将数据存储在本地数据中心中,供多个本地应用程序使用。 该公司希望维护其现有的应用程序环境,并能够将AWS服务用于数据分析和未来的可视化。
解决方案架构师应建议哪种存储服务?
A. Amazon Redshift。
B.用于文件的AWS Storage Gateway。
C. Amazon Elastic Block Store(Amazon EBS)。
D. Amazon弹性文件系统(Amazon EFS)。

如之前的课程中所说,Redshift是一种**联机分析处理OLAP(Online Analytics Processing)**的类型,支持复杂的分析操作,侧重决策支持,并且能提供直观易懂的查询结果

QUESTION 363
A company is developing a mobile game that streams score updates to a backend processor and
then posts results on a leaderboard. A solutions architect needs to design a solution that can
handle large traffic spikes, process the mobile game updates in order of receipt, and store the
processed updates in a highly available database. The company also wants to minimize the
management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams.
Process the updates in Kinesis Data Streams with AWS Lambda.
Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams.
Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling,
Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic,
Subscribe an AWS Lambda function to the SNS topic to process the updates.
Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue.
Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS
queue.
Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Answer: A
一家公司正在开发一种移动游戏,该游戏将分数更新流式传输到后端处理器,然后将结果发布在排行榜上。
解决方案架构师需要设计一种解决方案,该解决方案可以处理大量流量高峰,按接收顺序处理移动游戏更新,
并将处理后的更新存储在高度可用的数据库中。该公司还希望最小化维护该解决方案所需的管理开销。
解决方案架构师应怎么做才能满足这些要求?
A.将分数更新到Amazon Kinesis Data Streams。
使用AWS Lambda处理Kinesis Data Streams中的更新。
将已处理的更新存储在Amazon DynamoDB中。
B.将分数更新推送到Amazon Kinesis Data Streams。
使用为Auto Scaling设置的Amazon EC2实例队列处理更新,并将处理后的更新存储在Amazon Redshift中。
C.将分数更新推送到Amazon Simple Notification Service(Amazon SNS)主题,
将AWS Lambda函数订阅到SNS主题以处理更新。将处理后的更新存储在Amazon EC2上运行的SQL数据库中。
D.将分数更新推送到Amazon Simple Queue Service(Amazon SQS)队列。使用具有Auto Scaling的Amazon EC2实例队列在SQS队列中处理更新。
将已处理的更新存储在Amazon RDS Multi-AZ数据库实例中。

Explanation: Keywords to focus on would be highly available database - DynamoDB would be a better choice for leaderboard.
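
A minimal sketch of the producer side of answer A, assuming a hypothetical stream name; using the player ID as the partition key keeps each player's score updates in order within a shard, which a Lambda consumer can then write to DynamoDB:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream and payload.
kinesis.put_record(
    StreamName="game-score-updates",
    Data=json.dumps({"player_id": "p-123", "score": 4200}).encode("utf-8"),
    PartitionKey="p-123",  # same player always hashes to the same shard
)
```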

QUESTION 364
A company has a three-tier environment on AWS that ingests sensor data from its users' devices.
The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the
web tier, and finally to EC2 instances for the application tier that makes database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
A. Configure a TLS listener and add the Server Certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the Load Balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS
Key Management Service (AWS KMS)
Answer: A
一家公司在AWS上具有三层环境,该环境从其用户设备中提取传感器数据。
流量先流经网络负载平衡器(NLB),然后流至Web层的Amazon EC2实例,最后流至进行数据库调用的应用程序层的EC2实例。
解决方案架构师应采取什么措施来提高传输到Web层的数据的安全性?
A.配置TLS侦听器,并在NLB上添加服务器证书。
B.配置AWS Shield Advanced并在NLB上启用AWS WAF。
C.将负载均衡器更改为应用程序负载均衡器,并将AWS WAF附加到它。
D.使用AWS Key Management Service(AWS KMS)在EC2实例上加密Amazon Elastic Block Store(Amazon EBS)卷

Explanation: User - NLB- EC2 (Web) + DB

要使用TLS侦听器,必须在负载均衡器上至少部署一个服务器证书。负载平衡器使用服务器证书来终止前端连接,然后在将请求发送到目标之前解密来自客户端的请求。

Elastic Load Balancing使用称为安全策略的TLS协商设置来协商客户端与负载均衡器之间的TLS连接。安全策略是协议和密码学的组合。该协议在客户端和服务器之间建立安全连接,并保证在客户端和负载均衡器之间传递的所有数据的私密性。密码术是一种加密算法,它使用加密密钥来创建编码消息。该协议使用多个密码来加密Internet上的数据。在连接协商过程中,客户端和负载均衡器会按照优先顺序显示支持的密码和协议的列表。选择服务器列表中与客户端密码匹配的第一个密码以进行安全连接。
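
A hedged boto3 sketch of answer A: creating a TLS listener on the NLB with a server certificate so traffic to the web tier is encrypted in transit (all ARNs are hypothetical; the certificate would typically come from ACM):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sensor-nlb/50dc6c495c0c9188",
    Protocol="TLS",
    Port=443,
    Certificates=[{
        "CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
    }],
    SslPolicy="ELBSecurityPolicy-2016-08",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tier/73e2d6bc24d8a067",
    }],
)
```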

QUESTION 365
A company uses Application Load Balancers (ALBs) in different AWS Regions.
The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company's
networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to
enable connectivity.
Which solution is the MOST scalable with minimal configuration changes?
A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions.
Update the on-premises firewall's rule to allow the IP addresses of the ALBs.
B. Migrate all ALBs in different Regions to the Network Load Balancers (NLBs).
Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs.
C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator.
Update the on-premises firewall's rule to allow static IP addresses associated with the
accelerator.
D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the
ALBs in different Regions with the NLB.
Update the on-premises firewall's rule to allow the Elastic IP address attached to the NLB.
Answer: C
一家公司在不同的AWS区域中使用应用程序负载平衡器(ALB)。
ALB接收的流量不一致,全年可能会出现高峰和下降。公司的
网络团队需要允许本地防火墙中ALB的IP地址启用连接。
哪种解决方案最具可扩展性且所需的配置更改最少?
A.编写一个AWS Lambda脚本以获取不同区域中ALB的IP地址。
更新本地防火墙的规则以允许ALB的IP地址。
B.将不同区域中的所有ALB迁移到网络负载平衡器(NLB)。
更新本地防火墙的规则,以允许所有NLB的弹性IP地址。
C.启动AWS Global Accelerator,将不同区域中的ALB注册到加速器。
更新本地防火墙的规则,以允许与加速器关联的静态IP地址。
D.在一个区域中启动网络负载平衡器(NLB)向NLB注册不同区域中ALB的专用IP地址。
更新本地防火墙的规则,以允许将弹性IP地址附加到NLB。
QUESTION 366
A company receives inconsistent service from its data center provider because the company is
headquartered in an area affected by natural disasters.
The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on
AWS in case the on-premises data center fails.
The company runs web servers that connect to external vendors. The data available on AWS and
on premises must be uniform.
Which solution should a solutions architect recommend that has the LEAST amount of downtime?
A. Configure an Amazon Route 53 failover record.
Run application servers on Amazon EC2 instances behind an Application Load Balancer in an
Auto Scaling group.
Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
B. Configure an Amazon Route 53 failover record.
Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind
an Application Load Balancer.
Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
C. Configure an Amazon Route 53 failover record.
Set up an AWS Direct Connect connection between a VPC and the data center.
Run application servers on Amazon EC2 in an Auto Scaling group.
Run an AWS Lambda function to execute an AWS CloudFormation template to create an
Application Load Balancer.
D. Configure an Amazon Route 53 failover record.
Run an AWS Lambda function to execute an AWS CloudFormation template to launch two
Amazon EC2 instances.
Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
Set up an AWS Direct Connect connection between a VPC and the data center.
Answer: D
公司的数据中心提供商的服务不一致,因为该公司的总部位于遭受自然灾害影响的地区。
该公司尚未准备好完全迁移到AWS云,但希望在AWS上建立一个故障转移环境,以防本地数据中心发生故障。该公司运行与外部供应商连接的Web服务器。AWS与本地环境中可用的数据必须保持一致。
解决方案架构师应建议哪种解决方案停机时间最少?
A.配置Amazon Route 53故障转移记录。
在Auto Scaling组中Application Load Balancer后面的Amazon EC2实例上运行应用程序服务器。
设置具有存储卷的AWS Storage Gateway,以将数据备份到Amazon S3。
B.配置Amazon Route 53故障转移记录。
从脚本执行AWS CloudFormation模板,以在Application Load Balancer之后创建Amazon EC2实例。
设置带有存储卷的AWS Storage Gateway,以将数据备份到Amazon S3。
C.配置Amazon Route 53故障转移记录。
在VPC和数据中心之间建立AWS Direct Connect连接。
在Auto Scaling组中的Amazon EC2上运行应用程序服务器。
运行AWS Lambda函数以执行AWS CloudFormation模板以创建应用程序负载均衡器。
D.配置一个Amazon Route 53故障转移记录。
运行AWS Lambda函数以执行AWS CloudFormation模板以启动两个Amazon EC2实例。
设置具有存储卷的AWS Storage Gateway,以将数据备份到Amazon S3。
在VPC和数据中心之间建立AWS Direct Connect连接。
QUESTION 367
A company has two AWS accounts Production and Development.
There are code changes ready in the Development account to push to the Production account. In
the alpha phase, only two senior developers on the development team need access to the
Production account. In the beta phase, more developers might need access to perform testing as
well.
What should a solutions architect recommend?
A. Create two policy documents using the AWS Management Console in each account.
Assign the policy to developers who need access.
B. Create an IAM role in the Development account. Give one IAM role access to the Production
account.
Allow developers to assume the role.
C. Create an IAM role in the Production account with the trust policy that specifies the Development
account.
Allow developers to assume the role.
D. Create an IAM group in the Production account and add it as a principal in the trust policy that
specifies the Production account.
Add developers to the group.
Answer: C
一家公司有两个AWS账户生产和开发。
开发帐户中准备好代码更改以推送到生产帐户。 在Alpha阶段,开发团队中只有两名高级开发人员需要访问Production帐户。 
在测试阶段,更多的开发人员可能还需要访问权限才能执行测试。
解决方案架构师应该建议什么?
A.在每个帐户中使用AWS管理控制台创建两个策略文档。
将策略分配给需要访问权限的开发人员。
B.在开发帐户中创建IAM角色授予一个IAM角色对生产帐户的访问权限。
允许开发人员担任该角色。
C.使用指定开发帐户的信任策略在生产帐户中创建IAM角色。
允许开发人员担任该角色。
D.在生产帐户中创建一个IAM组,并将其添加为指定生产帐户的信任策略中的主体。
将开发人员添加到组中。
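
A minimal sketch of answer C, run with credentials in the Production account: the role's trust policy names the Development account, and developers then call sts:AssumeRole (the account IDs and role name are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")  # credentials for the Production account

# Hypothetical Development account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="CrossAccountDeployRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Developers in the Development account later assume the role, for example:
# boto3.client("sts").assume_role(
#     RoleArn="arn:aws:iam::<production-account-id>:role/CrossAccountDeployRole",
#     RoleSessionName="deploy",
# )
```

Widening access in the beta phase is then just a matter of allowing more developers to assume the same role, without creating new users in the Production account.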
QUESTION 368
A company has a custom application with embedded credentials that retrieves information from
an Amazon RDS MySQL DB instance. Management says the application must be made more
secure with the least amount of programming effort.
What should a solutions architect do to meet these requirements?
A. Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys.
Configure the application to load the database credentials from AWS KMS.
Enable automatic key rotation.
B. Create credentials on the RDS for MySQL database for the application user and store the
credentials in AWS Secrets Manager.
Configure the application to load the database credentials from Secrets Manager.
Create an AWS Lambda function that rotates the credentials in Secret Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the
credentials in AWS Secrets Manager.
Configure the application to load the database credentials from Secrets Manager.
Set up a credentials rotation schedule for the application user in the RDS for MySQL database
using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the
credentials in AWS Systems Manager Parameter Store.
Configure the application to load the database credentials from Parameter Store.
Set up a credentials rotation schedule for the application user in the RDS for MySQL database
using Parameter Store.
Answer: B
一家公司拥有一个带有嵌入式凭证的自定义应用程序,该应用程序从Amazon RDS MySQL数据库实例中检索信息。
管理层表示,必须以最少的编程工作量使该应用程序更加安全。
解决方案架构师应该怎么做才能满足这些要求?
A.使用AWS Key Management Service(AWS KMS)客户主密钥(CMK)创建密钥。配置应用程序以从AWS KMS加载数据库凭证。启用自动密钥轮换。
B.在RDS for MySQL数据库上为应用程序用户创建凭证,并将凭证存储在AWS Secrets Manager中。
配置应用程序以从Secrets Manager加载数据库凭证。创建一个AWS Lambda函数以旋转Secret Manager中的凭证。
C.在RDS for MySQL数据库上为应用程序用户创建凭证,并将凭证存储在AWS Secrets Manager中。
配置应用程序以从Secrets Manager加载数据库凭据。在RDS for MySQL数据库中为应用程序用户设置凭据轮换时间表
使用Secrets Manager。
D.在RDS for MySQL数据库上为应用程序用户创建凭证,并将凭证存储在AWS Systems Manager Parameter Store中。
将应用程序配置为从Parameter Store加载数据库凭证。使用Parameter Store为RDS for MySQL数据库中的应用程序用户设置凭证轮换计划。
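
A hedged sketch of the Secrets Manager pattern: the application fetches the credentials at runtime instead of embedding them, and rotation is scheduled against a rotation Lambda function (the secret name and Lambda ARN are hypothetical):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Application side: read credentials at runtime instead of hard-coding them.
secret = secrets.get_secret_value(SecretId="prod/inventory/mysql")
creds = json.loads(secret["SecretString"])
db_user, db_password = creds["username"], creds["password"]

# Operations side: schedule rotation through a rotation Lambda function.
secrets.rotate_secret(
    SecretId="prod/inventory/mysql",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerMySQLRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```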
QUESTION 369
A web application must persist order data to Amazon S3 to support near-real-time processing.
A solutions architect needs to create an architecture that is both scalable and fault tolerant.
Which solutions meet these requirements? (Select TWO.)
A. Write the order event to an Amazon DynamoDB table.
Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes
the data to Amazon S3.
B. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue.
Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to
Amazon S3.
C. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic.
Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data
to Amazon S3.
D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue.
Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda
function that parses the payload and writes the data to Amazon S3.
E. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic.
Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda
function that parses the payload and writes the data to Amazon S3,
Answer: AD
Web应用程序必须将订单数据持久保存到Amazon S3,以支持近实时处理。
解决方案架构师需要创建可扩展且容错的架构。
哪些解决方案满足这些要求? (选择两个。)
A.将订单事件写入Amazon DynamoDB表。
使用DynamoDB Streams触发AWS Lambda函数,该函数解析有效负载并将数据写入Amazon S3。
B.将订单事件写入Amazon Simple Queue Service(Amazon SQS)队列。
使用队列来触发AWS Lambda函数,该函数解析有效负载并将数据写入Amazon S3。
C.将订单事件写入Amazon Simple Notification Service(Amazon SNS)主题。
使用SNS主题触发AWS Lambda函数,该函数解析有效负载并将数据写入Amazon S3。
D.将订单事件写入Amazon Simple Queue Service(Amazon SQS)队列。
使用Amazon EventBridge(Amazon CloudWatch Events)规则来触发AWS Lambda函数,该函数解析有效负载并将数据写入Amazon S3。
E.将订单事件写入Amazon Simple Notification Service(Amazon SNS)主题。
使用Amazon EventBridge(Amazon CloudWatch Events)规则触发AWS Lambda函数,该函数解析有效负载并将数据写入Amazon S3,
QUESTION 370
A company has an application workflow that uses an AWS Lambda function to download and
decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service
Customer Master Keys (AWS KMS CMKs).
A solutions architect needs to design a solution that will ensure the required permissions are set
correctly.
Which combination of actions accomplish this? (Select TWO.)
A. Attach the kms:decrypt permission to the Lambda function's resource policy.
B. Grant the decrypt permission for the Lambda IAM role in the KMS key's policy.
C. Grant the decrypt permission for the Lambda resource policy in the KMS key's policy.
D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda
function.
E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the
Lambda function.
Answer: BE
一家公司拥有一个应用程序工作流程,该工作流程使用AWS Lambda函数从Amazon S3下载和解密文件。 
这些文件使用AWS Key Management Service客户主密钥(AWS KMS CMK)进行加密。
解决方案架构师需要设计一种解决方案,以确保正确设置所需的权限。
哪些动作组合可以达到目的? (选择两个。)
A.将kms:decrypt权限附加到Lambda函数的资源策略。
B.在KMS密钥的策略中为Lambda IAM角色授予解密权限。
C.在KMS密钥的策略中授予对Lambda资源策略的解密权限。
D.使用kms:decrypt权限创建一个新的IAM策略,并将该策略附加到Lambda函数。
E.使用kms:decrypt权限创建一个新的IAM角色,并将该执行角色附加到Lambda函数。
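
A minimal sketch of the IAM side of answers B and E: an inline policy granting kms:Decrypt is attached to the Lambda execution role (the role name and key ARN are hypothetical); the KMS key's own policy must also allow this role to use the key:

```python
import json
import boto3

iam = boto3.client("iam")

decrypt_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        # Hypothetical CMK ARN used to encrypt the S3 objects.
        "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    }],
}

iam.put_role_policy(
    RoleName="lambda-file-decrypt-role",  # hypothetical Lambda execution role
    PolicyName="AllowKmsDecrypt",
    PolicyDocument=json.dumps(decrypt_policy),
)
```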
QUESTION 371
A company is building a document storage application on AWS. The application runs on Amazon
EC2 instances in multiple Availability Zones. The company requires the document store to be
highly available. The documents need to be returned immediately when requested.
The lead engineer has configured the application to use Amazon Elastic Block Store (Amazon
EBS) to store the documents, but is willing to consider other options to meet the availability requirement.
What should a solutions architect recommend?
A. Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional
Availability Zones.
B. Use Amazon EBS for the EC2 instance root volumes.
Configure the application to build the document store on Amazon S3.
C. Use Amazon EBS for the EC2 instance root volumes.
Configure the application to build the document store on Amazon S3 Glacier,
D. Use at least three Provisioned IOPS EBS volumes for EC2 instances.
Mount the volumes to the EC2 instances in a RAID 5 configuration.
Answer: B
一家公司正在AWS上构建文档存储应用程序。该应用程序在多个可用区中的Amazon EC2实例上运行。公司要求文档存储具有
高可用性。请求文档时需要立即返回文档。
首席工程师已将应用程序配置为使用Amazon Elastic Block Store(Amazon EBS)来存储文档,但愿意考虑其他选项来满足可用性要求。
解决方案架构师应该建议什么?
A.定期快照EBS卷,并在其他可用区中使用这些快照构建新卷。
B.将Amazon EBS用于EC2实例根卷。
配置应用程序以在Amazon S3上构建文档存储。
C.将Amazon EBS用于EC2实例根卷。
配置应用程序以在Amazon S3 Glacier上构建文档存储,
D.对于EC2实例,至少使用三个Provisioned IOPS EBS卷。
将卷安装到RAID 5配置中的EC2实例。
QUESTION 372
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data
sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2
instance is rebooted, the data in-flight is lost.
The company's data science team wants to query ingested data near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams.
Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination.
Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store.
Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination.
Use Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume.
Publish data to Amazon ElastiCache for Redis.
Subscribe to the Redis channel to query the data.
Answer: C
一家公司正在使用Amazon EC2实例团队从本地数据源提取数据。 数据采用JSON格式,接收速率可以高达1 MB / s。 重新引导EC2实例时,运行中的数据将丢失。
该公司的数据科学团队希望近乎实时地查询摄取的数据。
哪种解决方案可提供可伸缩且数据丢失最少的近实时数据查询?
A.将数据发布到Amazon Kinesis Data Streams。
使用Kinesis Data Analytics查询数据。
B.将数据发布到以Amazon Redshift为目的地的Amazon Kinesis Data Firehose。
使用Amazon Redshift查询数据。
C.将提取的数据存储在EC2实例存储中。
以Amazon S3为目的地将数据发布到Amazon Kinesis Data Firehose。
使用Amazon Athena查询数据。
D.将提取的数据存储在Amazon Elastic Block Store(Amazon EBS)卷中。
将数据发布到Amazon ElastiCache for Redis。订阅Redis通道以查询数据。

Amazon Athena 是一种交互式查询服务,让您能够使用标准 SQL 直接在 Amazon Simple Storage Service (Amazon S3) 中轻松分析数据。只需在 AWS 管理控制台中执行几项操作,即可将 Athena 指向 Amazon S3 中存储的数据,并开始使用标准 SQL 运行临时查询,然后在几秒钟内获得结果。

Athena 是无服务器,因此没有要设置或管理的基础设施,并且您只为运行的查询付费。 Athena 自动缩放—并行运行查询—因此,即使具有大型数据集和复杂查询,结果也非常快速。
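
Once Kinesis Data Firehose has delivered the JSON records to S3, querying them is a single Athena call. A hedged sketch (the database, table, and results bucket are hypothetical):

```python
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT device_id, COUNT(*) AS events FROM ingested_json GROUP BY device_id",
    QueryExecutionContext={"Database": "telemetry"},        # hypothetical Glue/Athena database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```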

QUESTION 373

A company has a website deployed on AWS. The database backend is hosted on Amazon RDS
for MySQL with a primary instance and five read replicas to support scaling needs. The read
replicas should lag no more than 1 second behind the primary instance to support the user
experience.
As traffic on the website continues to increase, the replicas are falling further behind during
periods of peak load, resulting in complaints from users when searches yield inconsistent results.
A solutions architect needs to reduce the replication lag as much as possible, with minimal
changes to the application code or operational requirements.
Which solution meets these requirements?
A. Migrate the database to Amazon Aurora MySQL.
Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database.
Modify the website to check the cache before querying the database read endpoints.
C. Migrate the database from Amazon RDS to MySQL running on Amazon EC2 compute instances.
Choose very large compute optimized instances for all replica nodes.
D. Migrate the database to Amazon DynamoDB.
Initially provision a large number of read capacity units (RCUs) to support the required throughput
with on-demand capacity.
Answer: A
一家公司在AWS上部署了一个网站。数据库后端托管在Amazon RDS for MySQL上,具有一个主实例和五个只读副本以支持扩展需求。只读副本应比主实例落后不超过1秒,以支持用户体验。
随着网站流量的持续增长,在高峰负载期间只读副本的延迟进一步加大,搜索结果不一致时会引起用户的抱怨。
解决方案架构师需要在对应用程序代码或操作要求进行最小更改的情况下,尽可能减少复制滞后。
哪种解决方案满足这些要求?
A.将数据库迁移到Amazon Aurora MySQL。
将MySQL只读副本替换为Aurora副本,然后启用Aurora Auto Scaling。
B.在数据库前面部署一个Amazon ElastiCache for Redis集群。
修改网站以在查询数据库读取终结点之前检查缓存。
C.将数据库从Amazon RDS迁移到在Amazon EC2计算实例上运行的MySQL。
为所有副本节点选择非常大型的计算优化实例。
D.将数据库迁移到Amazon DynamoDB。
最初提供大量的读取容量单位(RCU),以按需容量支持所需的吞吐量。

QUESTION 374

A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An
administrator has created the following IAM policy to provide access to the bucket and applied
that policy to the group. The group is not able to delete objects in the bucket.
The company follows least-privilege access rules.
一个组需要列出某个Amazon S3存储桶并从该存储桶中删除对象的权限。管理员已创建以下IAM策略,
以提供对该存储桶的访问权限,并将该策略应用于该组。该组无法删除存储桶中的对象。该公司遵循最小权限访问规则。
"Version": *2012-10-17* ,
"statement" :[
"Action":I
"s3:ListBacket" ,
"s3: Deleteobject .
"Resource":I
aws:s3: : : bucket -hame .
,,
"Effect": "Allow"

Which statement should a solutions architect add to the policy to correct bucket access?
A.
"Action": [
    "s3:*Object"
],
"Resource": [
    "arn:aws:s3:::bucket-name/*"
],
"Effect": "Allow"

B.
"Action": [
    "s3:*"
],
"Resource": [
    "arn:aws:s3:::bucket-name/*"
],
"Effect": "Allow"

C.
"Action": [
    "s3:DeleteObject"
],
"Resource": [
    "arn:aws:s3:::bucket-name*"
],
"Effect": "Allow"

D.
"Action": [
    "s3:deleteobject"
],
"Resource": [
    "arn:aws:s3:::bucket-name*"
],
"Effect": "Allow"

Answer: B
QUESTION 375
A company has an API-based inventory reporting application running on Amazon EC2 instances.
The application stores information in an Amazon DynamoDB table. The company's distribution
centers have an on-premises shipping application that calls an API to update the inventory before
printing shipping labels.
The company has been experiencing application interruptions several times each day, resulting in
lost transactions.
What should a solutions architect recommend to improve application resiliency?
A. Modify the shipping application to write to a local database.
B. Modify the application APIs to run serverless using AWS Lambda.
C. Configure Amazon API Gateway to call the EC2 inventory application APIs.
D. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon
SQS).
Answer: D
公司拥有在Amazon EC2实例上运行的基于API的库存报告应用程序。
该应用程序将信息存储在Amazon DynamoDB表中。该公司的配送中心有一个本地运输应用程序,
该应用程序在打印运输标签之前调用API来更新库存。
该公司每天多次遇到应用程序中断的情况,从而导致交易丢失。
解决方案架构师应建议什么以提高应用程序的弹性?
A.修改运输应用程序以写入本地数据库。
B.修改应用程序API以使用AWS Lambda无服务器运行。
C.配置Amazon API Gateway以调用EC2库存应用程序API。
D.修改应用程序以使用Amazon Simple Queue Service(Amazon SQS)发送库存更新。
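
A minimal sketch of answer D: the shipping application enqueues inventory updates instead of calling the API directly, so updates are not lost when the application is interrupted (the queue URL and message fields are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inventory-updates"  # hypothetical

# Producer (shipping application): buffer the update in SQS.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"sku": "ABC-123", "delta": -1}),
)

# Consumer (inventory application or a Lambda function): apply updates to
# DynamoDB at its own pace, retrying after any outage.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for message in resp.get("Messages", []):
    update = json.loads(message["Body"])
    # ... write the update to the DynamoDB table ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```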
QUESTION 376
A user has underutilized on-premises resources.
Which AWS Cloud concept can BEST address this issue?
A. High Availability
B. Elasticity
C. Security
D. Loose Coupling
Answer: B
用户未充分利用本地资源。 哪种AWS Cloud概念可以最好地解决这个问题? 
A.高可用性 B,弹性 C.安全 D.松耦合

QUESTION 377

A company has an automobile sales website that stores its listings in a database on Amazon
RDS. When an automobile is sold, the listing needs to be removed from the website and the data
must be sent to multiple target systems.
Which design should a solutions architect recommend?
A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to
send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to
consume.

B.Create an AWS Lambda function triggered when the database on Amazon RDS is updated to
send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the
targets to consume.
C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon
SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use
AWS Lambda functions to update the targets.
D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon
SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues Use
AWS Lambda functions to update the targets.
Answer: B
公司拥有一个汽车销售网站,该网站将其列表存储在Amazon RDS上的数据库中。销售汽车时,需要从网站上删除列表,并且必须将数据发送到多个目标系统。
解决方案架构师应建议哪种设计?
A.创建一个AWS Lambda函数,该函数在更新Amazon RDS上的数据库以将信息发送到Amazon Simple Queue Service(Amazon SQS)队列以供目标使用时触发。
B.创建在Amazon RDS上的数据库更新以将信息发送到Amazon Simple Queue Service(Amazon SQS)FIFO队列以供目标使用时触发的AWS Lambda函数。
C.订阅RDS事件通知,然后将扇形展开的Amazon Simple Queue Service(Amazon SQS)队列发送到多个Amazon Simple Notification Service(Amazon SNS)主题。使用AWS Lambda函数更新目标。
D.订阅RDS事件通知,然后将扇出的Amazon Simple Notification Service(Amazon SNS)主题发送到多个Amazon Simple Queue Service(Amazon SQS)队列。使用AWS Lambda函数更新目标。

QUESTION 378

An application is running on an Amazon EC2 instance and must have millisecond latency when
running the workload. The application makes many small reads and writes to the file system, but
the file system itself is small.
Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect
attach to their EC2 instance?
A. Cold HDD (sc1)
B. General Purpose SSD (gp2)
C. Provisioned IOPS SSD (io1)
D. Throughput Optimized HDD (st1)
Answer: B
应用程序正在Amazon EC2实例上运行,并且在运行工作负载时必须具有毫秒级的延迟。 该应用程序对文件系统进行许多小的读写操作,但是文件系统本身很小。
解决方案架构师应将哪种Amazon Elastic Block Store(Amazon EBS)卷类型附加到其EC2实例?
A.冷硬盘(sc1)
B.通用SSD(gp2)
C.预配的IOPS SSD(io1)
D.吞吐量优化的硬盘(st1)
  • 通用型SSD – GP2 (高达10,000 IOPS),适用于启动盘,低延迟的应用程序等
  • 预配置型SSD – IO1 (超过10,000 IOPS),适用于IO密集型的数据库
  • 吞吐量优化型HDD -ST1,适用于数据仓库,日志处理
  • HDD Cold – SC1 – 适合较少使用的冷数据
  • HDD, Magnetic

QUESTION 379

A company runs a static website through its on-premises data center. The company has multiple
servers that handle all of its traffic, but on busy days, services are interrupted and the website
becomes unavailable. The company wants to expand its presence globally and plans to triple its
website traffic
What should a solutions architect recommend to meet these requirements?
A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.
B. Migrate the website content to Amazon EC2 instances with public Elastic IP addresses in multiple
AWS Regions.
C. Migrate the website content to Amazon EC2 instances and vertically scale as the load increases.
D. Use Amazon Route 53 to distribute the loads across multiple Amazon CloudFront distributions for
each AWS Region that exists globally.
Answer: A
公司通过其本地数据中心运行一个静态网站。 该公司拥有多台服务器来处理所有流量,但是在繁忙的日子里,
服务会中断并且网站将不可用。 该公司希望在全球扩展业务,并计划将其网站访问量增加两倍
解决方案架构师应建议哪些以满足这些要求?
A.将网站内容迁移到Amazon S3并将网站托管在Amazon CloudFront上。
B.将网站内容迁移到多个AWS区域中具有公共弹性IP地址的Amazon EC2实例。
C.将网站内容迁移到Amazon EC2实例,并随着负载的增加垂直扩展。
D.使用Amazon Route 53在全球每个AWS区域的多个Amazon CloudFront分配之间分配负载。

QUESTION 380

A company has a media catalog with metadata for each item in the catalog. Different types of
metadata are extracted from the media items by an application running on AWS Lambda.
Metadata is extracted according to a number of rules, with the output stored in an Amazon
ElastiCache for Redis cluster. The extraction process is done in batches and takes around 40
minutes to complete. The update process is triggered manually whenever the metadata extraction
rules change.
The company wants to reduce the amount of time it takes to extract metadata from its media
catalog. To achieve this, a solutions architect has split the single metadata extraction Lambda
function into a Lambda function for each type of metadata.
Which additional steps should the solutions architect take to meet the requirements?
A. Create an AWS Step Functions workflow to run the Lambda functions in parallel.
Create another Step Functions workflow that retrieves a list of media items and executes a
metadata extraction workflow for each one.
B. Create an AWS Batch compute environment for each Lambda function.
Configure an AWS Batch job queue for the compute environment.
Create a Lambda function to retrieve a list of media items and write each item to the job queue.
C. Create an AWS Step Functions workflow to run the Lambda functions in parallel.
Create a Lambda
function to retrieve a list of media items and write each item to an Amazon SQS queue.
Configure the SQS queue as an input to the Step Functions workflow,
D. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS
queue.
Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
Answer: C
公司有一个媒体目录,其中包含目录中每个项目的元数据。 AWS Lambda上运行的应用程序从媒体项目中提取不同类型的元数据。
根据许多规则提取元数据,并将输出存储在Amazon ElastiCache for Redis集群中。提取过程是分批完成的,大约需要40分钟才能完成。每当元数据提取规则更改时,手动触发更新过程。
该公司希望减少从媒体目录中提取元数据所花费的时间。为此,解决方案架构师已将单个元数据提取Lambda函数
拆分为针对每种元数据类型的单独Lambda函数。
解决方案架构师应采取哪些其他步骤来满足要求?
A.创建一个AWS Step Functions工作流以并行运行Lambda函数。
创建另一个Step Functions工作流,以检索媒体项目列表,并为每个媒体项目执行
元数据提取工作流。
B.为每个Lambda函数创建一个AWS Batch计算环境。为该计算环境配置一个AWS Batch作业队列。
创建Lambda函数以检索媒体项目列表并将每个项目写入作业队列。
C.创建一个AWS Step Functions工作流以并行运行Lambda函数。创建Lambda函数以检索媒体项目列表并将每个项目写入Amazon SQS队列。
将SQS队列配置为“步骤功能”工作流的输入,
D.创建一个Lambda函数以检索媒体项目列表,并将每个项目写入Amazon SQS队列。
将元数据提取Lambda函数订阅到具有大批处理大小的SQS队列。

QUESTION 381

A company is deploying a public-facing global application on AWS using Amazon CloudFront.
The application communicates with an external system. A solutions architect needs to ensure the
data is secured during end-to-end transit and at rest.
Which combination of steps will satisfy these requirements? (Select TWO)

A. Create a public certificate for the required domain in AWS Certificate Manager and deploy it to
CloudFront, an Application Load Balancer, and Amazon EC2 instances.
B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application
Load Balancer, and Amazon EC2 instances.
C. Provision Amazon EBS encrypted volumes using AWS KMS and ensure explicit encryption of data
when writing to Amazon EBS,
D. Use SSL or encrypt data while communicating with the external system using a VPN.
E. Communicate with the external system using plaintext and use the VPN to encrypt the data in
transit.
Answer: CD
一家公司正在使用Amazon CloudFront在AWS上部署一个面向公众的全球应用程序。
该应用程序与外部系统通信。解决方案架构师需要确保数据在端到端传输和静态存储时均受到保护。哪些步骤组合才能满足这些要求?(选择两个)
A.在AWS Certificate Manager中为所需域创建一个公共证书,并将其部署到CloudFront,Application Load Balancer和Amazon EC2实例。
B.从第三方供应商那里获取公共证书,然后将其部署到CloudFront,应用程序负载平衡器和Amazon EC2实例。
C.使用AWS KMS设置Amazon EBS加密卷,并确保在写入Amazon EBS时对数据进行显式加密,
D.在使用VPN与外部系统通信时,使用SSL或加密数据。
E.使用纯文本与外部系统通信,并使用VPN加密传输中的数据。

QUESTION 382

A company's lease of a co-located storage facility will expire in 90 days. The company wants to
move to AWS to avoid signing a contract extension. The company's environment consists of 200
virtual machines and a NAS with 40 TB of data. Most of the data is archival, yet instant access is
required when data is requested,
Leadership wants to ensure minimal downtime during the migration. Each virtual machine has a
number of customized configurations. The company's existing 1 Gbps network connection is
mostly idle, especially after business hours.
Which combination of steps should the company take to migrate to AWS while minimizing
downtime and operational impact? (Select TWO.)
A. Use new Amazon EC2 instances and reinstall all application code,
B. Use AWS SMS to migrate the virtual machines.
C. Use AWS Storage Gateway to migrate the data to cloud-native storage.
D. Use AWS Snowball to migrate the data.
E. Use AWS SMS to copy the infrequently accessed data from the NAS.
Answer: BC
公司在同一地点的存储设施的租约将在90天内到期。 该公司希望迁移到AWS以避免签署合同延期。 
该公司的环境由200个虚拟机和一个具有40 TB数据的NAS组成。 大多数数据都是归档文件,但是在请求数据时需要即时访问,
领导层希望确保迁移期间的停机时间最少。 每个虚拟机都有许多自定义配置。 该公司现有的1 Gbps网络连接大部分处于空闲状态,尤其是在下班时间之后。
公司应在迁移到AWS的同时将停机时间和运营影响降至最低,应采取哪些步骤组合? (选择两个。)
A.使用新的Amazon EC2实例并重新安装所有应用程序代码,
B.使用AWS SMS迁移虚拟机。
C.使用AWS Storage Gateway将数据迁移到云原生存储。
D.使用AWS Snowball迁移数据。
E.使用AWS SMS从NAS复制不常访问的数据。

AWS Server Migration Service可自动将本地VMware vSphere、Microsoft Hyper-V/SCVMM和Azure虚拟机迁移到AWS云。AWS SMS以增量方式将服务器虚拟机复制为云托管的Amazon系统映像(AMI),便于您使用Amazon EC2轻松测试、更新并将基于云的映像部署到生产环境。

要使用以下方法管理AWS SMS服务器迁移:

  • **简化了向云的迁移过程。**只需在AWS管理控制台中单击几下,即可开始迁移一组服务器。迁移开始后,AWS SMS会管理迁移过程的所有复杂性,包括将实时服务器卷增量复制到AWS并定期创建新的AMI。您可以从控制台基于AMI快速启动EC2实例。
  • **多服务器迁移的编排。**您可以通过控制台计划首次复制、设置复制间隔,并跟踪每个服务器的进度。在启动已迁移的应用程序时,还可以应用在启动时运行的自定义配置脚本。
  • **服务器迁移的增量测试。**增量复制支持对AWS SMS迁移的服务器进行快速且可扩展的测试。由于AWS SMS只复制本地服务器的增量更改并仅将增量传输到云,您可以通过迭代测试小的更改来节省网络带宽。
  • **支持使用最广泛的操作系统。**AWS SMS支持几种主要的Linux发行版,以及操作系统映像(包括Windows)的复制。
  • **最大限度地减少停机时间。**增量AWS SMS复制在最终转换期间最大程度地减少了与应用程序停机相关的业务影响。

使用AWS SMS受到以下限制:

  • 除非客户要求增加限额,否则每个帐户可以同时迁移50个VM。
  • 从首次复制VM开始,每个VM(不是每个帐户)90天的服务使用期。除非您请求增加限制,否则连续复制将在90天后结束。
  • 您每个帐户可以同时迁移50个应用程序。每个应用程序仅限于50个服务器和10个组。

QUESTION 383

A company is planning a large event where a promotional offer will be introduced. The company's
website is hosted on AWS and backed by an Amazon RDS for PostgreSQL DB instance. The
website explains the promotion and includes a sign-up page that collects user information and
preferences. Management expects large and unpredictable volumes of traffic periodically, which
will create many database writes.
A solutions architect needs to build a solution that does not change the underlying data model
and ensures that submissions are not dropped before they are committed to the database.
Which solutions meets these requirements?
A. Immediately before the event, scale up the existing DB instance to meet the anticipated demand.
Then scale down after the event.
B. Use Amazon SQS to decouple the application and database layers.
Configure an AWS Lambda function to write items from the queue into the database.
C. Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
D. Use Amazon ElastiCache for Memcached to increase write capacity to the DB instance.
Answer: B C?
一家公司正在计划举办一项大型活动,届时将推出促销优惠。该公司的网站托管在AWS上,
并由适用于PostgreSQL数据库实例的Amazon RDS支持。该网站解释了该促销活动,
并包括一个收集用户信息和首选项的注册页面。管理层预计会周期性地出现大量且不可预测的流量,
这将产生大量数据库写入。
解决方案架构师需要构建一个不更改基础数据模型的解决方案,并确保提交的数据在写入数据库之前不会丢失。
哪些解决方案符合这些要求?
A.在活动开始之前,立即扩展现有数据库实例以满足预期的需求。
活动结束后再按比例缩小。
B.使用Amazon SQS分离应用程序和数据库层。
配置一个AWS Lambda函数,以将队列中的项目写入数据库。
C.迁移到Amazon DynamoDB并通过自动扩展管理吞吐量。
D.将Amazon ElastiCache用于Memcached,以增加对数据库实例的写入容量。

QUESTION 384

A solutions architect is designing a publicly accessible web application that is on an Amazon
CloudFront distribution with an Amazon S3 website endpoint as the origin.
When the solution is deployed, the website returns an Error 403: Access Denied message.
Which steps should the solutions architect take to correct the issue? (Select TWO.)
A. Remove the S3 block public access option from the S3 bucket.
B. Remove the requester pays option from the S3 bucket.
C. Remove the origin access identity (OAI) from the CloudFront distribution.
D. Change the storage class from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-lA).
E. Disable S3 object versioning
Answer: AB
解决方案架构师正在设计一个可公开访问的Web应用程序,该应用程序位于
以Amazon S3网站终端节点为源的Amazon CloudFront分发之上。
部署解决方案后,网站将返回错误403:访问被拒绝消息。
解决方案架构师应采取哪些步骤纠正问题? (选择两个。)
A.从S3存储桶中删除S3阻止公共访问选项。
B.从S3存储桶中删除请求者付款选项。
C.从CloudFront分发中删除原始访问身份(OAI)。
D.将存储类别从S3标准更改为S3一区不频繁访问(S3一区-IA)。
E.禁用S3对象版本控制

QUESTION 385

A company is running a media store across multiple Amazon EC2 instances distributed across
multiple Availability Zones in a single VPC.
The company wants a high-performing solution to share data between all the EC2 instances, and
prefers to keep the data within the VPC only,
What should a solutions architect recommend?
A. Create an Amazon S3 bucket and call the service APls from each instance's application.
B. Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
C. Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all
instances.
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all
instances.
Answer: D
一家公司正在跨多个Amazon EC2实例运行媒体存储,这些实例分布在单个VPC中的多个可用区中。
该公司希望获得一种高性能的解决方案,以便在所有EC2实例之间共享数据,并且更倾向于仅将数据保留在VPC内,
解决方案架构师应该建议什么?
A.创建一个Amazon S3存储桶,然后从每个实例的应用程序调用服务AP1。
B.创建一个Amazon S3存储桶并配置所有实例以将其作为已安装卷访问。
C.配置一个Amazon Elastic Block Store(Amazon EBS)卷并将其安装在所有实例上。
D.配置一个Amazon Elastic File System(Amazon EFS)文件系统,并将其安装在所有实例上。

QUESTION 386

A company has a 143 TB MySQL database that it wants to migrate to AWS. The plan is to use
Amazon Aurora MySQL as the platform going forward. The company has a 100 Mbps AWS
Direct Connect connection to Amazon VPC.
Which solution meets the company's needs and takes the LEAST amount of time?
A. Use a gateway endpoint for Amazon S3.
Migrate the data to Amazon S3. Import the data into Aurora.
B. Upgrade the Direct Connect link to 500 Mbps.
Copy the data to Amazon S3. Import the data into Aurora.
C. Order an AWS Snowmobile and copy the database backup to it.
Have AWS import the data into Amazon S3. Import the backup into Aurora.
D. Order four 50-TB AWS Snowball devices and copy the database backup onto them.
Have AWS import the data into Amazon S3. Import the data into Aurora.

Answer: D
一家公司拥有一个143 TB MySQL数据库,希望将其迁移到AWS。 计划将使用Amazon Aurora MySQL作为平台。 该公司具有到Amazon VPC的100 Mbps AWS Direct Connect连接。
哪种解决方案可以满足公司的需求并花费最少的时间?
A.对Amazon S3使用网关终端节点。
将数据迁移到Amazon S3。 将数据导入Aurora。
B.将直接连接链接升级到500 Mbps。
将数据复制到Amazon S3将数据导入Aurora。
C.订购一台AWS Snowmobile并将数据库备份复制到其中。
让AWS将数据导入Amazon S3。 将备份导入Aurora。
D.订购四台50 TB的AWS Snowball设备并将数据库备份复制到它们上。AWS将数据导入到Amazon S3中。 将数据导入Aurora。

QUESTION 387

A media company has an application that tracks user clicks on its websites and performs
analytics to provide near-real time recommendations. The application has a fleet of Amazon EC2
instances that receive data from the websites and send the data to an Amazon RDS DB instance.
Another fleet of EC2 instances hosts the portion of the application that is continuously checking
changes in the database and executing SQL queries to provide recommendations. Management
has requested a redesign to decouple the infrastructure.
The solution must ensure that data analysts are writing SQL to analyze the data only. No data
can be lost during the deployment,
What should a solutions architect recommend?
A. Use Amazon Kinesis Data Streams to capture the data from the websites, Kinesis Data Firehose
to persist the data on Amazon S3, and Amazon Athena to query the data.
B. Use Amazon Kinesis Data Streams to capture the data from the websites, Kinesis Data Analytics
to query the data, and Kinesis Data Firehose to persist the data on Amazon S3.
C. Use Amazon Simple Queue Service (Amazon SQS) to capture the data from the websites, keep
the fleet of EC2 instances, and change to a bigger instance type in the Auto Scaling group
configuration.
D. Use Amazon Simple Notification Service (Amazon SNS) to receive data from the websites and
proxy the messages to AWS Lambda functions that execute the queries and persist the data.
Change Amazon RDS to Amazon Aurora Serverless to persist the data.
Answer: B
一家媒体公司拥有一个应用程序,可跟踪用户在其网站上的点击并执行分析以提供近乎实时的建议。该应用程序具有大量Amazon EC2
实例从网站接收数据并将数据发送到Amazon RDS数据库实例。
另一批EC2实例托管了应用程序中不断检查数据库中的更改并执行SQL查询以提供建议的部分。管理层要求进行重新设计以分离基础架构。
该解决方案必须确保数据分析师正在编写SQL来仅分析数据。部署期间不会丢失任何数据,
解决方案架构师应该建议什么?
A.使用Amazon Kinesis Data Streams捕获来自网站的数据,使用Kinesis Data Firehose将数据持久存储在Amazon S3上,使用Amazon Athena来查询数据。
B.使用Amazon Kinesis Data Streams从网站捕获数据,使用Kinesis Data Analytics查询数据,并使用Kinesis Data Firehose将数据持久保存在Amazon S3上。
C.使用Amazon Simple Queue Service(Amazon SQS)从网站捕获数据,保留EC2实例数量,并在Auto Scaling组配置中更改为更大的实例类型。
D.使用Amazon Simple Notification Service(Amazon SNS)从网站接收数据,并将消息代理到执行查询并保留数据的AWS Lambda函数。将Amazon RDS更改为Amazon Aurora Serverless以保留数据。

Kinesis Data Analytics

使用Kinesis Data Analytics,我们可以使用标准的SQL语句来处理和分析我们的数据流。这个服务可以让我们使用强大的SQL代码来做实时的数据流分析、创建实时的参数

QUESTION 388
A company has two VPCs named Management and Production. The Management VPC uses
VPNs through a customer gateway to connect to a single device in the data center. The
Production VPC uses a virtual private gateway with two attached AWS Direct Connect
connections. The Management and Production VPCs both use a single VPC peering connection
to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
A. Add a set of VPNs between the Management and Production VPCs.
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC,
Answer: C
一家公司有两个名为“管理”和“生产”的VPC。 管理VPC通过客户网关使用VPN连接到数据中心中的单个设备。 
生产VPC使用具有两个附加的AWS Direct Connect连接的虚拟专用网关。 
管理VPC和生产VPC都使用单个VPC对等连接,以允许应用程序之间进行通信。
解决方案架构师应采取什么措施来减轻该体系结构中的任何单点故障?
A.在管理和生产VPC之间添加一组VPN。
B.添加第二个虚拟专用网关并将其附加到管理VPC。
C.从第二个客户网关设备向管理VPC添加第二组VPN。
D.在管理VPC和生产VPC之间添加第二个VPC对等连接,
QUESTION 389
A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic
Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of
an ECS cluster. The output and state data for all tasks needs to be stored.
The amount of data output by each task is approximately 10 MB, and there could be hundreds of
tasks running at a time. The system should be optimized for high-frequency reading and writing.
As old outputs are archived and deleted, the storage size is not expected to exceed 1TB.
Which storage solution should the solutions architect recommend?
A. An Amazon DynamoDB table accessible by all ECS cluster instances.
B. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
C. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode.
D. An Amazon Elastic File System (Amazon EFS) volume mounted to the ECS cluster instances.
Answer: C
解决方案架构师正在设计一种解决方案,其中涉及编排一系列Amazon Elastic Container Service(Amazon ECS)任务类型,这些任务运行在ECS集群中的Amazon EC2实例上。
所有任务的输出和状态数据都需要存储。每个任务输出的数据量约为10MB,一次可能有数百个任务在运行。该系统应针对高频读写进行优化。
由于旧的输出已存档和删除,因此存储大小预计不会超过1TB。
解决方案架构师应建议哪种存储解决方案?
A.所有ECS集群实例均可访问的Amazon DynamoDB表。
B.具有预配置吞吐量模式的Amazon Elastic File System(Amazon EFS)。
C.具有突发吞吐量模式的Amazon Elastic File System(Amazon EFS)文件系统。
D.安装到ECS集群实例的Amazon Elastic File System(Amazon EFS)卷。
QUESTION 390
A company has three VPCs named Development, Testing, and Production in the us-east-1
Region. The three VPCs need to be connected to an on-premises data center and are designed
to be separate to maintain security and prevent any resource sharing.
A solutions architect needs to find a scalable and secure solution.
What should the solutions architect recommend?
A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back
to the data center.
B. Create VPC peers from all the VPCs to the Production VPC,
Use an AWS Direct Connect connection from the Production VPC back to the data center.
C. Connect VPN connections from all the VPCs to a VPN in the Production VPC.
Use a VPN connection from the Production VPC back to the data center.
D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway
with an AWS Direct Connect connection back to the data center.
Attach all the other VPCs to the Network VPC.
一家公司在us-east-1区域拥有三个名为Development、Testing和Production的VPC。这三个VPC需要连接到本地数据中心,
并且被设计为相互隔离,以维护安全性并防止任何资源共享。解决方案架构师需要找到可扩展且安全的解决方案。
解决方案架构师应建议什么?
A.为每个VPC创建一个AWS Direct Connect连接和一个VPN连接,以连接回数据中心。
B.从所有VPC到生产VPC创建VPC对等方,使用从生产VPC到数据中心的AWS Direct Connect连接。
C.将所有VPC的VPN连接连接到生产VPC中的VPN。使用从生产VPC到数据中心的VPN连接。
D.创建一个名为Network的新VPC。在Network VPC中创建一个AWS Transit Gateway,并通过AWS Direct Connect连接回数据中心。将所有其他VPC连接到Network VPC。

Answer: D

QUESTION 391
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
A. Use multi-factor authentication (MFA) to protect the encryption keys
B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys
C. Use AWS Certificate Manager (ACM) to create, store and assign the encryption keys
D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys
Answer: B
 公司希望建立可扩展的密钥管理基础结构,以支持需要在其应用程序中加密数据的开发人员。 解决方案架构师应采取什么措施来减轻运营负担?
A.使用多因素身份验证(MFA)保护加密密钥
B.使用AWS Key Management Service(AWS KMS)保护加密密钥
C.使用AWS Certificate Manager(ACM)创建,存储和分配加密密钥
  D.使用IAM策略来限制具有访问权限以保护加密密钥的用户范围
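
A hedged sketch of how developers would use AWS KMS (answer B) for envelope encryption without operating any key infrastructure themselves; the CMK alias is hypothetical:

```python
import boto3

kms = boto3.client("kms")

# Request a data key under a customer managed CMK; encrypt locally with the
# plaintext key, then discard it and store only the encrypted copy.
data_key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

plaintext_key = data_key["Plaintext"]       # use for local encryption, then discard
encrypted_key = data_key["CiphertextBlob"]  # persist alongside the encrypted data

# Later, recover the plaintext key by asking KMS to decrypt the stored copy.
restored_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
```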
QUESTION 392
A development team is collaborating with another company to create an integrated product. The
other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is
contained in the development team's account. The other company wants to poll the queue
without giving up its own account permissions to do so.
How should a solutions architect provide access to the SQS queue?
A. Create an instance profile that provides the other company access to the SQS queue.
B. Create an IAM policy that provides the other company access to the SQS queue,
C. Create an SQS access policy that provides the other company access to the SQS queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the
other company access to the SQS queue.
Answer: C
一个开发团队正在与另一家公司合作创建一个集成产品。 另一家公司需要访问开发团队帐户中包含的Amazon Simple Queue Service(Amazon SQS)队列。
另一家公司希望在不放弃自己的帐户权限的情况下轮询队列。
解决方案架构师应如何提供对SQS队列的访问?
A.创建一个实例配置文件,以提供其他公司对SQS队列的访问权限。
B.创建一个IAM策略,以提供其他公司对SQS队列的访问权限,
C.创建一个SQS访问策略,以提供其他公司对SQS队列的访问。
D.创建一个Amazon Simple Notification Service(Amazon SNS)访问策略,该策略提供其他公司对SQS队列的访问。

以下示例策略向所有用户（匿名用户）授予对账户111122223333中名为queue1的队列的ReceiveMessage访问权限。

{
   "Version": "2012-10-17",
   "Id": "Queue1_Policy_UUID",
   "Statement": [{
      "Sid":"Queue1_AnonymousAccess_ReceiveMessage",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:ReceiveMessage",
      "Resource": "arn:aws:sqs:*:111122223333:queue1"
   }]
}
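上面的文档示例授予的是匿名访问；针对本题“仅允许合作方账户轮询队列”的需求，下面给出一个示意性的boto3草图（账户ID与队列URL均为占位值），通过队列访问策略只向对方账户授予接收和删除消息的权限。

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/queue1"  # 占位值

# 仅允许合作方账户（999988887777为占位账户ID）读取、删除消息及查看队列属性
policy = {
    "Version": "2012-10-17",
    "Id": "Queue1_CrossAccountPolicy",
    "Statement": [{
        "Sid": "AllowPartnerAccountToPoll",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:queue1",
    }],
}

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},
)
```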
QUESTION 393
A disaster response team is using drones to collect images from recent storm damage. The
response team's laptops lack the storage and compute capacity to transfer the images and
process the data. While the team has Amazon EC2 instances for processing and Amazon S3
buckets for storage, network connectivity is intermittent and unreliable. The images need to be
processed to evaluate the damage.
What should a solutions architect recommend?
A. Use AWS Snowball Edge devices to process and store the images.
B. Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent
connectivity to EC2 instances.
C. Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
D. Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.
Answer: A
灾难响应团队正在使用无人机来收集来自最近风暴破坏的图像。 响应团队的笔记本电脑缺乏存储和计算能力,
无法传输图像和处理数据。 尽管团队拥有用于处理的Amazon EC2实例和用于存储的Amazon S3存储桶,
但网络连接是断断续续且不可靠的。 需要对图像进行处理以评估损坏。
解决方案架构师应该建议什么?
A.使用AWS Snowball Edge设备处理和存储图像。
B.在与EC2实例的间歇连接期间,将图像上传到Amazon Simple Queue Service(Amazon SQS)。
C.配置Amazon Kinesis Data Firehose以创建分别针对S3存储桶和EC2实例处理图像的多个交付流。
D.使用预先安装在硬件设备上的AWS Storage Gateway在本地缓存图像,以便Amazon S3在连接可用时处理图像。
QUESTION 394
A company has a live chat application running on on-premises servers that use WebSockets.
The company wants to migrate the application to AWS. Application traffic is inconsistent, and the
company expects there to be more traffic with sharp spikes in the future. The company wants a
highly scalable solution with no server maintenance and no advanced capacity planning.
Which solution meets these requirements?
A. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
B. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
C. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
D. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
一家公司有一个使用WebSocket的实时聊天应用程序运行在本地服务器上。
该公司希望将应用程序迁移到AWS。应用程序流量不稳定，并且该公司预计将来流量会更大且会出现尖锐峰值。
该公司希望获得高度可扩展的解决方案，既无需维护服务器，也无需提前做容量规划。哪种解决方案可以满足这些要求？
A.将Amazon API Gateway和AWS Lambda与Amazon DynamoDB表一起用作数据存储，并将DynamoDB表配置为预置容量
B.将Amazon API Gateway和AWS Lambda与Amazon DynamoDB表一起用作数据存储，并将DynamoDB表配置为按需容量
C.在Auto Scaling组中的Application Load Balancer后面运行Amazon EC2实例，使用Amazon DynamoDB表作为数据存储，并将DynamoDB表配置为按需容量
D.在Auto Scaling组中的Network Load Balancer后面运行Amazon EC2实例，使用Amazon DynamoDB表作为数据存储，并将DynamoDB表配置为预置容量

Answer: B
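下面是一个示意性的boto3示例（表名、键名均为假设值），展示答案B中按需容量模式的DynamoDB建表方式：BillingMode设为PAY_PER_REQUEST后无需预置读写容量，也就无需提前做容量规划。

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# 按需容量模式（PAY_PER_REQUEST）自动应对流量尖峰，无需预置读写容量
dynamodb.create_table(
    TableName="chat-connections",  # 假设的表名
    AttributeDefinitions=[
        {"AttributeName": "connection_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "connection_id", "KeyType": "HASH"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```

在这种架构下，API Gateway的WebSocket API负责维持长连接，Lambda在连接建立、断开和收发消息时读写该表。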

QUESTION 395

A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The applications must initiate communications with other external applications using the internet. However, the company's security policy states that any external service cannot initiate a connection to the EC2 instances.
What should a solutions architect recommend to resolve this issue?
A. Create a NAT gateway and make it the destination of the subnet's route table
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
Answer: D
一家公司的应用程序托管在具有IPv6地址的Amazon EC2实例上。这些应用程序必须通过Internet主动与其他外部应用程序通信，
但该公司的安全策略规定，任何外部服务都不能主动发起与EC2实例的连接。
解决方案架构师应建议什么来解决此问题？
A.创建一个NAT网关并将其作为子网路由表的目标
B.创建一个Internet网关并将其作为子网路由表的目标
C.创建一个虚拟专用网关并将其作为子网路由表的目标
D.创建一个仅出口互联网网关并将其作为子网路由表的目标
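下面是一个示意性的boto3草图（VPC ID与路由表ID均为占位值），演示答案D：创建仅出口互联网网关（egress-only internet gateway），并在子网路由表中为::/0添加指向它的路由，使IPv6实例只能发起出站连接，而外部无法主动连入。

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. 在VPC中创建仅出口互联网网关（vpc-abc123为占位值）
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-abc123")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# 2. 在子网路由表中为所有IPv6流量（::/0）添加指向该网关的路由（rtb-abc123为占位值）
ec2.create_route(
    RouteTableId="rtb-abc123",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```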

QUESTION 396

A company is deploying a web portal. The company wants to ensure that only the web portion of
the application is publicly accessible. To accomplish this, the VPC was designed with two public
subnets and two private subnets. The application will run on several Amazon EC2 instances in an
Auto Scaling group. SSL termination must be offloaded from the EC2 instances.
What should a solutions architect do to ensure these requirements are met?
A. Configure the Network Load Balancer in the public subnets.
Configure the Auto Scaling group in the private subnets and associate it with the Application Load
Balancer
B. Configure the Network Load Balancer in the public subnets.
Configure the Auto Scaling group in the public subnets and associate it with the Application Load
Balancer
C. Configure the Application Load Balancer in the public subnets.
Configure the Auto Scaling group in the private subnets and associate it with the Application Load
Balancer
D. Configure the Application Load Balancer in the private subnets.
Configure the Auto Scaling group in the private subnets and associate it with the Application Load
Balancer
Answer: C
一家公司正在部署Web门户。该公司希望确保仅公开访问应用程序的Web部分。为此,VPC设计有两个公用子网和两个专用子网。
该应用程序将在Auto Scaling组中的多个Amazon EC2实例上运行。必须从EC2实例卸载SSL终止。
解决方案架构师应怎么做才能确保满足这些要求?
A.在公共子网中配置网络负载平衡器。
在专用子网中配置Auto Scaling组,并将其与Application Load Balancer关联
B.在公共子网中配置网络负载平衡器。
在公共子网中配置Auto Scaling组,并将其与Application Load Balancer关联
C.在公共子网中配置应用程序负载平衡器。
在专用子网中配置Auto Scaling组,并将其与Application Load Balancer关联
D.在专用子网中配置应用程序负载平衡器。
在专用子网中配置Auto Scaling组,并将其与Application Load Balancer关联
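下面是一个示意性的boto3草图（子网ID、VPC ID、ACM证书ARN等均为占位值），演示答案C的要点：把Application Load Balancer放在公有子网、用ACM证书在443端口完成SSL终止，再把Auto Scaling组中位于私有子网的实例注册到HTTP目标组。

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# 1. 在两个公有子网中创建面向互联网的ALB（子网ID为占位值）
alb = elbv2.create_load_balancer(
    Name="web-portal-alb",
    Subnets=["subnet-public-a", "subnet-public-b"],
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# 2. 为私有子网中的EC2实例创建HTTP目标组（VPC ID为占位值）
tg = elbv2.create_target_group(
    Name="web-portal-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-abc123",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 3. HTTPS监听器使用ACM证书在ALB上完成SSL终止（证书ARN为占位值）
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
# Auto Scaling组可通过TargetGroupARNs参数与该目标组关联，实例始终留在私有子网中
```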

QUESTION 397

A company is running a multi-tier web application on premises. The web application is
containerized and runs on a number of Linux hosts connected to a PostgreSQL database that
contains user records. The operational overhead of maintaining the infrastructure and capacity
planning is limiting the company's growth. A solutions architect must improve the application's
infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Select
TWO.)
A. Migrate the PostgreSQL database to Amazon Aurora
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.

D. Set up Amazon ElastiCache between the web application and the PostgreSQL database
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service
(Amazon ECS)
Answer: AE
一家公司正在本地运行多层Web应用程序。该Web应用程序已容器化，运行在多台连接到PostgreSQL数据库（包含用户记录）的Linux主机上。
维护基础架构和进行容量规划的运营开销限制了公司的成长。解决方案架构师必须改善应用程序的基础架构。
解决方案架构师应采取哪两项行动来完成此任务？(选择两项。)
A.将PostgreSQL数据库迁移到Amazon Aurora
B.将Web应用程序迁移到Amazon EC2实例上托管。
C.为Web应用程序内容设置Amazon CloudFront分配。
D.在Web应用程序和PostgreSQL数据库之间设置Amazon ElastiCache
E.将Web应用程序迁移到使用Amazon Elastic Container Service(Amazon ECS)的AWS Fargate上托管
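下面是一个示意性的boto3草图（集群名、容器镜像、子网、安全组与执行角色均为占位值），演示答案E中把容器化Web应用迁移到Amazon ECS + AWS Fargate的大致步骤：先注册兼容Fargate的任务定义，再创建无需管理服务器的服务。

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# 1. 注册兼容Fargate的任务定义（镜像地址与执行角色为占位值）
task_def = ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # 占位值
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# 2. 在已有集群上创建Fargate服务（集群、子网、安全组为占位值）
ecs.create_service(
    cluster="web-cluster",
    serviceName="web-app-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-private-a", "subnet-private-b"],
            "securityGroups": ["sg-abc123"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```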

QUESTION 398

A solutions architect needs to ensure that all Amazon Elastic Block Store (Amazon EBS) volumes
restored from unencrypted EBS snapshots are encrypted.
What should the solutions architect do to accomplish this?

解决方案架构师需要确保从未加密的EBS快照还原的所有Amazon Elastic Block Store(Amazon EBS)卷都经过加密。
解决方案架构师应该怎么做才能实现这一点？
A. Enable EBS encryption by default for the AWS Region
B. Enable EBS encryption by default for the specific volumes
C. Create a new volume and specify the symmetric customer master key (CMK) to use for encryption
D. Create a new volume and specify the asymmetric customer master key (CMK) to use for
encryption.
A.默认情况下为AWS区域启用EBS加密
B.默认情况下为特定卷启用EBS加密
C.创建一个新卷并指定用于加密的对称客户主密钥(CMK)
D.创建一个新卷并指定用于加密的非对称客户主密钥(CMK)。
Answer: A
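下面是一个示意性的boto3示例（KMS密钥别名为假设值），演示答案A：在区域级别开启“默认加密EBS”，此后在该区域新建的卷以及从未加密快照还原的卷都会自动加密。

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # 需要在每个使用的区域分别开启

# 为当前区域开启EBS默认加密
ec2.enable_ebs_encryption_by_default()

# （可选）指定默认使用的客户管理KMS密钥（别名为假设值）；不指定则使用AWS托管密钥aws/ebs
ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/my-ebs-key")

# 查询确认设置已生效
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])
```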
QUESTION 399
A company wants to share forensic accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and requires its own copy of the database.
How should the company securely share the database with the auditor?
一家公司希望与外部审计师共享存储在Amazon RDS数据库实例中的法务会计数据。审计师拥有自己的AWS账户，并需要一份属于自己的数据库副本。
公司应如何与审计师安全地共享数据库？
A. Create a read replica of the database and configure IAM standard database authentication to grant
the auditor access.
B. Copy a snapshot of the database to Amazon S3 and assign an IAM role to the auditor to grant
access to the object in that bucket.
C. Export the database contents to text files, store the files in Amazon S3, and create a new IAM user
for the auditor with access to that bucket.
D. Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS
Key Management Service (AWS KMS) encryption key
A.创建数据库的只读副本，并配置IAM标准数据库身份验证以授予审计师访问权限。
B.将数据库快照复制到Amazon S3，并为审计师分配IAM角色以授予对该存储桶中对象的访问权限。
C.将数据库内容导出为文本文件，将文件存储在Amazon S3中，并为审计师创建一个有权访问该存储桶的新IAM用户。
D.创建数据库的加密快照，共享该快照，并允许访问AWS Key Management Service(AWS KMS)加密密钥。
Answer: D
将数据库的加密快照共享给审计师的AWS账户，并授予其使用相应AWS KMS密钥的权限，审计师即可在自己的账户中复制并还原出属于自己的数据库副本。
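下面是一个示意性的boto3草图（快照标识、账户ID与KMS密钥ARN均为占位值），演示答案D的两个步骤：把加密快照共享给审计师账户，并通过KMS授权允许对方使用该客户管理密钥复制和还原快照。

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")
kms = boto3.client("kms", region_name="us-east-1")

AUDITOR_ACCOUNT = "999988887777"  # 审计师账户ID，占位值

# 1. 将使用客户管理KMS密钥加密的手动快照共享给审计师账户
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="forensic-db-snapshot",  # 快照标识，占位值
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# 2. 授予审计师账户使用该KMS密钥的权限，使其能够复制并还原这份加密快照
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # 密钥ARN，占位值
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```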
QUESTION 400
A company is experiencing growth as demand for its product has increased. The company's
existing purchasing application is slow when traffic spikes. The application is a monolithic three-tier
application that uses synchronous transactions and sometimes sees bottlenecks in the
application tier. A solutions architect needs to design a solution that can meet the required application
response times while accounting for traffic volume spikes.
Which solution will meet these requirements?
随着产品需求的增长，公司正在经历业务增长。流量激增时，现有的购买应用程序运行缓慢。
该应用程序是一个使用同步事务的单体三层应用程序，有时会在应用程序层出现瓶颈。
解决方案架构师需要设计一个既能满足所需应用程序响应时间，又能应对流量高峰的解决方案。
哪种解决方案可以满足这些要求？

A. Vertically scale the application instance using a larger Amazon EC2 instance size.
B. Scale the application's persistence layer horizontally by introducing Oracle RAC on AWS
C. Scale the web and application tiers horizontally using Auto Scaling groups and an Application Load
Balancer
D. Decouple the application and data tiers using Amazon Simple Queue Service (Amazon SQS) with
asynchronous AWS Lambda calls
A.使用更大的Amazon EC2实例规格对应用程序实例进行垂直扩展。
B.通过在AWS上引入Oracle RAC来水平扩展应用程序的持久层
C.使用Auto Scaling组和Application Load Balancer水平扩展Web层和应用程序层
D.使用Amazon Simple Queue Service(Amazon SQS)和异步AWS Lambda调用来解耦应用程序层和数据层

Answer: C
使用Auto Scaling组和Application Load Balancer对Web层和应用程序层进行水平扩展，可以在流量高峰时自动增加实例来满足响应时间要求；单纯的垂直扩展（A）无法随流量峰值弹性伸缩。
QUESTION 401

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval. What should a solutions architect recommend to meet these requirements?

一家公司在AWS上运行在线市场Web应用程序。该应用程序在高峰时段为数十万用户提供服务。公司需要一个可扩展的近实时解决方案，与其他几个内部应用程序共享数百万笔财务交易的详细信息。交易数据在存储到文档数据库以供低延迟检索之前，还需要经过处理以删除敏感数据。解决方案架构师应建议什么来满足这些要求？

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
A.将交易数据存储到Amazon DynamoDB中。在DynamoDB中设置规则，在写入时从每个交易中删除敏感数据。使用DynamoDB Streams与其他应用程序共享交易数据。
B.将交易数据流式传输到Amazon Kinesis Data Firehose，以将数据存储在Amazon DynamoDB和Amazon S3中。将AWS Lambda与Kinesis Data Firehose集成以删除敏感数据。其他应用程序可以使用存储在Amazon S3中的数据。
C.将交易数据流式传输到Amazon Kinesis Data Streams。使用AWS Lambda集成从每个交易中删除敏感数据，然后将交易数据存储在Amazon DynamoDB中。其他应用程序可以直接消费Kinesis数据流中的交易数据。
D.将批处理的交易数据作为文件存储在Amazon S3中。使用AWS Lambda处理每个文件并删除敏感数据，然后再更新Amazon S3中的文件。随后Lambda函数将数据存储在Amazon DynamoDB中。其他应用程序可以使用存储在Amazon S3中的交易文件。

Answer: C
Kinesis Data Firehose并不支持将数据直接投递到DynamoDB，因此B不可行；使用Kinesis Data Streams配合Lambda去除敏感数据后写入DynamoDB，其他内部应用程序可以直接消费同一数据流，满足近实时共享和低延迟检索的要求。
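下面是一个示意性的草图（流名称、表名和敏感字段名均为假设值），演示答案C的思路：生产者把交易写入Kinesis Data Streams，Lambda通过事件源映射消费记录、去除敏感字段后写入DynamoDB，其他应用程序可以直接订阅同一数据流。

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("transactions")  # 假设的表名


def handler(event, context):
    """Lambda消费Kinesis记录，去除敏感字段后写入DynamoDB。"""
    for record in event["Records"]:
        txn = json.loads(base64.b64decode(record["kinesis"]["data"]))
        txn.pop("card_number", None)        # 删除敏感数据（字段名为假设值）
        txn["amount"] = str(txn["amount"])  # DynamoDB不接受float，转为字符串存储
        table.put_item(Item=txn)


if __name__ == "__main__":
    # 生产者示例：把一笔交易写入Kinesis数据流（流名称与数据内容为假设值）
    kinesis = boto3.client("kinesis", region_name="us-east-1")
    kinesis.put_record(
        StreamName="transactions-stream",
        Data=json.dumps(
            {"txn_id": "t-1001", "amount": 99.5, "card_number": "4111111111111111"}
        ).encode("utf-8"),
        PartitionKey="t-1001",
    )
```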