Pass Google Professional-Cloud-Architect exam questions - convert Test Engine to PDF [Q152-Q170]

Pass Your Professional-Cloud-Architect Exam Easily - Real Professional-Cloud-Architect Practice Dump Updated Nov 26, 2024

The Google Professional Cloud Architect certification exam is intended for cloud architects, engineers, and consultants who design and deploy solutions on Google Cloud Platform. The certification demonstrates a candidate's proficiency in using Google Cloud technologies to design, develop, and manage secure, scalable, and highly available solutions. The exam is rigorous and comprehensive, consisting of multiple-choice and scenario-based questions that cover a broad range of topics, including GCP infrastructure, networking, security, data storage, analytics, and machine learning. Passing it validates the candidate's ability to design, develop, and manage GCP solutions that meet the business and technical requirements of their organization.
NEW QUESTION 152
For this question, refer to the Dress4Win case study.
As part of Dress4Win's plans to migrate to the cloud, they want to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:
* The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day
* Their administrators are notified automatically when their application reports errors
* They can filter their aggregated logs down in order to debug one piece of the application across many hosts
Which Google Stackdriver features should they use?
A. Logging, Alerts, Insights, Debug
B. Monitoring, Trace, Debug, Logging
C. Monitoring, Logging, Alerts, Error Reporting
D. Monitoring, Logging, Debug, Error Report

NEW QUESTION 153
Your company runs several databases on a single MySQL instance. You need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance. How should you configure the storage?
A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.
B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage.
Reference: https://cloud.google.com/compute/docs/instances/sql-server/best-practices

NEW QUESTION 154
You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance.
How should you install the software?
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.
C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.
D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.

NEW QUESTION 155
For this question, refer to the Dress4Win case study.
Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?
A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
B. They should add additional unit tests and production scale load tests on their cloud staging environment.
C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.
D. They should add canary tests so developers can measure how much of an impact the new release causes to latency.

NEW QUESTION 156
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs. What should you do?
A. Direct them to download and install the Google Stackdriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features
The Stackdriver Logging agent streams logs from your VM instances and from selected third-party software packages to Stackdriver Logging. Using the agent is optional, but we recommend it. The agent runs under both Linux and Microsoft Windows.
Note: Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). The API also allows ingestion of custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs, and you can analyze all that log data in real time.
References:
https://cloud.google.com/logging/docs/agent/installation
https://medium.com/google-cloud/hidden-super-powers-of-stackdriver-logging-ca110dae7e74

NEW QUESTION 157
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Reference: https://cloud.google.com/solutions/data-lifecycle-cloud-platform

NEW QUESTION 158
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace:
[stack trace image not included in this export]
A. Recompile the CLoakedServlet class using an MD5 hash instead of SHA1
B. Digitally sign all of your JAR files and redeploy your application
C. Upload missing JAR files and redeploy your application

NEW QUESTION 159
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods when the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
A. Cloud Run and BigQuery
B. Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D. A Compute Engine autoscaling managed instance group and Cloud Bigtable
References:
https://cloud.google.com/run/docs/about-instance-autoscaling
https://cloud.google.com/blog/topics/developers-practitioners/bigtable-vs-bigquery-whats-difference

NEW QUESTION 160
Case Study 2: TerramEarth

Company Overview
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

Company Background
TerramEarth formed in 1946, when several small, family-owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended members of their family. TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change.
For the past 20 years, trends in the industry have been largely toward increasing productivity by using larger vehicles with a human operator.

Solution Concept
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment
TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload them via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old. With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%.
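As a rough sanity check on the ingest figures in the case study, the stated rate of 9 TB/day from 200,000 connected vehicles implies only a few bytes per field. The per-field size below is an inference for illustration, not a number stated in the case study:

```python
# Back-of-the-envelope check of TerramEarth's stated ingest numbers.
# The 9 TB/day, 200,000-vehicle, 120 fields/s, and 22 h/day figures come
# from the case study; the derived bytes-per-field value is an inference.
connected_vehicles = 200_000
fields_per_second = 120
hours_per_day = 22

fields_per_day = connected_vehicles * fields_per_second * hours_per_day * 3600
daily_bytes = 9e12  # ~9 TB/day, using decimal terabytes

bytes_per_field = daily_bytes / fields_per_day
print(f"{fields_per_day:.3e} fields/day, ~{bytes_per_field:.1f} bytes per field")
# → 1.901e+12 fields/day, ~4.7 bytes per field
```

This kind of check is useful when sizing an ingest pipeline for questions like Q160 and Q166: at roughly 5 bytes per field, the bottleneck is the number of small messages, not raw bandwidth.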
However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements
* Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
* Support the dealer network with more data on how their customers use their equipment, to better position new products and services
* Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

CEO Statement
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring rapidly and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

For this question, refer to the TerramEarth case study.
TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?
[architecture diagram options not included in this export]
The push endpoint can be a load balancer. A container cluster can be used. Cloud Pub/Sub is suitable for stream analytics.
References:
https://cloud.google.com/pubsub/
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data

NEW QUESTION 161
During a high-traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future. What should you do?
A. Use a different database.
B. Choose larger instances for your database.
C. Create snapshots of your database more regularly.
D. Implement routinely scheduled failovers of your databases.
Explanation: Take regular snapshots of your database system. If your database system lives on a Compute Engine persistent disk, you can take snapshots of your system each time you upgrade. If your database system goes down or you need to roll back to a previous version, you can simply create a new persistent disk from your desired snapshot and make that disk the boot disk for a new Compute Engine instance. Note that, to avoid data corruption, this approach requires you to freeze the database system's disk while taking a snapshot.
Reference: https://cloud.google.com/solutions/disaster-recovery-cookbook

NEW QUESTION 162
You have been asked to select the storage system for the click-data of your company's large portfolio of websites.
This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?
A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore
Explanation: Google Cloud Bigtable is a scalable, fully managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
* Low-latency read/write access
* High-throughput analytics
* Native time series support
Common workloads:
* IoT, finance, adtech
* Personalization, recommendations
* Monitoring
* Geospatial datasets
* Graphs
References:
https://cloud.google.com/solutions/data-analytics-partner-ecosystem
https://zulily-tech.com/2015/08/10/leveraging-google-cloud-dataflow-for-clickstream-processing/

NEW QUESTION 163
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do?
A. Configure a Cloud SQL instance with high availability enabled.
B. Configure a Cloud Spanner instance with a regional instance configuration.
C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets.
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Reference: https://cloud.google.com/vpc/docs/vpc

NEW QUESTION 164
For this question, refer to the TerramEarth case study.
TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?
A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster …..
Explanation: Multi-Regional Storage guarantees 2 replicas that are geo-diverse (100 miles apart), which can provide better remote latency and availability. More importantly, Multi-Regional storage heavily leverages edge caching and CDNs to deliver content to end users. All this redundancy and caching means that Multi-Regional comes with overhead to sync and ensure consistency between geo-diverse areas. As such, it is much better suited to write-once-read-many scenarios: frequently accessed ("hot") objects around the world, such as website content, streaming videos, gaming, or mobile applications.
Reference: https://medium.com/google-cloud/google-cloud-storage-what-bucket-class-for-the-best-performance-5c847ac8f9

NEW QUESTION 165
A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
A. Help the engineer to convert his websocket code to use HTTP streaming.
B. Review the encryption requirements for websocket connections with the security team.
C. Meet with the cloud operations team and the engineer to discuss load balancer options.
D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances. The HTTP(S) load balancer has native support for the WebSocket protocol.
Incorrect answers:
A: HTTP server push, also known as HTTP streaming, is a client-server communication pattern that sends information from an HTTP server to a client asynchronously, without a client request. A server push architecture is especially effective for highly interactive web or mobile applications, where one or more clients need to receive continuous information from the server.

NEW QUESTION 166
For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?
A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
Reference: https://cloud.google.com/storage/docs/locations

NEW QUESTION 167
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google.
B. Federate authentication via SAML 2.0 to the existing Identity Provider.
C. Provision users in Google using the Google Cloud Directory Sync tool.
D. Ask users to set their Google password to match their corporate password.
Reference: https://support.google.com/a/answer/2611859?hl=en

NEW QUESTION 168
You have deployed an application to Kubernetes Engine, and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem. What should you do?
A. Use gcloud sql instances restart.
B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL.
D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.

NEW QUESTION 169
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?
A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
Stackdriver Logging provides you with the ability to filter, search, and view logs from your cloud and open source application services.
It allows you to define metrics based on log contents that are incorporated into dashboards and alerts, and it enables you to export logs to BigQuery, Google Cloud Storage, and Pub/Sub.
Reference: https://cloud.google.com/stackdriver/

NEW QUESTION 170
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? (Choose 2 answers.)
A. Cloud Deployment Manager uses Python.
B. Cloud Deployment Manager APIs could be deprecated in the future.
C. Cloud Deployment Manager is unfamiliar to the company's engineers.
D. Cloud Deployment Manager requires a Google APIs service account to run.
E. Cloud Deployment Manager can be used to permanently delete cloud resources.
F. Cloud Deployment Manager only supports automation of Google Cloud resources.
Reference: https://cloud.google.com/deployment-manager/docs/deployments/deleting-deployments