Google Cloud Certified - Professional Cloud Architect


Storage and Data Services

1. Storage Options Overview

When it comes to cloud storage options, we're going to talk about the different choices Google has for storing your data in the cloud: multi-regional, regional, nearline, and coldline. Then we'll go ahead and talk about storage features like lifecycle management. Google Cloud has four distinct storage classes that you should be aware of, and your choice will largely be determined by the availability and durability you need as well as the cost you can accept: multi-regional, regional, nearline, and coldline. We already discussed these in the module that goes through cloud storage.

But when it comes to the storage classes, you need to make the right decision based on your use case. For example, if you're doing content delivery, you'll want to place your storage buckets in the proper region, and there are many benefits to doing that. You don't want a bucket in, say, Asia when most of your users are actually in the US. It just doesn't make sense: you'd be costing the organisation performance, and possibly paying more as well, since storage is generally cheaper in the US than in Asia or Europe. So definitely take a look at the different storage classes and try to determine the right use case for each.

When it comes to features, cloud storage supports object versioning. Essentially, this is where you modify a file or an object and create a new revision of it. So, for example, someone modifying an object might produce object one, then object one A, then object one B. If you want to manage those versions automatically, there's also lifecycle management. This is where you target specific objects and, after a certain point in time, when you may no longer need them in regional storage, they can be migrated to nearline or coldline. Then, what if there's a change to your objects or your buckets? You can get notified about that as well with object change notification. And lastly, you can also import data. There's also a demo I perform on how to migrate data, for example from AWS, as well as region to region with the Storage Transfer Service.
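
To make versioning concrete, here's a minimal sketch using the google-cloud-storage Python client; the bucket name is a placeholder, and I'm assuming credentials are already configured:

```python
# A minimal sketch: enable object versioning on an existing bucket.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # hypothetical bucket name

# Keep prior generations of objects when they are overwritten or deleted.
bucket.versioning_enabled = True
bucket.patch()

# List every generation (version) of every object in the bucket.
for blob in client.list_blobs(bucket, versions=True):
    print(blob.name, blob.generation)
```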

When it comes to object versioning, the main goal is to enable you to keep track of what is being modified. It's going to keep track of overwrites and deletes. Users can archive versions of an object, restore an object to an earlier state, or delete a version. The picture here is from the Google Cloud website; if you want more information, consult the documentation. But essentially, it's fairly simple to understand: you drop an object in a bucket, and over time that object may change. It may get a different version, so instead of generation 1 it becomes generation 2, and so on, and over time lifecycle management can take over and move that object to archive storage. When it comes to lifecycle management, be aware that its goal is to help you create policies that perform specific actions, typically moving regional objects to, say, nearline or coldline. Once the criteria are met, the action fires; most commonly the criterion will be the age of the object, though you can also match on things like storage class or the number of newer versions.
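
Here's a minimal sketch of such a lifecycle policy in the same Python client, assuming the bucket already exists; the thresholds are just examples:

```python
# A minimal sketch: age-based lifecycle rules that demote objects to
# cheaper storage classes over time.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # hypothetical bucket name

bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)  # after 30 days
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)  # after 90 days
bucket.patch()

for rule in bucket.lifecycle_rules:
    print(rule)
```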

You have the ability to change lifecycle configurations. One note: it can take a little bit of time for those configs to be fully updated and propagated through the GCP storage platform, and the inspection of objects occurs asynchronously and in batches. So it's not like synchronous replication, where if you update a block the target is automatically updated from the source; it's going to take a little bit of time. Then we have object change notification, which is of course very useful. If someone goes in, updates a file or an object, and posts it, you get notified. You can set up a change notification on a bucket and have it send a notification, and the link for that is there as well. Importantly, there's also the Storage Transfer Service, which allows you to import online data into a cloud storage bucket. And there's offline media import as well, through a third-party provider.
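
Here's a minimal sketch of wiring a bucket to Pub/Sub for change notifications; it assumes a topic named storage-changes already exists and that the service account is allowed to publish to it:

```python
# A minimal sketch: publish a Pub/Sub message when objects are created
# or overwritten in the bucket.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # hypothetical bucket name

notification = bucket.notification(
    topic_name="storage-changes",      # hypothetical, pre-existing topic
    event_types=["OBJECT_FINALIZE"],   # fires on create/overwrite
)
notification.create()
```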

This would be sort of similar to what Amazon has with Snowball, with similar availability, and third-party service providers can perform that for you. And, based on what I've seen, I believe there are a couple of new capabilities on the roadmap that are more akin to Amazon's Snowmobile, where you send out a truck or a pod and migrate data that way, and have partners essentially migrate the data onto the platform for you. More information can be found here; feel free to take a look and determine if anything is useful to you. When it comes to cloud storage, just keep in mind that it is essentially fully integrated with many cloud platform services. So you'll be able to use it with Compute Engine, App Engine, and other services such as Cloud Pub/Sub; you name it, it'll be integrated in most cases. Here are some links; again, I encourage you to take a look and see if any are useful.

2. Cloud Platform Data Storage Security

When it comes to data storage security, it's important to understand how your cloud provider approaches data security, and then we'll talk about secure data storage and how that fits into Google's infrastructure as a whole. GCP's approach to security is built on a global-scale infrastructure designed to provide security through the entire information processing lifecycle. The infrastructure provides secure deployment of services, end-user privacy safeguards, secure communications between services, secure and private communication with customers over the Internet, and safe operation by administrators. Google uses this same infrastructure to build out its own Internet services.

So just be aware that the infrastructure Google Cloud Platform runs on is essentially the same infrastructure that other Google services use. Google is, as you're likely aware, one of the largest, if not the largest, data centre and telecom providers in the world. And if you're not aware, Google has a huge data centre footprint, not just for Google Cloud but also for their own infrastructure and the services that use it, such as Search, Gmail, etc. So Google Cloud Platform essentially builds on those services when it comes to data security. There are two areas we'll concentrate on in what's called the Google infrastructure security layers: with regard to storage services, it will revolve around data encryption and deletion. There is a white paper here; let's take a quick look at it.

The URL for the white paper on security is essentially cloud.google.com/security/security-design. There is a link for this in the module; you can go there and download the design overview. This covers the infrastructure security layers. If you go to the website, you'll see how Google handles their own data centres, and it talks about how the data centres use everything from biometrics to lasers and all these really interesting security perimeters. They use vehicle barriers, and they also go into how virtual machines boot up and how the BIOS has different components. We won't go into all the little details of this, but I want to make sure that you have a good look at it. The areas we're talking about now are really around secure data storage, which is encryption at rest and deletion of data. Now, when it comes to encryption at rest, you do have the option to supply your own keys if you so choose.

Or you could use Google's Key Management Service. For example, if you're using Google at Work or any of the Gmail services, you have the ability to tie an organisation to it and use essentially the same security built into that as well, if you want. But with that said, just take a look at how things are handled around encryption and deletion of data. When it comes to storage security, GCP uses key management and has very strict compliance with data wiping requirements. It also performs encryption at the application layer. In addition, the hardware itself supports encryption, and they use what are called Titan chips; we'll talk about those in a second. The SSDs, for example, are secured as well. But just to finish this example: when a drive fails, that drive has to be replaced.
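
As an illustration of bringing your own key, here's a minimal sketch of a customer-supplied encryption key (CSEK) upload; the key here is generated on the fly purely for demonstration, and the names are placeholders:

```python
# A minimal sketch: encrypt an object with a customer-supplied AES-256 key.
import os
from google.cloud import storage

encryption_key = os.urandom(32)  # in practice, from your own key management

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # hypothetical bucket name

# Google stores only a hash of the key; the same key must be presented
# to read the object back.
blob = bucket.blob("secret.txt", encryption_key=encryption_key)
blob.upload_from_string("sensitive data")

print(blob.download_as_bytes().decode())
```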

Google has very strict processes for removing failed drives. For example, a failed drive has to be wiped and destroyed; it can't just leave the building without going through a proper lifecycle process. When it comes to stored data, there's essentially what's called a schedule for deletion. Let's say, for example, you log into your account with Google and you delete a user. Google will delete the account and all the data with it, but it's still retained for a specific period of time, typically a few hours or a few days. It depends on whether it's Gmail or some other cloud service. The documentation isn't very precise about the exact period, but at least from the paperwork I've found, data is deleted in accordance with service-specific policies.

So that's essentially what I'm saying: it really depends on the service. Each service has its own deletion policy, and the infrastructure notifies the services handling user data that the account has been deleted. As for server security, this is where the Titan platform comes in: a powerful, purpose-built security chip. There is a link to learn more about it in this blog post; I won't dive into it, I just want you to be aware of it. The goal of Titan is to provide what's called a root of trust. It's going to validate that the system is booting what it should be booting and that it actually has all the right components. And if anything fails or doesn't match up, that system is not going to boot; it's basically dead in the water. And here are some links to take a look at. Let's continue on.

3. Cloud Storage Overview Part 2

Now, there are four storage classes. When we talk about classes, it means exactly what it sounds like: there are different levels of service. Just like on a typical domestic flight, you generally have at least two, if not three, classes: coach, business, and first; nowadays it's more like economy, super economy, and stupid economy. I think you get the point. With object storage, it's really no different. We have different classes, and what this means is that there are different levels of availability, performance, cost structures, and capabilities, but also different use cases.

So let's talk about multi-regional first. Multi-regional is essentially geo-redundant storage. This is the highest level of availability and performance, and it's ideal for general applications that require very low latency. Another thing to bring up: if you're an international organisation, this is probably one of the choices you'll want to look at, since it's distributed as well. Regional also offers a high level of availability and performance, except that it lives in one region; it's more localised. This is storage you'll want to keep in a more regional approach, such as in the United States, Asia, or Europe, whichever region you're using. Now on to nearline and coldline. Nearline, first, is essentially very low-cost storage; it's certainly a lot cheaper. And again, I won't cover pricing in detail.

There is a pricing calculator that I go over at the end, which you can experiment with. The pricing is also on the page, and you can see that they do provide estimated storage pricing. When I go through the calculator, I also show you that different regions and zones have different pricing. So I like to call it a moving target because, if you're an enterprise, you're never going to guess this exactly right; it's just not going to happen. Now let's look at nearline. A good use case for nearline is data that you need to have available for a certain length of time. But after that length of time, say typically one or two months, you'll generally want to move it to coldline if you're not going to access it.

And the reason is that there is a substantial price difference in most cases. It also doesn't make sense to pay a higher price for data you're not going to access anyway. In many cases, coldline storage will be used for compliance or archiving purposes. Consider the storage classes to be a component of Google cloud storage: you use the same APIs to access the storage regardless of class. The questions are: what kind of availability do you need, what kind of performance do you need, how quickly do you need to access the data, and what kind of cost are you willing to pay? Now let's compare cloud storage to the other storage capabilities in Google Cloud.
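
If data is already sitting in a more expensive class, you can also demote individual objects from code; here's a minimal sketch, with placeholder names, that performs a server-side rewrite into coldline:

```python
# A minimal sketch: rewrite an existing object into coldline storage.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # hypothetical bucket name

blob = bucket.get_blob("logs/2017-01-01.csv")    # hypothetical object
blob.update_storage_class("COLDLINE")            # server-side rewrite
print(blob.storage_class)
```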

As you can see, we've got Cloud SQL, Datastore, Bigtable, BigQuery, etc. We're going to talk about each of these individually; I figure that's the best approach, covering each one on its own. So, cloud storage is measured in petabytes. You can store pretty much endless amounts of data. And what's good about cloud storage is that you don't have to worry about managing storage arrays or whether you have enough space; Google handles that for you. You just have to manage the right credit card or PO to pay for it. Now, I'm going to talk here about AWS and how cloud storage compares to AWS as well. What I did want to point out, for those folks taking either exam, is that both exams will challenge your knowledge of whether you know the right use case for the right storage service. For example, if you get a question about scaling SQL horizontally, you should ask: do I use Cloud SQL or Cloud Spanner? Cloud Spanner isn't shown here; it didn't appear on this slide.

But again, do you use "cloud SQL" or "cloud spanner"? For example, if you need a data warehouse, then you may need to look at BigQuery if it's a data warehouse that you want to use with SQL, for example. And there is just so much more to talk about. But for this course, I'm going to focus on storage and make sure that we compare the different types for you. So let's finish up cloud storage and continue on. Okay, for our AWS folks, let's go ahead and make sure you understand the similarities and differences between AWS and Google cloud platform storage. So AWS has a solution called S Three. This is object storage. Google Cloud has cloud storage and object storage, as far as we're aware of.It's essentially the same SLA in terms of availability. Google now has hot storage, coal storage, and cool storage. and you can see that. Now. This is from Wrightskill. Again, if you are unfamiliar with writeskill, they are one of my favourite cloud organizations. Vendors, I guess in that sense, have this platform that allows you to manage all these different platforms in one view and lets you control everything from pricing to services to scalability.

And it gives you, like I said, a really nice aggregation of your services, if that's what you need. They also do a tonne of great research, reporting, and surveys, and they have an annual cloud report; there's a name they give it that I don't remember off the top of my head, but I'll leave the link for that as well. RightScale does a lot of good research, and this comparison is from them, so I have to give them credit for it. As you can see, the two platforms compare quite well. You can see, for example, the size limits, and this is still true today: it's five terabytes per object, but you can still scale indefinitely, so you can keep on dumping data into cloud storage and it really doesn't matter. When we talk about archive storage, we have AWS Glacier versus Coldline. For cool storage that's infrequently accessed, you compare S3 Infrequent Access to Nearline. And for your highly available hot storage, S3 Standard compares to GCS storage that is typically regional or multi-regional.

4. Storage Basics Demo Part 1

So I'm over in the console, and as you can see, I'm at the home page under one of my projects. What we want to do now is add some storage under one of these projects. I've got, I think, three projects; yes, I do. I've got the boot camp project and a couple of others. Okay, so now let's go over to the sidebar, and you can see that there's an area called Storage. I'm just going to highlight this, because initially, when you first look at this, you don't think of Bigtable as being storage; at least the name doesn't seem to imply that. So just remember, if you go to Bigtable, the goal of that service is to provide Hadoop-style NoSQL capabilities, and we can go ahead and create an instance there if we so choose.

But since the goal of this lesson is storage, let's go over there. So this is cloud storage. Now, under this project, I have no storage, so we can create some storage buckets for this particular project. And just to show you, if I go over to the bootcamp project, I've already created some buckets over there, and you can see that they're in certain regions. The lifecycle option is enabled on one but not the others. And then these are the labels attached to that storage. If I click on one of these buckets, you can see that its default storage class is regional, whereas these others are multi-regional. If you take the cloud architect exam, you'll need to understand the distinction between these classes.

Then, let me go over here to see which one was created last, I believe. Yeah. So, as you can see, this one has a good number of uploaded files. These are exports; essentially the billing data that I export. You get it on a daily basis, so you can see the CSV file that is exported every day. If I go back to buckets and then over to this regional bucket here, you see that there are no objects in it. So let's go to the blank project and create a bucket from scratch. If you haven't created a bucket yet, I encourage you to log into your free account with Google, get your login set up and your credit applied, and play around with this so you understand how to create a bucket and what to look for when you do. And remember that Standard is going to be the default when it comes to the storage class.
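
If you'd rather inspect buckets from code than click through the console, here's a minimal sketch that lists each bucket with its storage class, location, and objects; it assumes default credentials and an existing project:

```python
# A minimal sketch: enumerate buckets and their contents.
from google.cloud import storage

client = storage.Client()

for bucket in client.list_buckets():
    print(bucket.name, bucket.storage_class, bucket.location)
    for blob in client.list_blobs(bucket.name):
        print("  ", blob.name, blob.size)
```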

And I'm bringing this up because, once again, if you take the architect exam or the data engineer exam, both of those exams will expect you to know some of the differences and will quiz you on them. So I'm going to spend a little more time than usual on this because I want to make sure you understand storage; it's going to make up at least 10% or 12% of the architect exam, so it's a good chunk of the test. Okay? So now, remember: a bucket is object storage. For my AWS friends and fans out there, this is essentially Google's answer to S3. And you can see here that you have a lot of choices, right?

So what exactly is the right choice? One of the strategies I highly recommend is to understand why you're creating the bucket in the first place. Is it to keep log files, or is it for content delivery? If it's for content delivery, you probably want to go with multi-regional, assuming it's important to your organisation that people can access it from whichever region is most appropriate for them. So think of it from that perspective: if you have content that is going to be downloaded routinely, maybe you want to think about multi-regional. Regional, as one would expect, is proportionally lower in cost. Regional data is basically kept in one location, say the US, when you don't need another region like Europe or Asia. Nearline is backup, essentially; it's for infrequently accessed data. The key point here is that it's very easy to confuse nearline and coldline on the test. So remember: coldline sounds like nothing is happening, and that's your archive, essentially.

And if you're not going to access the data at least once every 30 days, it's best to put it on coldline, just because the cost is substantially different, at least in most regions. Now you can see that when I select nearline, it allows me to again choose the location for the data, so the location choice updates with the storage class. If you do take the exam, you need to know this inside and out. Again, I'm just spending time on this because I want to make sure those who take the exam get it. When you make a decision about which storage class to go with, that's just one of the choices; we haven't even gotten to the type of disk you're going to use. This is about how the objects are going to be stored, not the underlying SSDs or anything like that per se. We'll talk more about hardware specifics in a minute. So in this case, I just want to create a regional bucket, again just to show you how to create one. I'm probably going to use US East, and then labels.
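
Here's a minimal sketch of the same bucket creation from the Python client; the bucket name is a placeholder and must be globally unique, and REGIONAL here reflects the classes as they existed at the time (newer buckets typically use STANDARD):

```python
# A minimal sketch: create a regional bucket in us-east1.
from google.cloud import storage

client = storage.Client()

bucket = storage.Bucket(client, name="bootcamp-sql-files-0")  # hypothetical
bucket.storage_class = "REGIONAL"  # legacy class; STANDARD on newer buckets
client.create_bucket(bucket, location="us-east1")
```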

Now, a label is going to be important, especially if you're going to create a fair number of buckets and you want to find specific keywords pretty quickly. So, for example, let's say you want to store information on a bucket about a specific application. In this case I'll use SQL; note that labels must be in lower case. As you can see, I'm just going to call them sql-files to keep things simple. There are a couple of ways you could do this, but for me, I like to use numbers for the value. You don't have to use numbers. For example, when you create a bucket, maybe you want one for specific types of files in production and another for development; you can do that. I like to add a value. Or let's say this is going to be for development, and then the value is an SQL file.

So again, you could do this in many different ways. If you're a larger company, I'd recommend putting something related to the region in the key, so you can cross-reference the region with the files you want to reference as well. Now, if you hover over this area, the help text isn't coming up for some reason; it's weird, but it is normally there. The other thing I've noticed is that, even though I'm using Chrome, you may get a different response with IE or a different version of Chrome. So the help tooltip may or may not appear to tell you what you should put in for the value. Normally it would have popped up, but it didn't; that's okay. I'm just going to go ahead and put a value in there, zero or one. Again, there's no right or wrong answer, so I'm going to leave it like that.
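
Labels can also be set from code; here's a minimal sketch, assuming the bucket already exists (keys and values must be lower case):

```python
# A minimal sketch: attach searchable labels to an existing bucket.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("bootcamp-sql-files-0")  # hypothetical bucket

bucket.labels = {"app": "sql-files", "env": "dev", "region": "us-east1"}
bucket.patch()

print(bucket.labels)
```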

So let's sum up what we're doing. I need to name this as well, so I'm going to create a name; I'm going to call this boot camp. And again, I have this horrible habit of picking names that are taken, and it'll send you warning messages until the name is unique and meets the requirements. I'm going to call it boot camp, and for the region, I'm going to say US East. Okay, it seems that name was acceptable for the bucket. Again, if you're in a large production or development environment, it would be very wise to think of a naming schema that makes sense, because when you have hundreds of buckets with thousands of files, it can be very hard to find what you need. So think of a naming schema. Okay, so I think we get what we just did. To sum it up, you want to name the bucket and pick the storage class.

If you have any questions about storage class, you should go over this page again; I believe you can see what's highlighted here. What I like is that they give you the page you want to go to, and if you go to that page, it explains everything you really need to know, from APIs to pricing to availability. For the exam, I would definitely go to this page and spend 30 minutes reviewing it up and down, because a lot of the questions will focus on things like the lowest cost per gig and the minimum storage duration, like this one with a minimum storage duration of 90 days. Also pay attention to geo-redundancy: the only class that's geo-redundant is multi-regional, so that might be easy for you to remember. Then there's regional, and then nearline.

Nearline has a 30-day minimum storage duration, essentially, and coldline has a 90-day minimum. And again, on both exams, the data engineer's and the cloud architect's, you're going to get a fair number of questions on these. Then there's the bucket's default storage class, which I'd like to go over. If you don't select a class when you create a bucket, it'll be assigned the default storage class based on what you're doing. A bucket assigned Standard will behave as either multi-regional or regional, depending on the location. And if you had the legacy reduced-availability class, that correlates more closely to nearline, and then there's coldline. So that's generally how those correlate.

So do pay attention to that. Then come over here. Again, I must say that one thing Google excels at is providing visual instructions. Another thing, if you haven't noticed: they give you the instructions to complete each task in the console, in gsutil (the command line), or, if you want to set up an API call, you can do that as well; they have both JSON and XML APIs. So it's fairly easy to figure out how to use Google Cloud once you actually read the instructions and go through the pages. Now, I'll admit that the most difficult part is when you get into the development areas. As a developer, you need to really think like a developer: you need to understand how the processes work and how the APIs work, and some of that can be fuzzy, to be honest. With that said, let's go back to the console, create this bucket, and leave it as regional. You can see that it has been created, but there are no objects in it. So let's go to the buckets.

Right now, you can see that the storage class is regional and the location is US East. There's no lifecycle policy. And then SQL, remember, that's the label I gave it, and you can see the zero and one values. Again, this is extremely helpful when you're looking through console pages or, as typically happens, CLI dumps for the files you're looking for. So I go here, and you can see the options up here: do I want to upload files, or create a folder? So I can create a folder here and call it, say, home files, and then hit create. Note that there are a lot of restrictions on file names, and folder names for that matter. From here, I could upload to the folder or directly to the bucket; whatever I want to do, it doesn't matter. But let's go over to Settings, where you can see the project access details. You could use the REST API as well, and you're going to need this: your developers will at least be required to access the content programmatically, especially with the services you might want to tie in for interoperability. Then there's Transfer.

Now, this one is actually a cool feature, and I cover it when I talk about storage transfer and migration. You can create a transfer from AWS to Google, for example. Let's return to the browser and go to the bucket. I want to go to the folder and upload files, so I click "upload files," and I'm just going to take these pictures, snapshots of what I have here. And you can see that it's uploaded; it's pretty darn quick. It doesn't take too long to upload to a bucket, especially if you have a decent connection at home. And it's done. So if I go over to buckets, then folders, then home files, I can go here to share publicly. Now, a word of advice to you security-conscious individuals: Google has done a good thing by not enabling sharing immediately; you've got to enable it yourself. So be very cautious when you do, because once you do, someone could figure out the link, or you could share that link, and you can see it brings you right to that file.
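
And here's a minimal sketch of that upload-and-share flow from the Python client; the names are placeholders, and note that make_public() will fail on buckets with uniform bucket-level access enabled:

```python
# A minimal sketch: upload a file and expose it via a public URL.
# Be cautious: make_public() lets anyone with the URL read the object.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("bootcamp-sql-files-0")  # hypothetical bucket

blob = bucket.blob("home-files/picture1.png")
blob.upload_from_filename("picture1.png")

blob.make_public()
print(blob.public_url)
```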
