Pass Microsoft Certified: Azure Solutions Architect Expert Certification Exam in First Attempt Guaranteed!
Get 100% Latest Exam Questions, Accurate & Verified Answers to Pass the Actual Exam!
30 Days Free Updates, Instant Download!
AZ-305 Premium Bundle
- Premium File 191 Questions & Answers. Last update: Jan 18, 2023
- Training Course 98 Lectures
- Study Guide 933 Pages
AZ-305 Exam - Designing Microsoft Azure Infrastructure Solutions
Microsoft Certified: Azure Solutions Architect Expert Certification Practice Test Questions and Answers, Exam Dumps
All Microsoft Certified: Azure Solutions Architect Expert certification exam dumps, study guides, and training courses are prepared by industry experts. These practice test questions and answers, exam dumps, study guides, and training courses help candidates study and pass hassle-free!
Design a Data Management Strategy
4. Database Auditing Strategy
So, as we're getting into our data strategy section here, one thing you need to understand is how you ensure the integrity of data through, say, an auditing strategy, and how you improve its performance through a caching strategy. Data auditing and data caching are two related elements here. SQL Database has a transaction log, so everything that happens to it can be rolled back transaction by transaction, and events such as logins are recorded as well.
There is an event log as well, so these sorts of things come with it; we can see the events that are happening to your SQL database. These logging features usually write to append blobs, so the log is not going to go back and update, delete, or change previous entries; events just get appended to the end. Append blobs are designed precisely for appending a new piece of data to the end of a file.
Now, when you're dealing with SQL databases, there are a couple of different levels. First, there's the server: you create a SQL Database server in the Azure portal, and then you create individual databases on that server. So you can have multiple databases running on a single database server. The auditing features can then be enabled either at the server level, which affects all databases, or against individual databases, so you have some flexibility there. Now, Azure Monitor is the central dashboard for monitoring and reporting within Azure, and not surprisingly, it has hooks into SQL databases.
And so you can go into Azure Monitor, tie that in, and get some of those log files. We said SQL Database generates logs of its events; those events can be ingested into Azure Monitor, and you can use them as part of your alerting, reporting, or graphing. Within SQL Server itself, there are also management functions and system-level functions.
So, instead of relying on the Azure portal or Azure Monitor, you can call these system functions, which also exist in Azure SQL Database, write your own reports, have the results stored to a table, or be notified when a certain event happens. These are programmable from your perspective. Finally, Power BI is the business intelligence front end for creating really great reports within Microsoft Azure, and within Microsoft generally. You can pull your SQL database events into Power BI, and if you want to do reporting, monitoring, and auditing there, that's obviously available to you as well.
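The append-only behavior described above can be sketched in plain Python. This is a minimal illustration of append-blob-style semantics, not the Azure Storage SDK; the class and method names are invented for the example.

```python
# Minimal sketch of append-only audit logging, mimicking how auditing
# writes events to append blobs: records are only ever added to the end,
# never updated or deleted. Names here are illustrative, not an Azure API.
import json
import time

class AppendOnlyAuditLog:
    """An in-memory stand-in for an append blob: append-only semantics."""

    def __init__(self):
        self._records = []  # private; no update/delete methods are exposed

    def append(self, event_type, detail):
        # Each event is serialized and appended to the end of the log.
        record = json.dumps({
            "timestamp": time.time(),
            "event_type": event_type,
            "detail": detail,
        })
        self._records.append(record)

    def read_all(self):
        # Reads return events in the order they were written.
        return [json.loads(r) for r in self._records]

log = AppendOnlyAuditLog()
log.append("login", "user alice connected")
log.append("query", "SELECT executed on Orders")
print(len(log.read_all()))  # 2 events, in write order
```

The point of the sketch is the missing surface area: there is no way to modify an existing record, which is exactly why append blobs suit audit trails.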
5. The Concept of DTUs
So when it comes to Azure SQL databases, Azure gives us a lot of different pricing options. One option is what's called vCore pricing, which means we choose the number of CPUs and the amount of memory, and we pay basically by the hour for that, just like getting a virtual machine. So on the one hand, you can be very specific: "I want ten CPUs and I need 50 GB of memory," and you pay for that.
But there's also a pricing model where Azure has combined all those metrics (the CPUs, the memory, the local disk size, and so on) into one metric called a DTU. So there's a simplified pricing model and a more granular one. What is a DTU? We see on the screen that you can get 10 DTUs, 20 DTUs, or 50 DTUs. What does that mean? DTU stands for database transaction unit. You give up control over the underlying hardware when you use DTU pricing: you don't get to choose how many CPUs, how much RAM, or how many operations per second.
You simply choose a DTU count. Now, the DTU is a relative measure of performance; it does not correspond to an absolute value, and there is no formula like multiplying the CPUs by the memory. The idea is simply that a server listed at 100 DTUs should be twice as powerful as one listed at 50, a server at 200 should be twice as powerful as one at 100, and so on. It's just a number Microsoft came up with that lets you compare two pricing plans against each other. So if you choose the DTU pricing model instead of the vCore model, you're basically saying, "I'm going to get this server, and I know it's roughly this powerful. I can then double the power simply by upgrading to the next plan."
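To make the relative nature of DTUs concrete, here is a small sketch. The DTU counts match the Standard-tier sizes mentioned above (10, 20, 50, and 100); the comparison helper itself is purely illustrative.

```python
# DTUs are a relative, blended measure of CPU + memory + I/O. A plan with
# twice the DTUs should be roughly twice as powerful. The tier-to-DTU map
# below reflects the Standard-tier sizes discussed in the text.
tiers = {"S0": 10, "S1": 20, "S2": 50, "S3": 100}

def relative_power(tier_a, tier_b):
    """Expected performance multiple of tier_a over tier_b."""
    return tiers[tier_a] / tiers[tier_b]

print(relative_power("S3", "S2"))  # 2.0: 100 DTUs vs. 50 DTUs
print(relative_power("S1", "S0"))  # 2.0: 20 DTUs vs. 10 DTUs
```

This is all a DTU number is good for: comparing one plan against another, not deriving an absolute hardware specification.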
6. The Concept of RU/s
So let's talk about how Cosmos DB is priced. Cosmos DB is different from a SQL database: it is priced based on two factors. The first is storage; we see it's 25 cents per gigabyte per month. And you'll also be charged based on the provisioned throughput.
Now, provisioned means you basically decide in advance, "I want to reserve this much bandwidth and this much speed for my database transactions." The way that's provisioned is in what's called RU/s, which is request units per second. That is basically a billing metric for Cosmos DB, similar to database transaction units, but these are called request units. Again, you don't have underlying control over the number of CPUs, the amount of RAM, et cetera. But what you do care about is the speed.
And so if you have a Cosmos DB database and an application that relies on it, you want it to be fast. So the question you're asking is: per second, what is the effective number of request units this workload needs? Now, the reason Microsoft has to be so opaque about this is that Cosmos DB supports a number of different data models.
It could be MongoDB or SQL; it could be a graph database or a table database. Each of these models has its own set of requirements: you have an API call that gets the contents of a container, or queries the database, and so on. So different APIs and different underlying data models count things a little bit differently.
So effectively, with Cosmos DB, you are reserving capacity. Say you want to be able to process 100 request units per second. If your application and your database work comfortably within that, then you can just pay for those 100 request units per second. But if your database gets a little slow, and you can monitor it and see that it's regularly hitting that 100 RU/s limit, then you might want to provision more: provision 200, provision 300, et cetera.
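The reserve-then-monitor loop above can be sketched as follows. The hourly rate is a placeholder for illustration, not a real Cosmos DB price, and both function names are invented for the example.

```python
# Sketch of provisioned-throughput budgeting: you reserve RU/s in advance,
# and if monitoring shows you are regularly hitting the reserved limit,
# you step the provisioned throughput up.
RATE_PER_100_RU_PER_HOUR = 0.008  # assumed illustrative rate, not a real price
HOURS_PER_MONTH = 730

def monthly_throughput_cost(provisioned_ru_s):
    """Approximate monthly cost of a given provisioned RU/s level."""
    return provisioned_ru_s / 100 * RATE_PER_100_RU_PER_HOUR * HOURS_PER_MONTH

def next_provision(current_ru_s, observed_peak_ru_s, step=100):
    # If the observed peak reaches the reserved limit, provision more;
    # otherwise keep the current reservation.
    if observed_peak_ru_s >= current_ru_s:
        return current_ru_s + step
    return current_ru_s

print(monthly_throughput_cost(100))  # cost of reserving 100 RU/s for a month
print(next_provision(100, 100))      # 200: workload is hitting the limit
```

Note that the storage charge (per GB per month) would be billed separately on top of this throughput reservation.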
7. Data Retention Strategy
Now, in this data section, we started off talking about how Azure SQL Database is a managed database solution. Some of the things it gives you are automatic geo-redundant backups of your data and the ability to go back in time with point-in-time restores. So you can return to yesterday's database, or go back however many days you want. Now, because this does take some space, you can configure how much of your data you want to keep: a minimum of seven days, configurable up to 35 days. So you can do a point-in-time restore for up to 35 days if you configure it that way. Now, there is also the concept of a long-term retention policy. If you are under some type of governance that requires you to keep your data, and you want to be able to go back to a point in time longer than 35 days ago, that's a separate feature called long-term retention, and you can set it up for up to ten years.
That's quite a large amount of retention. So that gives you the ability to restore to a point in time. Of course, the more data you keep, the more data is being stored, and you're going to be paying for that storage. Another advantage is that if you delete the database and then realize, "hey, we shouldn't have deleted that after all," these automatic geo-redundant backups allow you to restore a deleted database even after you've deleted it. This is also great if you want a copy of a database for testing purposes, or want to get your data into another region for whatever reason: you can use the restore capability to restore the data even into another region.
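A quick sketch of the retention check described above, using the 7-to-35-day point-in-time restore window; the function is illustrative, not an Azure SDK call.

```python
# Sketch: checking whether a requested point-in-time restore falls inside
# the configured retention window. The 7-35 day bounds mirror the
# point-in-time restore configuration described in the text; long-term
# retention (up to ten years) would be a separate policy.
from datetime import datetime, timedelta

def can_pitr_restore(restore_point, now, retention_days=7):
    """True if restore_point is within the configured PITR window."""
    if not 7 <= retention_days <= 35:
        raise ValueError("PITR retention is configurable from 7 to 35 days")
    return now - restore_point <= timedelta(days=retention_days)

now = datetime(2023, 1, 18)
print(can_pitr_restore(datetime(2023, 1, 10), now, retention_days=35))  # True
print(can_pitr_restore(datetime(2022, 11, 1), now, retention_days=35))  # False
```

Anything outside the window would have to come from a long-term retention backup instead, which is configured and billed separately.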
8. Data Availability, Consistency and Durability
Also important to architects, when we're talking about databases and data in general, are the concepts of availability, consistency, and durability. Just to recap, availability basically involves removing all your single points of failure: recognizing which parts of your application may fail, adding redundancy there, and implementing some type of automatic failover. So typically, when it comes to databases, availability involves having replicated copies of your database in other locations.
Both SQL Database and Cosmos DB allow you to easily replicate your data into other regions. Now there is a challenge, right? Say you have data in the eastern United States and a replicated copy in France: there is the issue of latency, lag, and delay. The data may not always be in sync. You write a record to a database, and if a moment later you run a SELECT against the replicated database that has not been synchronized yet, then you've got old data being read from that second database. There's a term for this: consistency, or consistency levels. Cosmos DB allows you to define the level of consistency you expect. And there are trade-offs here, right? If you need strong consistency, then performance is going to suffer.
If you're not as concerned with consistency but are concerned with performance, then there's that possibility of data briefly being out of sync. The idea of durability is that once you've written something to the database, you'll never lose it. You don't want to execute an INSERT statement against a database, have it succeed, and then wonder where the data went: it never got written, the server crashed, whatever. Many of these solutions have a very high level of durability, which is something you should look for; a committed write should never be lost. All of these characteristics (availability, consistency, and durability) are important to you, and you want to plan for them. You want to be able to say, "I care most about availability, so I'm going to add replication, add failover, et cetera." Now, you can pay for some of this, so obviously Microsoft wants you to.
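The strong-versus-eventual trade-off can be illustrated with a toy two-copy store. This is a conceptual sketch, not how Cosmos DB is actually implemented; all class and method names are invented.

```python
# Sketch of the consistency trade-off: with strong consistency, reads go
# to the primary (correct, but replication must be confirmed, so slower);
# with eventual consistency, reads may hit a replica that has not yet
# received the latest write.
class ReplicatedStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}  # lags behind until sync() runs

    def write(self, key, value):
        self.primary[key] = value  # replica is not updated yet

    def sync(self):
        # Simulates replication catching up after some lag.
        self.replica = dict(self.primary)

    def read(self, key, consistency="eventual"):
        source = self.primary if consistency == "strong" else self.replica
        return source.get(key)

store = ReplicatedStore()
store.write("balance", 100)
print(store.read("balance", consistency="strong"))    # 100
print(store.read("balance", consistency="eventual"))  # None: stale replica
store.sync()
print(store.read("balance", consistency="eventual"))  # 100 after sync
```

The stale read in the middle is exactly the scenario described above: a SELECT against a replica that has not yet been synchronized.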
If you've got a SQL database that is business-critical, there is a Business Critical tier of SQL Database, and you get some extra support when it comes to making sure that the database is available to you around the clock. SQL databases are not particularly prone to failure; quite the opposite, in fact. It's just that if you are in a position where the failure of this database is a massive problem for you, then there are ways of getting extra isolation, dedicated networking, et cetera.
So, what about the SQL Database service level agreement? If you use this Business Critical tier and pay the extra cost, Microsoft guarantees that if anything happens to your SQL Database and it fails, the most you'll ever lose is 5 seconds of data. It'll be replicated to other regions of the world within 5 seconds of being written. So if some massive catastrophe were to happen, your recovery point objective (the maximum data loss) is 5 seconds. That's the promise they're making to you. Furthermore, with Business Critical, you can be up and running in the other region within 30 seconds. So from the point where you declare a failure, within 30 seconds you're back up and running. Those are the kinds of promises Microsoft makes for upgrading to the Business Critical tier.
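Those two numbers (a 5-second recovery point objective and a 30-second recovery time objective) can be captured in a small check. The thresholds come from the lecture's description of the Business Critical tier; the function itself is just an illustration, not an Azure API.

```python
# Sketch of evaluating a failover event against the RPO/RTO objectives
# described above for the Business Critical tier. The numbers are the
# targets from the lecture, not a general guarantee for every tier.
RPO_SECONDS = 5    # recovery point objective: maximum data loss
RTO_SECONDS = 30   # recovery time objective: maximum time to recover

def meets_sla(replication_lag_s, failover_duration_s):
    """True if an observed failover stayed within both objectives."""
    return replication_lag_s <= RPO_SECONDS and failover_duration_s <= RTO_SECONDS

print(meets_sla(3, 25))  # True: within both objectives
print(meets_sla(8, 25))  # False: more than 5 seconds of data lost
```

Separating the two objectives matters: a failover can recover quickly (good RTO) and still lose too much unreplicated data (bad RPO), or vice versa.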