Thursday, July 11, 2019

Database Log Shipping Interview Questions and Answers (Part-4)

What are common reasons that break or stop Log Shipping?
  • Recent modifications to the shared folders.
  • Human error, such as someone taking a log backup with the TRUNCATE_ONLY option or switching the recovery model.
  • Mismatched date/time between the Windows servers, for example after a DST change.
  • A data file added on the primary on a drive that does not exist on the secondary; the restore job will fail until you restore that log backup with the MOVE option.
  • Any I/O, memory, or network bottleneck.
  • The TUF file is missing.
  • An incorrect value set for the Out of Sync Alert threshold.
  • The backup, copy, and restore jobs are scheduled at the same time.
  • The msdb database is full.
Why am I getting the error message below in the restore job on the secondary server?
” [Microsoft SQL-DMO (ODBC SQLState: 42000)]
Error 4305: [Microsoft][ODBC SQL Server Driver][SQL Server]The log in this backup set
begins at LSN 7000000026200001, which is too late to apply to the database. An earlier
log backup that includes LSN 6000000015100001 can be restored.
[Microsoft][ODBC SQL Server Driver][SQL Server]RESTORE LOG is terminating abnormally “.
Was SQL Server or SQL Server Agent restarted yesterday on either the source or the destination? The error indicates an LSN mismatch: a particular transaction log was not applied on the destination server, so the subsequent transaction logs cannot be applied.
You can check the log shipping monitor or the log shipping tables in msdb to find which transaction log was last applied to the secondary database.
If you cannot find the next transaction log in the secondary server's shared folder, you need to reconfigure log shipping.
What are your basic steps to reconfigure the Log Shipping?
  • Disable all the log shipping jobs in source and destination servers.
  • Take a full backup on the source and restore it on the secondary server using the WITH STANDBY option.
  • Enable all the jobs you disabled in step 1.
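The backup and restore in the steps above can be sketched in T-SQL as follows; the database name, share path, and file locations are hypothetical placeholders:

```sql
-- On the primary: take a full backup (names and paths are placeholders)
BACKUP DATABASE SalesDB
TO DISK = N'\\FileShare\LSBackup\SalesDB_full.bak'
WITH INIT, STATS = 10;

-- On the secondary: restore WITH STANDBY so the database stays readable
RESTORE DATABASE SalesDB
FROM DISK = N'\\FileShare\LSBackup\SalesDB_full.bak'
WITH STANDBY = N'D:\LSStandby\SalesDB_undo.tuf',
     MOVE N'SalesDB' TO N'D:\Data\SalesDB.mdf',
     MOVE N'SalesDB_log' TO N'E:\Log\SalesDB_log.ldf',
     STATS = 10;
```

Use WITH NORECOVERY instead of WITH STANDBY if the secondary does not need to be readable between restores.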
Is load balancing possible in log shipping?
Yes, to an extent. While configuring log shipping you can choose between the STANDBY and NORECOVERY restore modes; selecting the STANDBY option makes the secondary database read-only, so it can serve reporting and read-only workloads.
Can I take a full backup of the log shipped database on the secondary server?
No. The secondary database is in a restoring or standby state, so you cannot take a full backup of it.
What are the benefits of Log Shipping?
  • Log shipping doesn’t require expensive hardware or software. While it is great if your standby server is similar in capacity to your production server, it is not a requirement. In addition, you can use the standby server for other tasks, helping to justify the cost of the standby server.
  • Once log shipping has been implemented, it is relatively easy to maintain.
  • Assuming you have implemented log shipping correctly, it is very reliable.
  • The manual failover process is generally very short, typically 15 minutes or less.
  • Depending on how you have designed your log shipping process, very little, if any, data is lost should you have to failover. The amount of data loss, if any, is also dependent on why your production server failed.
  • Implementing log shipping is not technically difficult. Almost any DBA with several months or more of SQL Server 7 experience can successfully implement it.
Once the primary server comes back online, is it a difficult process to switch back to the primary database server?
  • Traditionally, depending on the size and number of databases involved in log shipping, it may or may not be difficult.
  • Database size and your network bandwidth matter a lot. Normally, you just have to set up log shipping again: point all your applications back to the primary and, in the background, take a full backup of the primary, copy it to the secondary server, and restore it with NORECOVERY, followed by the subsequent log backups.
  • To minimize the downtime, "reverse log shipping" (log shipping from the secondary back to the original primary) can prove to be a huge help.
Will changing the recovery model to Full (from Simple) cause any issues?
Yes. Switching to the Simple recovery model breaks the log chain, and after switching back to Full you must take a full (or differential) backup before log backups, and therefore log shipping, can resume.
Will my existing backups (using Symantec Backup Exec) be affected by enabling log shipping and switching to a full recovery model?
Once you set up log shipping, the log shipping backup job is already taking log backups, so any additional ad hoc log backups will break the log chain. If your backup tool must also take log backups, use COPY_ONLY backups.
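A copy-only log backup leaves the log chain intact, so it will not break the log shipping restore sequence. A minimal sketch, with a hypothetical database name and path:

```sql
-- COPY_ONLY: does not truncate the log or affect the log backup chain
BACKUP LOG SalesDB
TO DISK = N'D:\AdhocBackup\SalesDB_log_copyonly.trn'
WITH COPY_ONLY;
```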
There are two servers: one primary and one secondary. The primary is the production server.
  • Log shipping is configured and working fine. Suddenly, after some days, the servers are no longer in sync.
  • What is the immediate action?
  • The first thing I'd do is check the backup job, then the copy job that copies the backups from the primary to the secondary, and then the restore job that restores the logs on the secondary. If there are no errors there, I'd start looking at the log shipping tables in msdb and see if I could determine the error from there.
Do we require any endpoints for Log Shipping?
We don't need to create endpoints or assign ports for log shipping, unlike database mirroring, which requires them.
What are the log shipping frequent issues you have faced?
It depends. One issue I have seen is that a transaction log cannot be restored on the secondary database because the transaction log sequence was broken.
You might find that the last backed up/copied/restored files do not reflect correctly in the log shipping reports when you use a remote monitor server. What is this issue?
The last copied and restored file will show up as null if the monitor instance is not on the same box as the secondary instance. The last backed up file will show up as null if the monitor instance is not on the same box as the primary instance if the select @@servername value is not used as the monitor server name while configuring the log shipping monitor.
What do you know about the below error?
Error: During startup of warm standby database ‘testdb’ (database ID 7), its standby file (‘<UNC path of the TUF file>’) was inaccessible to the RESTORE statement. The operating system error was ‘5(Access is denied.)’.
If you have configured log shipping with STANDBY mode on SQL Server 2008, and the destination folder for the transaction logs is on a remote server where the SQL Server service/Agent account is not a local administrator, the restore job will fail every time with this access-denied error.
The sp_resolve_logins stored procedure runs successfully; however, it does not perform the expected modifications to the security on the secondary server. Why?
The sp_resolve_logins stored procedure requires an up-to-date BCP file of the primary server's syslogins system table. These logins must already be created on the secondary server.
Log Shipping Backup and Out of Sync alerts are firing, even when the secondary server is updated with the transaction log backups. Is this possible?
Yes. It is possible that the alerts might fire even when the secondary database is being updated. If the alert threshold is set to a value less than double the time between back up and copy or restore jobs, the alerts might be raised.
What are the main differences between Log Shipping and Database Mirroring?
Log Shipping:
  • It automatically sends transaction log backups from one database (Known as the primary database) to a database (Known as the Secondary database) on another server. An optional third server, known as the monitor server, records the history and status of backup and restore operations. The monitor server can raise alerts if these operations fail to occur as scheduled.
  • It has manual failover.
  • You can use the secondary database for reporting purposes when it is restored in STANDBY mode.
  • The servers involved in log shipping should have the same logical design and collation setting.
  • The databases in a log shipping configuration must use the full recovery model or bulk-logged recovery model.
  • The SQL server agent should be configured to start up automatically.
  • You must have sysadmin privileges on each computer running SQL server to configure log shipping.
Mirroring:


  • Database mirroring is a primarily software solution for increasing database availability.
  • It has automatic failover (when a witness server is configured).
  • The mirror database can only be accessed through a database snapshot.
  • It maintains two copies of a single database that must reside on different server instances of SQL Server Database Engine.
  • Verify that there are no differences in system collation settings between the principal and mirror servers.
  • Verify that the local windows groups and SQL Server logins definitions are the same on both servers.
  • Verify that external software components are installed on both the principal and the mirror servers.
  • Verify that the SQL Server software version is the same on both servers.
  • Verify that global assemblies are deployed on both the principal and mirror server.
  • Verify that for the certificates and keys used to access external resources, authentication and encryption match on the principal and mirror server.

Database Log Shipping Interview Questions and Answers (Part-3)

Does the secondary instance need to be licensed?
I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server.
When log shipping is set up, Agent jobs are created to alert me if a backup, copy, or restore fails. How do I get notified?
You need to go into the Agent job, pull up Notifications, and choose your method – email an operator, or write to the event log, for example.
Are my logins shipped from the primary to the secondary?
No, they are not. You’ll need to set up a separate method to sync the logins.
What is the difference between the secondary being in Restoring vs. Standby?
Restoring means the database is not accessible. Standby means it is read-only. You make this decision when you set up the log shipping.
If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide if a restore job will disconnect users, or if the restore is delayed until after the users are disconnected.
Can we configure log shipping if the SQL server service account is running under local system account?
Yes. If the SQL Server service account is running under the local system account on the primary server, you must create the backup folder on the primary server and specify the local path to that folder here. The SQL Server service account of the primary server instance must have Read and Write permissions on this folder.
What happens to log shipping if you add a data file on the primary server?
If the primary and secondary servers have the same disk configuration, you can ignore it and the secondary will pick it up automatically. However, if you changed anything on the primary side, for example created a new folder or added the file on a different drive, then you must restore the next transaction log backup (i.e., the one taken after adding the data file) with the MOVE option.
Are index operations logged in log shipping?
Yes. In the full recovery model they are fully logged operations, so they will be applied on the secondary as well.
Can I shrink the log file of the primary database?
Yes, you can shrink the log file. The operation is logged, so it will be reflected on the secondary once the corresponding log backup is copied and restored.
What do you know about the TUF file?
TUF stands for Transaction Undo File (T -> Transaction, U -> Undo, F -> File).
It contains the modifications that were not yet committed on the primary database when the transaction log backup was taken and the log was restored in STANDBY mode on the secondary. When the next transaction log backup is restored on the secondary, SQL Server uses the data in the undo file to resume and complete those incomplete transactions.
My TUF file was unfortunately deleted while the server was shut down or during server downtime. What happens to log shipping?
Log shipping will stop working. If you have an OS-level backup, you can restore the file and carry on; otherwise you will have to reconfigure log shipping.
Can we configure Log Shipping between the different domains?
Yes, we can configure Log Shipping on the server residing in different domains.
After some disk issues on the production server, log shipping continually failed. So I took a full backup of the primary database, restored the secondary database from it, and started log shipping again. The log shipping backup of the primary database then failed with the error message below:
*** Error: BACKUP detected corruption in the database log. Check the errorlog for more information.
BACKUP LOG is terminating abnormally.
To recover:
  • Stop all user activity in the primary database.
  • Switch to the SIMPLE recovery model (breaking the log backup chain and removing the requirement that the damaged portion of log must be backed up).
  • Switch to the FULL recovery model.
  • Take a full database backup (thus starting a new log backup chain).
  • Start the log shipping jobs again.
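The recovery-model cycle and the new full backup in the steps above look like this in T-SQL; the database name and backup path are hypothetical, and this should only be run in a maintenance window with user activity stopped:

```sql
-- Break and discard the damaged log chain
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Return to full recovery so log backups are possible again
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Start a new log backup chain with a fresh full backup
BACKUP DATABASE SalesDB
TO DISK = N'\\FileShare\LSBackup\SalesDB_full.bak'
WITH INIT, STATS = 10;
```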
Does the file name first_file_000000000000.trn indicate that the copy or restore job was unsuccessful?
Each run of the copy and restore job is associated with at least one file. By default, if no files are copied or restored in a certain run of any of these two jobs, SQL Server places first_file_000000000000.trn in the file name field. This may or may not indicate a problem.
Does the sp_resolve_logins stored procedure work for remote logins in SQL Server?
No: The sp_resolve_logins stored procedure only works for typical logins. Any remote logins must be created manually on the secondary server.
Why is the log shipping check box sometimes dimmed in the Maintenance Plan dialog box?
  • Multiple databases might be selected for the Maintenance Plan.
  • The database that is selected is not in the Full or Bulk Logged Recovery model.
  • SQL Server Enterprise Edition is not installed on the server.
Can I configure primary and secondary servers to use SQL authentication to connect to the monitor server?
Yes. It is possible to use either Windows or SQL authentication for primary and secondary servers to connect to the monitor server.
Why did the backup job fail on the primary server with the error messages below?
*** Error: Backup failed for Server ‘<Server Name>’. (Microsoft.SqlServer.SmoExtended) ***
*** Error: An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) ***
 *** Error: BACKUP LOG cannot be performed because there is no current database backup.
This message is thrown because the database has never had a full backup. Take a full backup of the primary database to solve this issue.
What does sp_resolve_logins do?


At the time of the log shipping role change, the sp_resolve_logins stored procedure requires a BCP file of the syslogins system table from the primary server. This stored procedure loads the BCP file into the temporary table and loops through each login to verify if a login with the same name exists in the secondary server’s syslogins system table.
Execution of this stored procedure is required only if there are new logins created on the primary server after log shipping has been initialized and those same logins are not created on the secondary servers with the same SID.

Transparent Data Encryption (TDE)

Transparent Data Encryption (TDE) encrypts the data within the physical files of the database, the 'data at rest'. Without the original encryption certificate and master key, the data cannot be read when the drive is accessed or the physical media is stolen. The data in unencrypted data files can be read by restoring the files to another server. TDE requires planning but can be implemented without changing the database. Robert Sheldon explains how to implement TDE.
With the release of SQL Server 2008, Microsoft expanded the database engine’s security capabilities by adding Transparent Data Encryption (TDE), a built-in feature for encrypting data at rest. TDE protects the physical media that hold the data associated with a user database, including the data and log files and any backups or snapshots. Encrypting data at rest can help prevent those with malicious intent from being able to read the data should they manage to access the files.
SQL Server TDE takes an all-or-nothing approach to protecting data. When enabled, TDE encrypts all data in the database, as well as some outside the database. You cannot pick-and-choose like you can with column-level encryption. Even so, TDE is relatively easy to enable, once you’ve decided this is the path you want to travel.
In this article, we look at how to implement TDE on a user database. The article is the second in a series about SQL Server encryption. The first one (Encrypting SQL Server: Using an Encryption Hierarchy to Protect Column Data) covers column-level encryption. If you’re new to SQL Server encryption, you might want to review that article first.

The TDE encryption hierarchy

When I introduced you to column-level encryption, I discussed the encryption hierarchy and how SQL Server uses a series of keys and certificates to protect column data. The approach used for implementing TDE is similar, but different enough to take a closer look.
As with column-level encryption, the Windows Data Protection API (DPAPI) sits at the top of the hierarchy and is used to encrypt the service master key (SMK), a symmetric key that resides in the master database. SQL Server creates the SMK the first time the instance is started. You can use the key to encrypt credentials, linked server passwords, and the database master keys (DMKs) residing in different databases.
In the TDE encryption hierarchy, the SMK sits below the DPAPI, and a DMK sits below the SMK. The DMK is a symmetric key, just like you find with column-level encryption. However, with column-level encryption, you create the DMK in the user database where the column data will be encrypted. With TDE, you create the DMK in the master database, even though you’ll be encrypting a user database. SQL Server uses the SMK and a user-supplied password to encrypt the DMK with the 256-bit AES algorithm.
Before we go any further with our description, take a look at the following figure, which shows the entire TDE encryption hierarchy, starting with the Windows DPAPI at the top and the SQL Server data at the bottom. As you can see, the next level down our hierarchy is a certificate, which you also create in the master database.
[Figure: The TDE encryption hierarchy]
The DMK protects the certificate, and the certificate protects the database encryption key (DEK) in the user database. The DEK is specific to TDE and is used to encrypt the data in the user database in which the key resides.
You can skip the DMK and certificate altogether and instead use an Extensible Key Management (EKM) module to secure the DEK. SQL Server 2008 introduced the EKM infrastructure as a way for encryption keys to be stored in hardware outside of the database, essentially integrating the hardware into the encryption stack. That said, the topic of EKM is outside of the scope of this article, but one we might tackle later in this series.
For now, we’ll focus on the TDE encryption hierarchy as it is represented in the figure. From this, we can deduce that to implement TDE on a user database, we must take the following steps:
  1. Create the DMK in the master database, if it doesn’t already exist.
  2. Create a certificate in the master database for securing the DEK.
  3. Create the DEK in the user database to be encrypted.
  4. Enable TDE on the user database.
What is not included in the figure or in the steps is the importance of backing up the SMK, DMK, and certificate. If anything goes wrong in our production environment or we need to restore or move an encrypted database, we might need those keys or certificate, so we better make sure we have copies of them, stored securely in a separate location.
Later in the article, we’ll review how to back them up, but first let’s look at how to implement TDE on a user database. For that, we’ll need to set up a test database such as the one shown in the following T-SQL script:
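The original listing is missing from this copy of the article. The following is a reconstruction based on the description that follows and the query results shown later (the column names EmpID, NatID, and LoginID come from those results, so treat the exact schema as an assumption):

```sql
USE master;
GO
CREATE DATABASE EmpData2;
GO
USE EmpData2;
GO
CREATE TABLE dbo.EmpInfo
(
  EmpID   INT IDENTITY(1,1) PRIMARY KEY,
  NatID   NVARCHAR(15)  NOT NULL,
  LoginID NVARCHAR(256) NOT NULL
);
GO
-- Populate the table from the AdventureWorks2014 sample database
INSERT INTO dbo.EmpInfo (NatID, LoginID)
SELECT NationalIDNumber, LoginID
FROM AdventureWorks2014.HumanResources.Employee;
GO
```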
The database and table created here are only slightly different from the ones we created in the first article in this series. The database includes the EmpInfo table and uses an INSERT statement to retrieve data from the HumanResources.Employee table in the AdventureWorks2014 database. However, I’ve named the new database EmpData2 in case you want to retain the database from the other article. (Note that I created all the examples on a local instance of SQL Server 2016.)
You don’t need to use this database to try out the examples in this article. If you have a different one you want to use (and it’s safe to experiment with it), just substitute the name for the database accordingly. You’ll want to keep the database small, however, so you don’t get bogged down during the initial encryption process.

Create the DMK

To create the DMK that will support a TDE-enabled database, you take the same steps you take when creating the DMK to support column-level encryption, except for one important difference. You must create the key in the master database, as shown in the following T-SQL code:
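The statement itself is missing from this copy; per the surrounding description it would be a CREATE MASTER KEY in the master database, with a placeholder password:

```sql
USE master;
GO
-- Use a far stronger password in any real environment
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = 'tempPW@56789';
GO
```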
The CREATE MASTER KEY statement supports no optional arguments; we need only specify a password, in addition to the basic syntax. (Of course, we would want to use a more robust password in the real world.)
To verify that the DMK has been created, we can query the sys.symmetric_keys catalog view:
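The verification query is missing here; a reconstruction that matches the column aliases in the results table below (the aliases are assumptions inferred from that table):

```sql
SELECT
  name             AS KeyName,
  symmetric_key_id AS KeyID,
  key_length       AS KeyLength,
  algorithm_desc   AS KeyAlgorithm
FROM master.sys.symmetric_keys;
```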
The SELECT statement returns the results shown in the following table.
KeyName                    KeyID   KeyLength   KeyAlgorithm
##MS_DatabaseMasterKey##   101     256         AES_256
##MS_ServiceMasterKey##    102     256         AES_256
Notice that the results include both the DMK and SMK. As already noted, SQL Server creates the SMK in the master database automatically. As you can see, the two keys are based on the 256-bit AES encryption algorithm.

Create the certificate

The next step is to create a certificate in the master database using a CREATE CERTIFICATE statement. In SQL Server, a certificate is a digitally signed, database-level securable that binds the public and private keys.
To keep things simple, we’ll create a self-signed certificate, which is automatically protected by the DMK. Normally, a certificate authority (CA) would issue and sign the certificate, which we would then incorporate into our encryption infrastructure, but a self-signed certificate can be handy for developing and testing, as well as checking out functionality like we’re doing here.
To create a self-signed certificate, we need only provide a name for the certificate and a WITH SUBJECT clause, as shown in the following statement:
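The statement is missing from this copy; based on the certificate name and subject shown in the results below, it would be:

```sql
USE master;
GO
-- Self-signed certificate, automatically protected by the DMK
CREATE CERTIFICATE TdeCert
WITH SUBJECT = 'TDE certificate';
GO
```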
The WITH SUBJECT clause supposedly specifies the issuer name; however, it can be just about any value, although a relevant description is normally the best option. In this case, I’ve gone with TDE certificate.
Note that, in addition to self-signed certificates, the CREATE CERTIFICATE statement lets us define a certificate based on a certificate file as well as retrieve the private key from a file or use a password to encrypt the certificate.
After we run the CREATE CERTIFICATE statement, we can verify that the certificate has been created by querying the sys.certificates catalog view:
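The verification query is missing; a reconstruction matching the column aliases in the results table below (aliases are assumptions inferred from that table):

```sql
SELECT
  name                         AS CertName,
  certificate_id               AS CertID,
  pvt_key_encryption_type_desc AS EncryptType,
  issuer_name                  AS Issuer
FROM master.sys.certificates
WHERE name = 'TdeCert';
```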
On my system, the SELECT statement returned the results shown in the following table.
CertName   CertID   EncryptType               Issuer
TdeCert    258      ENCRYPTED_BY_MASTER_KEY   TDE certificate
As you can see, the value in the EncryptType column is ENCRYPTED_BY_MASTER_KEY, which confirms that SQL Server has used the DMK to encrypt the certificate.

Create the DEK

Now we switch over to our EmpData2 database to create the DEK, the next level down our hierarchy. When we create the DEK, we must specify the algorithm to use for the encryption key and the certificate to use to encrypt the DEK. Starting with SQL Server 2016, all algorithms have been deprecated except 128-bit AES, 192-bit AES, and 256-bit AES. (The higher the number of bits, the stronger the algorithm.)
To create the DEK, we can use a CREATE DATABASE ENCRYPTION KEY statement, as shown in the following example:
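The example is missing from this copy; per the surrounding description (256-bit AES, protected by the TdeCert certificate), it would be:

```sql
USE EmpData2;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert;
GO
```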
In this case, we’ve specified the 256-bit AES algorithm and the TdeCert certificate we created in the previous step. When you run the statement, you should receive the following warning.
This is an important message and one you should heed. We’ll discuss backing up your keys and certificates later in the article, but know that it is something you should be doing whenever you’re using them as part of your encryption process.
Now let’s return to our DEK. Once we’ve created the key, we can verify its existence by querying the sys.dm_database_encryption_keys dynamic management view:
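The query is missing here; a reconstruction matching the column aliases in the results table below (aliases are assumptions inferred from that table):

```sql
SELECT
  DB_NAME(database_id) AS DbName,
  encryption_state     AS EncryptState,
  key_algorithm        AS KeyAlgorithm,
  key_length           AS KeyLength,
  encryptor_type       AS EncryptType
FROM sys.dm_database_encryption_keys;
```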
The sys.dm_database_encryption_keys view returns details about a database’s encryption state and its associated DEKs. The following table shows the results returned by our SELECT statement.
DbName     EncryptState   KeyAlgorithm   KeyLength   EncryptType
EmpData2   1              AES            256         CERTIFICATE
Notice that the EncryptType column has a value of CERTIFICATE, which confirms that a certificate was used to encrypt the DEK.
Also notice that the EncryptState column shows a value of 1. This indicates that the database is in an unencrypted state. According to SQL Server documentation, the column can display any one of the values described in the following table.
Value   Description
0       No database encryption key present, no encryption
1       Unencrypted
2       Encryption in progress
3       Encrypted
4       Key change in progress
5       Decryption in progress
6       The certificate or asymmetric key encrypting the DEK is being changed

Enable TDE on the user database

We now have all the pieces in place to enable TDE on the EmpData2 database. The only step left is to turn encryption on.
Before we do that, I want to point out that there are many considerations to take into account before actually enabling TDE. For example, if any filegroups associated with the database are set as read-only, the encryption operation will fail. You’ll also run up against a number of restrictions when trying to implement TDE, such as not being able to drop a database during the initial encryption process.
Before you implement encryption on anything other than a test database in a test environment, I highly recommend that you review the MSDN article Transparent Data Encryption (TDE), which explains the various considerations and restrictions to take into account before implementing TDE.
With that in mind, let’s return to the matter at hand, which is to enable TDE on the EmpData2 database. To do so, we need only run a simple ALTER DATABASE statement that sets encryption on, as shown in the following example:
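The statement is missing from this copy; per the description it is the standard ALTER DATABASE form:

```sql
ALTER DATABASE EmpData2
SET ENCRYPTION ON;
```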
That’s all there is to it. Because our database is so small, the encryption process will be very quick. Not surprisingly, the larger the database, the longer this process will take.
If we again query the sys.dm_database_encryption_keys view, we’ll get the results shown in the following table, which verify that the EncryptState value is now 3.
DbName     EncryptState   KeyAlgorithm   KeyLength   EncryptType
tempdb     3              AES            256         ASYMMETRIC KEY
EmpData2   3              AES            256         CERTIFICATE
The results also show something else that’s very important to note—the addition of a row for the tempdb database. When you implement TDE on any user database, SQL Server also encrypts the tempdb database.
If you consider the logic behind this, you can see why Microsoft has taken this step. The tempdb database contains such items as temporary user objects, internal objects, and row versions, any of which can expose sensitive data. The downside is that even unencrypted databases that use tempdb can take a performance hit, although Microsoft claims that the impact is minimal.
This issue aside, as long as our certificate and keys are in place, we can query the TDE-encrypted database just like we did before we enabled TDE. For example, we can run the following SELECT statement against the EmpInfo table:
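The statement is missing from this copy; given the five rows and three columns shown in the results below, it was likely something like:

```sql
SELECT TOP 5 EmpID, NatID, LoginID
FROM EmpData2.dbo.EmpInfo
ORDER BY EmpID;
```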
Notice that we do not have to take any special steps with our query like we do with column-level encryption. We simply run the query as before, a fact that should make application developers happy. The following table shows the results returned by our SELECT statement.
EmpID   NatID       LoginID
1       295847284   adventure-works\ken0
2       245797967   adventure-works\terri0
3       509647174   adventure-works\roberto0
4       112457891   adventure-works\rob0
5       695256908   adventure-works\gail0
As you can see, we’re getting exactly the results we would expect. From the user/application perspective, it’s business as usual.

Disable TDE on the user database

At some point, you might decide that you want to disable encryption on a user database. The process is as simple as enabling it. You again run an ALTER DATABASE statement, only this time turning off the encryption, as shown in the following example:
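The statement is missing from this copy; it is the OFF counterpart of the one used to enable encryption:

```sql
ALTER DATABASE EmpData2
SET ENCRYPTION OFF;
```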
You can verify that encryption has been disabled by again querying the sys.dm_database_encryption_keys dynamic management view, which now returns the results shown in the following table:
DbName     EncryptState   KeyAlgorithm   KeyLength   EncryptType
tempdb     3              AES            256         ASYMMETRIC KEY
EmpData2   1              AES            256         CERTIFICATE
As you can see, the EncryptState value for the EmpData2 database is now 1, indicating that it is in an unencrypted state. But notice that the tempdb database is still encrypted. As it turns out, the database will stay encrypted until it is re-created, which occurs whenever the SQL Server service restarts.
For a development or test environment, restarting the service might not be a big deal, but restarting a production instance is an entirely different matter. One more reason to give careful consideration to implementing TDE.
In the meantime, if you do have control over your instance of SQL Server, you can restart the service to see for yourself what happens with the tempdb database. From there, you can again query the sys.dm_database_encryption_keys view, which should return the results shown in the following table.
DbName     EncryptState   KeyAlgorithm   KeyLength   EncryptType
EmpData2   1              AES            256         CERTIFICATE
As you can see, the tempdb database is no longer included in the results because the database has not been encrypted or subjected to TDE.
If you disable TDE on your database, you’re also free to drop the DMK, certificate, and DEK, using the DROP MASTER KEY, DROP CERTIFICATE, and DROP DATABASE ENCRYPTION KEY statements, respectively. Or you can re-enable TDE on the user database at any point. Just keep in mind the impact on the tempdb database.

Back up the certificate and keys

As already noted, you should back up your certificates and keys, preferably right after you create them. This is also true for the SMK, before you start relying on it to protect your DMKs.
To back up the SMK, you can use a BACKUP SERVICE MASTER KEY statement, as shown in the following example:
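The example is missing from this copy; per the description that follows, the statement takes a full file path and a password (both placeholders here):

```sql
-- The full path, file name, and extension are all required
BACKUP SERVICE MASTER KEY
TO FILE = 'C:\Backups\Keys\smk.key'
ENCRYPTION BY PASSWORD = 'tempPW@56789';
```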
The statement itself is fairly straightforward. You must provide the full path for the backup file and a password for encrypting the key in that file. One thing to note, however, is that not all the examples in the Microsoft documentation clearly demonstrate that you must provide a full path, including the file name and its extension. Without these, you will receive an error.
It’s also worth noting that the BACKUP SERVICE MASTER KEY statement includes no logic for what to do when the file already exists. If it does exist, you’ll again receive an error.
Backing up the DMK works much the same way, except that you use a BACKUP MASTER KEY statement:
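The statement is missing from this copy; since this DMK lives in the master database, the sketch runs there (path and password are placeholders):

```sql
USE master;
GO
BACKUP MASTER KEY
TO FILE = 'C:\Backups\Keys\dmk.key'
ENCRYPTION BY PASSWORD = 'tempPW@56789';
GO
```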
Again, you must provide the full path and file name, along with a password. In addition, the file cannot already exist.
Backing up a certificate is a little different because you want to be sure to explicitly back up the private key along with the certificate. For that, you must use a BACKUP CERTIFICATE statement that includes the WITH PRIVATE KEY clause, as shown in the following example:
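The example is missing from this copy; per the description, the statement writes the certificate and its private key to separate files (paths and password are placeholders):

```sql
USE master;
GO
BACKUP CERTIFICATE TdeCert
TO FILE = 'C:\Backups\Keys\TdeCert.cer'
WITH PRIVATE KEY
(
  FILE = 'C:\Backups\Keys\TdeCert.pvk',
  ENCRYPTION BY PASSWORD = 'tempPW@56789'
);
GO
```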
In this case, we’re generating a file for both the certificate and the private key, as well as providing a password for the private key.
That’s all there is to backing up the certificate and keys. Of course, you should store your backup files in a remote location separate from the database files, ensuring that they’re protected from any sort of mischief or recklessness.
