Make sure that both files have exactly the same initial size and auto-growth parameters, specified in MB.
This helps SQL Server distribute data evenly between them. The emptying operation reads allocated extents from the end of the file and moves them to the other files in the filegroup. If the filegroup has multiple files, SQL Server uses the proportional fill algorithm to choose the file to which those extents should be moved.
The choice depends on the amount of free space in each file: the more free space a file has, the more data is copied there. When the filegroup originally has more than one file, you want to avoid the overhead of moving data into a file that itself has yet to be emptied. Usually, data files in production databases do not have an excessive amount of free space. When this is the case, you can prevent unnecessary data movement simply by restricting auto-growth of the old files.
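Restricting auto-growth can be sketched as follows; the database and logical file names are hypothetical placeholders:

```sql
-- Sketch only: MyDB and OldData2 are hypothetical names.
-- Disabling auto-growth on the old file prevents SQL Server from
-- moving extents into it while other files are being emptied.
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'OldData2', FILEGROWTH = 0);
```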
However, if those files have a large amount of free space, you can also consider shrinking them to release that space first. There is a catch, though. If the free space is located at the beginning of the data file, the shrink operation would trigger data movement and introduce overhead.
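One way to release space without triggering any data movement is the TRUNCATEONLY option, which only returns unused space from the end of the file; a sketch with a hypothetical file name:

```sql
-- Sketch, name hypothetical. TRUNCATEONLY releases unused space from
-- the tail of the file without moving data, avoiding the overhead
-- described above; free space in the middle of the file remains.
DBCC SHRINKFILE (N'OldData2', TRUNCATEONLY);
```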
You need to decide how to proceed on a case-by-case basis. Now we are ready to process the first data file. The listing below shows the code that performs the data movement and then removes the empty file from the filegroup. Both operations are transparent to users and client applications. It is worth mentioning that you can use the code from the second listing above to monitor the progress of the operation.
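The listing itself did not survive extraction; a minimal sketch of the two operations it describes, with hypothetical names, would be:

```sql
-- Move all allocated extents out of the file into the remaining
-- files of the same filegroup (transparent to users).
DBCC SHRINKFILE (N'OldData1', EMPTYFILE);
GO
-- Once the file is empty, it can be removed from the database.
ALTER DATABASE MyDB
REMOVE FILE OldData1;
```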
If you check the status of the files after the operation completes, you will see the results shown in Figure 3. As you can see, the data from the emptied file has been distributed between the other files in the filegroup. The listing shows the code, and Figure 4 shows the database files after the process is completed.
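The listing is missing from this copy; relocating a secondary filegroup to a new drive can be sketched as follows, with all names and paths hypothetical:

```sql
-- Sketch: create a new file on the new drive, empty the old file
-- into it, then remove the old file from the filegroup.
ALTER DATABASE MyDB
ADD FILE (NAME = N'Secondary_New',
          FILENAME = N'N:\Data\Secondary_New.ndf')
TO FILEGROUP [Secondary];
GO
DBCC SHRINKFILE (N'Secondary_Old', EMPTYFILE);
GO
ALTER DATABASE MyDB
REMOVE FILE Secondary_Old;
```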
As you can see, the secondary filegroup now resides on the new drive. A word of caution: make sure that the transaction log is being truncated, especially if the database uses the FULL recovery model. Unfortunately, you can neither remove nor change the primary data file of the database.
Moreover, you cannot shrink the file below the size of the data currently stored in it, even if the filegroup has other data files. Emptying the primary file would move data to the other files in the filegroup and then fail at the final stage of execution with the error message shown in Figure 5. Nevertheless, the majority of the data from the MDF file would be moved to the other files. The listing below shows the code that performs this action. As you can see, the MDF data file is pretty much empty. Figure 7 illustrates the situation after the operation is completed: the MDF file became very small.
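Since the listing is absent here, a sketch of the step described, with hypothetical names and a hypothetical target size:

```sql
-- EMPTYFILE against the primary file moves most data to the other
-- files, then fails with an error: the primary file cannot be
-- emptied completely. The MDF is left nearly empty.
DBCC SHRINKFILE (N'MyDB', EMPTYFILE);
GO
-- Shrink the now nearly empty primary file to a small size (in MB).
DBCC SHRINKFILE (N'MyDB', 100);
```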
Unfortunately, this approach leaves two or more unevenly sized data files in the PRIMARY filegroup, which makes the proportional fill algorithm less efficient. It may or may not be a problem in your system, depending on how volatile the data is. If it is a problem, rebuilding the indexes will distribute the data evenly across all files in the filegroup.
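An index rebuild rewrites the table's pages and lets proportional fill place them evenly; a sketch with a hypothetical table name:

```sql
-- Sketch: rebuilding all indexes of a table (name hypothetical)
-- redistributes its data across the files of the filegroup.
ALTER INDEX ALL ON dbo.LargeTable REBUILD;
```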
The decision of how to handle the transaction log depends on its size and on the backup and high-availability strategies you have in place. The size of the transaction log affects the time the file copy operation requires and, therefore, the system downtime. Obviously, the simplest solution is to avoid shrinking the transaction log when the log file is not very large and the downtime is acceptable. If you need to reduce the downtime, there is no option but to shrink the log file.
However, with the FULL recovery model the situation is a bit more complicated. As the first step in this process, you need to truncate the log by taking a log backup.
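A sketch of the log backup, with a hypothetical database name and backup path:

```sql
-- Under FULL recovery, a log backup allows the inactive portion of
-- the transaction log to be truncated (marked reusable).
BACKUP LOG MyDB
TO DISK = N'N:\Backups\MyDB_log.trn';
```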
Keep in mind that open transactions, backlogs in high-availability log-record queues, and a few other factors can prevent the transaction log from being truncated.
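You can check what, if anything, is blocking truncation; a sketch with a hypothetical database name:

```sql
-- log_reuse_wait_desc reports the reason the log cannot currently
-- be truncated (e.g. ACTIVE_TRANSACTION, LOG_BACKUP, AVAILABILITY_REPLICA).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';
```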
Your results may vary. The shrink operation releases empty space from the tail of the log; however, the resulting file size depends on the offsets of the active VLFs (virtual log files) in the file. It is entirely possible that the shrink command will not reduce the file size at all if active VLFs are close to the end of the file. Figure 8 illustrates the partial output from our test database. A status value of 2 indicates that a VLF is active and cannot be truncated. As you can see, it is in the middle of the file. As you saw in Figure 7, the log file is using just 61MB out of 1.
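Inspecting the VLF layout and shrinking the log can be sketched as follows; names and the target size are hypothetical:

```sql
-- DBCC LOGINFO lists the VLFs; Status = 2 marks an active VLF
-- that blocks shrinking past its offset.
DBCC LOGINFO (N'MyDB');
GO
-- Shrink the log file toward a target size (in MB); the result
-- depends on where the active VLFs sit in the file.
DBCC SHRINKFILE (N'MyDB_log', 64);
```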
Now, with the transaction log file small, the database outage can start. I shut down all applications using the database and kill all remaining connections. The first step is detaching the database, which makes the database unavailable without deleting its files. At this point the real database outage starts. The second step is moving the log file to its new location at the operating-system level. Just to be on the safe side, it is good to make a backup copy of the files first.
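Killing the remaining connections and detaching can be sketched as follows, with a hypothetical database name:

```sql
-- Force remaining connections off (rolling back their transactions),
-- then detach the database without deleting its files on disk.
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
EXEC sp_detach_db @dbname = N'MyDB';
```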
The third step is attaching the database. I use the recommended method: CREATE DATABASE ... FOR ATTACH. After attaching, the database should be operational, but there are still some actions to perform, such as mapping users to logins or executing DBCC CHECKDB.
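A sketch of the attach and the follow-up checks, with hypothetical file paths:

```sql
-- Attach the database, specifying the log file at its new location.
CREATE DATABASE MyDB
ON (FILENAME = N'D:\Data\MyDB.mdf'),
   (FILENAME = N'L:\Log\MyDB_log.ldf')
FOR ATTACH;
GO
-- Report orphaned database users that need remapping to logins.
EXEC sp_change_users_login @Action = 'Report';
GO
-- Verify the consistency of the attached database.
DBCC CHECKDB (N'MyDB');
```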
Besides that, I want to extend the log file back to its initial size; if you remember, I shrank it before moving. Extending can be done with the ALTER DATABASE ... MODIFY FILE command. And one very important action: always remember to take a full database backup after such an operation.
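These final two actions can be sketched as follows; the logical file name, size, and backup path are hypothetical:

```sql
-- Grow the log file back to its original size in one step,
-- instead of letting auto-growth do it in many small increments.
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_log', SIZE = 2048MB);
GO
-- Take a full backup to restart the backup chain after the move.
BACKUP DATABASE MyDB
TO DISK = N'N:\Backups\MyDB_full.bak';
```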