This week I had to restore an entire instance to a point in time. The instance had approximately 25 databases on it, and all of them had to be replaced to get the information we needed. As this is a SQL Availability Group, we back up the logs every 15 minutes. We also use Ola Hallengren's backup scripts; if you're not familiar with those, the directory structure is a <Database Name> folder with separate FULL, DIFF, and LOG subfolders underneath it. So the choices were to go through the SSMS GUI or spend half a day working out the logic to script out the restores.

However, I came up with a different idea. I have been working with the dbatools PowerShell module and was able to use it to script out a full restore with minimal lines of code, way faster than half a day of working out the logic. What I came up with was, at least for me at the time, pretty ingenious and damn sure easier than restoring via the GUI.

## Insert list of databases with format
$dbList = 'db1',
'db2',
'DB3',
'db4'

foreach ($dbName in $dbList)
{
    ## for the path, insert UNC path. for time, use 24 hr clock
    $path = '<insert UNC path here>\' + $dbName
    $rtime = '20:30 03/17/2020'

    ## dbatools module
    Restore-DbaDatabase -SqlInstance '<server name here>' -Path $path -DatabaseName $dbName -MaintenanceSolutionBackup -RestoreTime $rtime
}
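For context, the point-in-time restore the module performs for each database boils down to T-SQL along these lines (plus a DIFF restore if one exists). This is a hand-written sketch with made-up file names and times, not the module's actual output:

-- Restore the most recent full backup, leaving the database able to accept more restores
RESTORE DATABASE db1
FROM DISK = N'\\backupshare\db1\FULL\db1_FULL.bak'
WITH NORECOVERY, REPLACE;

-- Apply each log backup in sequence, stopping at the requested point in time
RESTORE LOG db1
FROM DISK = N'\\backupshare\db1\LOG\db1_LOG_1.trn'
WITH NORECOVERY;

RESTORE LOG db1
FROM DISK = N'\\backupshare\db1\LOG\db1_LOG_2.trn'
WITH STOPAT = N'2020-03-17T20:30:00', RECOVERY;

Multiply that by 25 databases and a day's worth of 15-minute log backups, and the appeal of having dbatools chain it all together is obvious.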

In December, the topic of the month for #tsql2sday was to talk about our learning goals for 2018. One of the goals I put forth was to get better at SQL development. As part of that goal, I also realized I have to get more familiar with Visual Studio. So to help with that, I do a lot more of my script writing in Visual Studio, plus I check in my code (another goal I had).

I happened to notice that Kenneth Fisher (b/t) has a homework series going that challenges DBAs and devs to get better at certain tasks. In May, the task was to create a database. I didn't think much of it at the time, since I've created thousands of databases from SSMS, but it occurred to me this morning that I haven't done it as code from Visual Studio. So here's what I came up with.

  
/*
Project Name: Kenneth Fisher's homework challenge 1
Date: 6/27/18
Author: SQLMac
Purpose: Create a database in a different manner than SSMS.
         In this case, as code that is deployed and run.
*/
USE master;
GO

IF DB_ID(N'Fisher_HW') IS NOT NULL
    DROP DATABASE Fisher_HW;
GO

CREATE DATABASE Fisher_HW
ON PRIMARY
    (NAME = F_HW1_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW1_dat.mdf',
     SIZE = 100MB, MAXSIZE = 500MB, FILEGROWTH = 50MB),
FILEGROUP Non_Prime_Write DEFAULT
    (NAME = F_HW2_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW2_dat.ndf',  -- secondary data files get the .ndf extension
     SIZE = 100MB, MAXSIZE = 5GB, FILEGROWTH = 100MB),
    (NAME = F_HW3_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW3_dat.ndf',
     SIZE = 100MB, MAXSIZE = 5GB, FILEGROWTH = 100MB),
FILEGROUP Non_Prime_Read
    (NAME = F_HW4_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW4_dat.ndf',
     SIZE = 100MB, MAXSIZE = 100MB, FILEGROWTH = 50MB)
LOG ON
    (NAME = F_HW_log,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW_log.ldf',
     SIZE = 100MB, MAXSIZE = 1GB, FILEGROWTH = 10MB)
COLLATE SQL_Latin1_General_CP1_CS_AS;
GO

ALTER DATABASE Fisher_HW SET RECOVERY SIMPLE;
GO

-- Scoped configuration applies to the current database, so switch context first
USE Fisher_HW;
GO
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;
GO

ALTER DATABASE [Fisher_HW] MODIFY FILEGROUP [Non_Prime_Read] READONLY;
GO

ALTER AUTHORIZATION ON DATABASE::[Fisher_HW] TO [sa];
GO
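To sanity-check the result, a quick query like the following (my own addition, not part of the homework) confirms the filegroups, their files, and the read-only flag:

USE Fisher_HW;
GO
-- List each filegroup with its files and read-only status
SELECT fg.name       AS filegroup_name,
       fg.is_read_only,
       df.name       AS logical_file_name,
       df.physical_name,
       df.size / 128 AS size_mb  -- size is stored in 8 KB pages
FROM sys.filegroups AS fg
JOIN sys.database_files AS df
    ON df.data_space_id = fg.data_space_id;
GO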

I've been meaning to work on getting my mind wrapped around source control for a while now. I was all hyped about it after a recent SQL Saturday, and then I wanted to make it a learning goal for the year. Looking back at those posts, I guess the goal was to do more "DevOps" type stuff, which includes source control, so there ya go.

Past Failures

This wasn't the first time I tried to work with source control. When I was working on my undergrad long ago, I wanted to get more experience as a developer and tried to work on an open source project with a friend. Unfortunately, I was remote to the project, and getting my mind wrapped around how to implement source control frustrated me to the point of giving up.

Then I spent some time trying to figure out how to set up a local repository that I could use. Here again, it became frustrating, as SSMS doesn't have a built-in method for utilizing source control, and using Visual Studio is a complex, confusing endeavor for the everyday DBA. In the end, I just stuck all of my scripts into an organized folder in My Documents and called it good.

This Time

I have taken the approach of checking all my scripts in to a code repository, then pushing that code to BitBucket. Why BitBucket? Because it's what the organization I work for uses, so I'm trying to kill two birds with one stone: learn how to effectively use source control while I also figure out how to navigate BitBucket. I'm still on the fence about using the rest of the tools from Atlassian. To push the code I have been using GitKraken, which works great on both Windows and Mac.

Another avenue I have been utilizing to get used to source control is Visual Studio. Again, the two-birds mentality applies here: I get familiar with both source control and writing code in Visual Studio. Is that important for a DBA? I don't know; there are some advantages to using Visual Studio for development and leaving SSMS for the administration work. Of course, at this point I am still too much of a novice to notice much difference.

Just have to keep plugging away at it.

Much more to learn

There's still much more stuff I need to figure out, like how to branch, merge, correct mistakes, and recover from those oops moments. Such as when I accidentally deleted my repo while setting this up. Thankfully, I managed to recover most of my scripts. It keeps things interesting, right?

Sometimes I just want to kick myself over a very simple concept that I never thought about applying to work. I was bouncing around the internet trying to get a better understanding of how to use source control with my script library when I found a rabbit hole.

The rabbit hole in question was of the "How to write better code" / "How to be a better developer" variety, and in it I came across an article about keeping a journal for programming. In fact, it was this article.

Dammit. Why hadn't I thought of that sooner? I suppose it follows the advice about posting articles and resources on your blog so you can search for them later. It just hadn't occurred to me to keep a journal for ideas that weren't ideal for blog posts for various reasons.

I have recently been working on tuning a job that purges data older than 13 months out of various tables, using GETDATE() as the reference point. Pretty straightforward, right? Well, originally the job consisted of six different statements like so:

DELETE FROM table1
WHERE column_with_date <= DATEADD(mm, -13, GETDATE());

Except the tables have become so large that the generic deletes were filling up tempdb. No bueno. So I decided to rewrite the job using six separate stored procedures that I created. While redundant, this let us modify the behavior of each table's delete independently. More importantly, we could batch out the deletes, preventing tempdb from filling.

So I went about creating a generic template to run a delete in batches. I think I based the code off a script I found on DBA Stack Exchange.

SET NOCOUNT ON;

DECLARE @rowcount INT = 1;

WHILE @rowcount > 0
BEGIN
    BEGIN TRANSACTION;

    -- table(x) and column are placeholders for each table's delete
    DELETE TOP (500000)
    FROM database.dbo.[table(x)]
    WHERE [column] <= DATEADD(mm, -13, GETDATE())
    OPTION (MAXDOP 1);

    -- the loop ends once a batch deletes zero rows
    SET @rowcount = @@ROWCOUNT;

    COMMIT TRANSACTION;
END
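For illustration, here is a rough sketch of how one of the six procedures might wrap that template, with the batch size exposed as a parameter. The procedure name is hypothetical; the table and column names come from the first delete statement above:

CREATE OR ALTER PROCEDURE dbo.Purge_Table1
    @BatchSize INT = 500000
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @rowcount INT = 1;

    WHILE @rowcount > 0
    BEGIN
        BEGIN TRANSACTION;

        -- delete one batch of rows older than 13 months
        DELETE TOP (@BatchSize)
        FROM dbo.table1
        WHERE column_with_date <= DATEADD(mm, -13, GETDATE())
        OPTION (MAXDOP 1);

        SET @rowcount = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END
END;
GO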

One of the stored procs began using parallelism and causing high CPU usage, so I added the MAXDOP hint. All of the tables have a timestamp column, so GETDATE() works great as a filter. Except not all of the timestamp columns were populated the same way.

All of the tables except one stored just the date, with the time portion zeroed out to midnight.

The one remaining table stored the full date and time.
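To make the difference concrete, the stored values looked something like this (illustrative values, not actual data from the tables):

-- Most tables: time zeroed out to midnight
-- 2019-02-17 00:00:00.000

-- The odd table out: full date and time
-- 2019-02-17 14:32:07.413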

The Fix

The difference is that the one table uses date and time, not just the date. How is this important? Well, when that job step executed against this table, it got stuck in a kind of loop. The code was written to grab 500k records based on the date and TIME 13 months ago. With every iteration, GETDATE() moved forward ever so slightly, so only a sliver of rows crossed the cutoff each time and the job would run forever (well, until midnight anyway).

Disclaimer: it took me about a week to finally realize what was happening. To correct the behavior, I changed the WHERE clause so the query would grab all of the records for the day instead of continuing on the slow cycle it was on.

-- CONVERT with style 110 strips the time portion, so every row for the day qualifies
SELECT *
FROM database.dbo.[table(x)]
WHERE CONVERT(VARCHAR(10), column_timestamp, 110) <= DATEADD(mm, -13, GETDATE());
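One afterthought of my own, not part of the original fix: wrapping the column in CONVERT prevents SQL Server from seeking on an index over that column. The same day-level cutoff can be kept sargable by stripping the time from the GETDATE() side instead, something like this sketch (same placeholder names as above):

-- Cast the cutoff to DATE (midnight), leaving the column untouched and index-friendly
SELECT *
FROM database.dbo.[table(x)]
WHERE column_timestamp < DATEADD(dd, 1, CONVERT(DATE, DATEADD(mm, -13, GETDATE())));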

I feel pretty dumb for not catching this sooner, although I doubt it will ever take me this long to catch it again.