Today, I’m not thankful that it’s Friday, although I am happy about it. Today I am thankful for the wisdom of Grant Fritchey (b/t). No, Grant didn’t help me with any kind of problem, at least not directly. Hell, I have never even met the man in person anyway. No, I’m not some kind of weirdo interwebs creeper either. I do, however, follow Grant’s blog and occasional Twitter posts. What can I say, he’s a pretty sharp crayon when it comes to SQL, and as a DBA, I like and listen to sharp crayons. Although today I’m not thankful for Grant’s technical wisdom. Today I am thankful for his wisdom related to making the relationships between the DBAs, devs, IT, and business folks better.

As you know, COVID-19 has upended, well… everything. This means that at my company, priorities have been upended as well. With the need for fast responses to technical and business problems, IT hasn’t had time to polish things and make every experience seamless. This week, I forgot about that and started letting my ego and my own self-importance get the better of me. Focused on the problems related to the databases and SQL instances I work with, I forgot about the bigger IT picture. During a SCRUM call, I was a little too honest about my frustration with connecting to instances in the cloud, how unintuitive access currently is, and the inability to use our toolsets. It was pointed out to me later that my frustration was a little too apparent and could have been seen as hostile toward the IT folks on the call.

Own It

So today, I used some wisdom from both Jocko and Grant: I chose to own it. I decided it was better to apologize for my ego, whether it was misconstrued or not, and make sure to foster a better relationship between IT and the Data Team. So instead of dismissing it as just a misunderstanding in the moment, I opted to take 20 minutes to send an apology accepting all of the blame. I’m not posting this as a humble brag about my moment of empathy either; I am posting this both to remind myself publicly that I will be better and to reinforce the lesson. The bottom line: it is better to remain humble and approachable than to fracture getting work done over some unmet, implied expectation that neither side had agreed to.

I will be better.

 

Figuring out Windows Firewall

I ran into an issue last week when I was working on upgrading a SQL 2014 Availability Group to SQL 2019. Nothing went wrong with the actual SQL upgrade; however, post-upgrade I ran into some headaches. It turns out Windows Firewall can bite you in the ass on an upgrade. The reason for the upgrade in the first place was so we could take advantage of automatic database seeding.

Automatic seeding is to be the linchpin of a PowerShell script I’m working on and will post about later. The idea is to check for new databases added during the day, and then programmatically add them to the secondary later in the day. But that post is for another day.
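In the meantime, here is a rough sketch of the idea using dbatools; this is not the finished script, and the instance and AG names are placeholders:

## Rough sketch: find user databases on the primary that aren't in the AG yet,
## then add them with automatic seeding. All names are placeholders.
$primary = 'SqlPrimary'
$agName  = 'MyAG'

$inAg   = (Get-DbaAgDatabase -SqlInstance $primary -AvailabilityGroup $agName).Name
$newDbs = Get-DbaDatabase -SqlInstance $primary -ExcludeSystem |
    Where-Object { $_.Name -notin $inAg }

foreach ($db in $newDbs)
{
    Add-DbaAgDatabase -SqlInstance $primary -AvailabilityGroup $agName -Database $db.Name -SeedingMode Automatic
}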

After the upgrade, I performed a failover test and verified everything was working okay. After failing back over to the original primary, I added a database to the availability group via the wizard and chose automatic seeding, and while that was running I went off to do something else. When I got back, I saw the wizard had completed successfully and the database had been added. Just to make sure, though, I checked the secondary and found no database. Say what?

After spending a day looking for failover cluster errors, testing failovers, adding databases to both nodes of the AG to test database creation, and assorted other tasks, I was still no closer to figuring out why the secondaries wouldn’t sync up or why automatic seeding wasn’t working. Keep in mind, all of this worked just fine when the AG was still running as a SQL 2014 environment. I mentioned this to one of our IT folks because I was at a loss.

The fix

One of our IT admins found the problem, although it did take them a couple of hours. It turns out port 5022 was blocked on the secondary replica. But why was it blocked when it worked before? I’ll tell you why. When the server was originally built and everything was loaded (and this very well could have been me), a rule was created in Windows Firewall for inbound connections. Only, instead of creating the rule to use the port number, the rule was created to allow access to the SQL Server executable. Do you see what happened?

When I upgraded SQL to 2019, the 2019 executable took over running SQL. The path changed from <disk>:\<install location>\MSSQL12.MSSQLSERVER\blah blah blah to <disk>:\<install location>\MSSQL15.MSSQLSERVER\blah blah blah, so the executable-based firewall rule no longer matched anything that was actually listening.

The fix was simple: we changed the firewall rule to allow connections on the AG port. I mean, I guess we could have repointed the rule at the new executable, but then if we ever upgraded that server again… I promise we would forget about this issue and have to repeat the troubleshooting work to figure it out again.
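If you’d rather script the rule than click through the firewall GUI, a minimal sketch using the built-in NetSecurity cmdlet looks like this; the rule name is made up, and 5022 is the default AG endpoint port:

## Allow the AG endpoint through Windows Firewall by port, not by executable,
## so future in-place upgrades don't silently break it
New-NetFirewallRule -DisplayName 'SQL AG Endpoint (TCP 5022)' `
    -Direction Inbound -Protocol TCP -LocalPort 5022 -Action Allow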

Be humble, folks; every so often we manage to shoot ourselves in the foot.

As a DBA, working from home (WFH) is not a new concept or a new experience; I do it a lot. However, this shift to working from home due to the office being closed for COVID-19 is a little different. First off, I’m enjoying getting my commute time back and saving money on fuel. The downside is, since moving in 2017, I haven’t put a lot of effort into getting my work area properly set up, so that’s a bit distracting.

Other things I have learned working from home

  • I need to keep my schedule as similar to the office as possible
  • I need to structure my day as I normally would – times for learning and deep dives, etc.
  • I need to remember to get up and move
  • I need to remember to take breaks

I’m naturally an introvert and don’t mind being without a lot of people, so I imagine that’s been a little easier on me than on most. Although there are times when I need to communicate or just banter with people: I can use Teams for that at work and Twitter for the SQL community, not that I’m on Twitter a lot.

With everyone else working from home, I have noticed a few things that I do differently that have made my life easier. I leave my work laptop at the office, mostly because it’s a 10 lb brick and I prefer carrying my MacBook instead. So instead of VPN’ing into the office, I have a LogMeIn account, which spared me all the issues with the VPN being maxed out. I also prefer LogMeIn because if my connection to the laptop gets dropped while I’m running a script, the script doesn’t get ghosted on the SQL side.

Please don’t

Some things I have seen people do over VPN that they probably shouldn’t, along with some general advice:

  • Don’t do virtual meetings over the VPN; you’re chewing up bandwidth. Instead, drop the VPN connection to have the meeting.
  • Have a backup plan in case the VPN has an outage.
  • Please, please, for the sake of those you work with, pay attention to the ambient noises. We know your kids and pets are cute, but listening to them in the background makes it harder to hear.
  • Use mute. Really… if you’re not talking, put yourself on mute.
  • Don’t drink or chew on calls unless you are muted.
  • If you have a webcam set up, be careful what you broadcast, and don’t forget it’s there.

It’s going to be a while

Just because you are not in the office, that is no reason to act unprofessionally or forget to take care of yourself. Working from home can provide a golden opportunity to get your diet under control and get some training in, as well as spend time with your family and doggos. You can feel good about reducing your carbon footprint (if that’s important to you), you can use the savings from commuting and eating out to fund a vacation when practical, or you can just enjoy the slowdown and quiet in your daily life. We’re going to be at this a while, so we may as well get good at it.

This week I had to restore an entire instance to a point in time. The instance had approximately 25 databases on it, and all of them had to be replaced to get the information we needed. As this is a SQL Availability Group, we back up the logs every 15 minutes. We also use Ola’s backup scripts; if you’re not familiar with those, the directory structure is a folder per database, with FULL, DIFF, and LOG subfolders inside each database folder (illustrated below). So the choices were to go through the SSMS GUI or spend half a day working out the logic to script out the restores.
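The layout looks something like this, with a made-up share name:

\\backupshare\SQLBackups\db1\FULL
\\backupshare\SQLBackups\db1\DIFF
\\backupshare\SQLBackups\db1\LOG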

However, I came up with a different idea. I have been working with the dbatools PowerShell module and was able to use it to script out a full restore with minimal lines of code, way faster than half a day of working out the logic. What I came up with, well for me at the time, was pretty ingenious and damn sure easier than restoring via the GUI.

## Insert the list of databases in this format
$dbList = 'db1',
'db2',
'DB3',
'db4'

foreach ($dbName in $dbList)
{
    ## For the path, insert the UNC path; for the time, use a 24-hour clock
    $path = '<insert UNC path here>\' + $dbName
    $rtime = '20:30 03/17/2020'

    ## dbatools module; -MaintenanceSolutionBackup tells it the share uses Ola's folder layout
    Restore-DbaDatabase -SqlInstance <server name here> -Path $path -DatabaseName $dbName -MaintenanceSolutionBackup -RestoreTime $rtime
}

I don’t know why I am slow to take up new tools, but I am. I recently, like back in December, began working with DBATools. In case you’re not familiar, DBATools is a PowerShell module assembled by the community over at DBATools.io. This module, oh this wonderful module, makes some of the most aggravating parts of a DBA’s job, things like server migrations, copying users, adding Ola’s jobs, and a slew of other things, so much simpler.

What sold me? Well, in December I had to migrate 70-some small databases off to a new SQL AG. I knew I was going to have to script out the backups, so I decided this was as good a time as any to learn PowerShell and solve a problem. That thought process got me thinking about DBATools, and I decided to see if the module could help me out. Oh, and help me out it did. This module is a tool EVERY DBA should be using.

DBATools works

I went about setting up the list of databases as a PowerShell list, created a specific UNC file location for the backups to land on, and ran the following code:

Backup-DbaDatabase -SqlInstance $server -Database $dblist -Path $path -Type Full -CopyOnly -CompressBackup

Where $server was the name of the server I was backing up from, $dblist was the list of databases I needed, and $path was the UNC location. The -CopyOnly switch made sure I didn’t affect the backup chain on the server I was migrating from, although this was less of a concern post-migration.
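For context, the setup looked something like this; every name here is a placeholder, not the real one:

## Hypothetical values for the variables used above
$server = 'OldSqlServer'
$dblist = 'db1', 'db2', 'db3'
$path   = '\\fileserver\sql-migration'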

The really cool part about this, or at least I thought it was cool, was the restore command.

Restore-DbaDatabase -SqlInstance $newServer -Path $path

The restore command let me point dbatools at the backup path, and it restored each database in the directory to the default file locations. Pure genius.

Other areas of the toolset I used were copying users, migrating settings, migrating jobs, and copying linked servers, to name a few. Oh, this tool saved me so much work and time. I’m not sure why I did not embrace it earlier.
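If you go looking for those, these are most likely the cmdlets you want; a hedged sketch, reusing the same placeholder variables as above:

## Likely dbatools cmdlets for the migration tasks mentioned above
Copy-DbaLogin        -Source $server -Destination $newServer    ## copy users/logins
Copy-DbaSpConfigure  -Source $server -Destination $newServer    ## migrate sp_configure settings
Copy-DbaAgentJob     -Source $server -Destination $newServer    ## migrate Agent jobs
Copy-DbaLinkedServer -Source $server -Destination $newServer    ## copy linked servers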

Kids, it’s 2020. That means it is way past time to stop kicking the can down the road. Most of us in IT have been around long enough to see us shift from a centralized CPU (mainframes) to decentralized (client/server) architectures and back to a centralized approach again (cloud). That means we have encountered problems and we have ignored problems, and those ignored problems have a name. It’s called Technical Debt, and the check is coming due.

Technical Debt

We all have it, and it tends to bite us right in the ass. It causes us to make compromises and other less-than-stellar decisions about how to handle it. What we need to do is start addressing the debt and figure out ways to minimize its impact. Stop covering it up with more hardware, crazy monitoring schemes, and automated fixes that just treat the symptoms and ignore the problems.

Just start working on it already. You’ve known about this debt for years or decades, and it’s not getting worked on. Stop ignoring it and start spending some time refactoring. Your customers will thank you, your developers will thank you, and your staff will thank you.

“The Only Thing That Is Constant Is Change”

– Heraclitus

A long way back, a Greek philosopher said something about change being the only constant. The guy was right, but he would have made a horrible IT project manager. It’s a true statement, but at the same time it forecasts an ever-changing landscape that is hard to pin down and stay on top of.

The rate of change is always about the same, as fast as possible, but the direction has shifted back on itself. Twenty years ago, when I was starting out in IT, the focus was on moving off the centralized computing platform with terminal interfaces in favor of multiple servers and client access. Now we’re shifting back to a centralized platform. Of course, back then it was mainframes and terminals, versus today being the cloud and devices (phones, tablets, laptops, etc.). I suppose with the promise of high mobile bandwidth, that makes sense.

Containerization

I think in the next 10 years, probably more like 5, containers are going to push out virtual machines. There is less to manage, they are easier to spin up, and you can use orchestrators like Kubernetes to manage the systems. Why troubleshoot a failed system if you can just quickly replace it? I also think it’ll be an easy way to horizontally scale SQL Availability Groups (this is probably a pipe dream) across containers. Toss a load balancer in the mix… and suddenly you’re not as constrained on reads. The only problem left to solve is licensing. Of course, if it’s a write-heavy application, then you may still need the power of a traditional node.

Infrastructure as Code (IaC)

IaC is the next move forward in the infrastructure space. Why? Because why should we be sitting around clicking Next when we can define a server or application with declarative code and let it take care of the build? Oh yeah, and add in source control, and the config file is maintained, documented, and versioned. I don’t know why anyone would fight this, except those who feel threatened. Plus, a positive: when Azure has a sale and you have to move your entire organization over from AWS, or vice versa, it’ll be a lot less work the second time.
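To make “declarative” concrete, here is a minimal PowerShell Desired State Configuration sketch; the node name, feature, and output path are all made up:

## Describe the end state; the DSC engine figures out how to get there
Configuration WebServerState
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'Server01'                ## placeholder node name
    {
        WindowsFeature IIS         ## declare that IIS should be present
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

## Compile the configuration to a MOF file, then push it to the node
WebServerState -OutputPath 'C:\DSC'
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose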

The downside is, I think, that those coming up in the ranks now and in the future are going to lose some valuable troubleshooting skills. Although, even as I say that, I can’t remember the last time I’ve thought about whether a COM port issue was related to an IRQ or a memory address. It does make for good trivia when you ask the younger crowd what the differences between the COM ports are.

Source Control

I honestly don’t know why it has taken so long for Source Control to catch on outside of development. I mean, I get it, there’s a bit of a learning curve, and branching isn’t something I have completely figured out yet. But dammit, it makes working on the same piece of code across different computers so much easier. With my team at work finally embracing it, I’m looking forward to seeing how it helps us.

DevOps

I think the ’20s are going to be the decade of DevOps. And by that, I mean that on 12/31/29, DevOps will have been the default for years. Spinning up servers by hand will be a rare exception, more like a novelty. And all of the previous things I listed are components of DevOps anyway. See, it’s already happening.

Change is Change

Change can be either good or bad, but I think it depends on how you see the change. Regardless of how you view it, it’s going to happen. Perhaps instead of fighting it, embrace it… use the new decade as a launch platform to retool your skillset, embrace new technology, and geek out and have some fun learning.