In December the topic of the month for #Tsql2sday was to talk about our learning goals for 2018. One of the goals I put forth was to get better at SQL development. As part of that goal, I also realized I had to get more familiar with Visual Studio. So to help with that, I now do a lot more of my script writing in Visual Studio, plus I check in my code (another goal I had).

I happened to notice that Kenneth Fisher (b/t) has a homework series going that challenges DBAs and Devs to get better at certain tasks. In May, the task was to create a database. I didn't think much of it at the time (I've created thousands of dbs from SSMS), but it occurred to me this morning that I haven't done it as code from Visual Studio. So here's what I came up with.

  
/*
Project Name: Kenneth Fisher's homework challenge 1
Date: 6/27/18
Author: SQLMac
Purpose: Create a database in a different manner than SSMS.
         In this case, as code that is deployed and run.
*/
USE master;
GO

IF DB_ID(N'Fisher_HW') IS NOT NULL
    DROP DATABASE Fisher_HW;
GO

CREATE DATABASE Fisher_HW
ON PRIMARY
    (NAME = F_HW1_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW1_dat.mdf',
     SIZE = 100MB,
     MAXSIZE = 500MB,
     FILEGROWTH = 50MB),
FILEGROUP Non_Prime_Write DEFAULT
    (NAME = F_HW2_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW2_dat.ndf',  -- .ndf is the convention for secondary files
     SIZE = 100MB,
     MAXSIZE = 5GB,
     FILEGROWTH = 100MB),
    (NAME = F_HW3_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW3_dat.ndf',
     SIZE = 100MB,
     MAXSIZE = 5GB,
     FILEGROWTH = 100MB),
FILEGROUP Non_Prime_Read
    (NAME = F_HW4_dat,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW4_dat.ndf',
     SIZE = 100MB,
     MAXSIZE = 100MB,
     FILEGROWTH = 50MB)
LOG ON
    (NAME = F_HW_log,
     FILENAME = 'D:\$SQL_Instance\Data\F_HW_log.ldf',
     SIZE = 100MB,
     MAXSIZE = 1GB,
     FILEGROWTH = 10MB)
COLLATE SQL_Latin1_General_CP1_CS_AS;
GO

ALTER DATABASE Fisher_HW SET RECOVERY SIMPLE;
GO

-- Scoped configurations apply to the current database, so switch context first
USE Fisher_HW;
GO
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;
GO
USE master;
GO

ALTER DATABASE [Fisher_HW] MODIFY FILEGROUP [Non_Prime_Read] READONLY;
GO
ALTER AUTHORIZATION ON DATABASE::[Fisher_HW] TO [sa];
GO
[Screenshot: log file output from the deployment]
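
As a quick sanity check (my addition here, not part of the homework itself), the files and filegroups can be verified from the new database. This queries the standard sys.database_files and sys.filegroups catalog views:

USE Fisher_HW;
GO
-- List every file, its filegroup, and its size in MB (size is stored in 8 KB pages);
-- the log file has no filegroup, hence the LEFT JOIN
SELECT df.name AS logical_name,
       fg.name AS filegroup_name,
       df.physical_name,
       df.size * 8 / 1024 AS size_mb,
       fg.is_read_only
FROM sys.database_files AS df
LEFT JOIN sys.filegroups AS fg
       ON df.data_space_id = fg.data_space_id;
GO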

A couple of weeks ago I was given the task of helping a developer figure out why his PostgreSQL cluster wouldn't fail over. The specific task was to test failover prior to launching the PostgreSQL cluster into Production. Sounds easy enough, right? Setting things up wasn't too bad, but finding answers quickly was a challenge.

Setting up

The last thing I wanted to do was start potentially destructive testing on a cluster that was going to be used in Production. Instead, I opted to create a couple of PostgreSQL instances that I could set up replication on. Without thinking much about it, I set up instances on my laptop and ran them under Windows. Then I got to thinking about that and realized it wasn't going to reproduce the same results as the existing Linux cluster. So I spun up a couple of Ubuntu VMs.

Once the VMs were up on my laptop, I had to install PostgreSQL, which wasn't that difficult. In fact, it was a pretty simple process: make sure Ubuntu was updated, install PostgreSQL per the documentation, and then configure the pg_hba.conf file to allow me to connect. The challenge came in finding clear guidance on setting up replication for PostgreSQL; once I had that, getting replication configured was pretty simple.
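
For reference, the setup looked roughly like this. These aren't my exact commands; the paths and version assume a stock Ubuntu box, and the replication sketch assumes the pre-v12 recovery.conf style (which matches what comes up later):

# 1. Update Ubuntu, then install PostgreSQL from the distro packages
sudo apt-get update && sudo apt-get -y upgrade
sudo apt-get install -y postgresql

# 2. Let my laptop connect: on Ubuntu, pg_hba.conf lives under /etc/postgresql/<version>/main
#    e.g. append a line like:  host  all  all  192.168.1.0/24  md5
sudo systemctl restart postgresql

# 3. Replication, in outline:
#    primary: wal_level = replica in postgresql.conf, plus a replication entry in pg_hba.conf
#    standby: a recovery.conf in the data directory with roughly
#             standby_mode = 'on'
#             primary_conninfo = 'host=<primary> port=5432 user=replicator'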

Getting the failover to work

The way PostgreSQL handles replication is counterintuitive compared to the way SQL Server does it. Of course, to be fair, I'm sure a PostgreSQL DBA would think the same about SQL Server. The Master-Slave relationship is just that: the Master tells the Slave what to do. If the Master goes down, the Slave is available for reads. If the Slave goes down, the Master keeps going. In this relationship, the Master is the only write node. So when you promote the Slave to Master, the previous Master is orphaned. To rejoin the cluster, the previous Master must be reconfigured as a Slave. Good times there. No failing over in a back-and-forth fashion.
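
The promotion itself is a one-liner on the standby; it's the orphaned old Master that takes the work. A sketch, where the PostgreSQL 10 version number and data directory path are my assumptions:

# Promote the standby to be the new primary
sudo -u postgres pg_ctl promote -D /var/lib/postgresql/10/main
# (on Debian/Ubuntu the packaged wrapper is: sudo pg_ctlcluster 10 main promote)

# The old primary can't just rejoin; re-seed it from the new primary as a standby.
# With -R, pg_basebackup writes the recovery.conf for you (pre-v12 behavior):
sudo -u postgres pg_basebackup -h <new-primary> -U replicator -D /var/lib/postgresql/10/main -P -R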

My issue was that the promotion step kept failing. I kept getting this jacked-up error:

pg_ctl: server did not promote in time

You have to love Linux for its vagueness. Turns out, it was a pretty easy fix. When I was following the setup guide, I had copied in a recovery.conf file. Only, I hadn't changed the owner of the file. Had I taken more than two seconds to remind myself to check the log file, I would have discovered the problem much sooner. As it turns out, I made a post about this on DBA Stack Exchange and was reminded to check the log file. In my defense, my Linux is a little rusty, but that doesn't excuse the fact that I know to always check the log.
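
On a stock Ubuntu install, the fix, and the log check that would have found it sooner, look something like this (again, the paths and version are assumptions):

# The recovery.conf I copied in was owned by my user, not the postgres account
sudo chown postgres:postgres /var/lib/postgresql/10/main/recovery.conf

# The log that would have pointed straight at the problem:
sudo tail -n 50 /var/log/postgresql/postgresql-10-main.log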

Turns out, changing the owner of the recovery.conf file made the promotion work flawlessly. I should know better. Always check the log file.