Thursday, 30 July 2015

Continuous Integration

It seems like a magic silver bullet: Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies into a shared mainline several times a day. It means that everything should stay in line, that no-one steps on anyone's toes, that all should be shiny and lovely and

Oh. Hang on.

Didn't we all learn in Comp Sci 101 that there's no such thing as a silver bullet? I'm sure that turned up on my course 14 years ago - and Continuous Integration has been around since well before then (1991, if we believe Wikipedia). Mind you, my Comp Sci course also involved the networks tutor getting very excited about 10BASE5 thick coax, when possibly he should have been getting nostalgic about it.

So. Anyway. Continuous integration is supposed to make it much easier for everyone to work together. And, in order to work together, you need good tools. You need to be using the same tools, you need the tools to work together without causing conflicts, and, crucially, you all need to be using the tools properly!

Enter Redgate. Hurrah!  I do rather have a DBA crush on Redgate. They've made my life so much simpler in so many ways (except for discontinuing Hyperbac. I did rather love Hyperbac, and I still have it running on the one SQL 2000 box that we're trying to decommission. It works. Period.).

Back in March, I went on the Continuous Integration Workshop hosted by Redgate. Primarily, I wanted to understand how CI works - we've got some source control in place where I work, but it very often feels that developers are stepping on each other's toes, that we're not always testing as robustly as we might, that our databases are completely out of step between environments, and, as DBA, I'm running scripts and copying databases from Live to Test to Dev more than I might otherwise like.

By the end of the day, I could see how it was supposed to work in the world of databases. From a standing start (I've not been involved with our developers' source control system at all, and I can't exactly say I'm intimate with TFS), we looked at how to use SQL Source Control to link a database to a source control system - it works with pretty much any system - and DLM Dashboard (their super, free product, which I had a lot of fun testing last year) to keep track of changes, to be informed as databases change, and to automate the process. I took copious notes.

And, the automation of the process would be fantastic. Because this morning, I ran a script a whole week early, and had to roll it back again fairly pronto. On the plus side, I'd taken a backup before I'd got going, so I could restore that on the Dev box and then use SQL Data Compare, which made it an absolute doddle to establish which rows in the Live database had been updated, and to roll them back, based on a comparison between the two versions of the database.

(Did I say I love Redgate?)

Disclaimer: Redgate are offering me a free place at a Redgate Workshop in return for posting.

Wednesday, 29 July 2015

SSRS Parameters

I keep searching for the same default values for my date parameters.
Enough! Time for a list!

Monday of last week
=DateAdd(DateInterval.Day, 2-WeekDay(Today), DateAdd(DateInterval.Day, -7, Today))
Friday of last week 
=DateAdd(DateInterval.Day, 6-WeekDay(Today), DateAdd(DateInterval.Day, -7, Today))
First day of last month
First day of this month
First day of next month
Last day of last month
Last day of this month
Last day of next month
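The month boundaries are still on the to-do list. A first sketch - I haven't dropped these into a live report yet, so treat them as assumptions rather than gospel - all hangs off DateSerial, which pins the first day of the current month:
First day of last month: =DateAdd(DateInterval.Month, -1, DateSerial(Year(Today), Month(Today), 1))
First day of this month: =DateSerial(Year(Today), Month(Today), 1)
First day of next month: =DateAdd(DateInterval.Month, 1, DateSerial(Year(Today), Month(Today), 1))
Last day of last month: =DateAdd(DateInterval.Day, -1, DateSerial(Year(Today), Month(Today), 1))
Last day of this month: =DateAdd(DateInterval.Day, -1, DateAdd(DateInterval.Month, 1, DateSerial(Year(Today), Month(Today), 1)))
Last day of next month: =DateAdd(DateInterval.Day, -1, DateAdd(DateInterval.Month, 2, DateSerial(Year(Today), Month(Today), 1)))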

More to come... 

Wednesday, 30 July 2014

SQL Always On - things to get set up before you get going with the Wizard....

This article is key. At this point, I'm assuming that all the nodes involved are already in the Windows Failover Cluster.

  • Make sure that all nodes in the cluster are patched with this. Otherwise, you cannot set quorum votes on the nodes. Do this before the server is a live server. Regardless of what the article says, you need to restart each node. This means failing over. This means downtime. This is a pain in the butt if you have seven instances of SQL on a three-node cluster.

  • Set the Node Votes. Easy. Use PowerShell - specifically, the Windows PowerShell Modules found in Administrative Tools in the Start menu. If it's not there, make sure it's installed. This doesn't result in downtime.
    • Get-ClusterNode | fl NodeName, NodeWeight
      Should be '1' for each node, because that's the default.
      This needs to be changed to '0' for DRSQLALWAYSON by running
      (Get-ClusterNode "DRSQLALWAYSON").NodeWeight = 0
  • Make sure that Always On (HADR) is enabled for each instance of SQL involved. The easiest way is via SQL Server Configuration Manager. You will have to restart SQL. Again, downtime. (There's a quick T-SQL check for this sketched after this list.)

  • Make sure that the DR node is not a node onto which the primary instance of SQL could fail over. This doesn't result in downtime.
    • Get-ClusterOwnerNode -Resource "SQL Server (InstanceName)"
      Set-ClusterOwnerNode -Resource "SQL Server (InstanceName)" -Owners Node1,Node2

      It is worth running Get-ClusterGroup in order to establish the names of the SQL Server instances. For the default instance on the cluster, I found that the command was
    • Get-ClusterOwnerNode -Resource "SQL Server"

  • Work out where your listener will be. If you've got primary and secondary instances on different subnets, this will be a whole bunch of fun. You need to use T-SQL rather than the GUI:

USE [master]
GO
ALTER AVAILABILITY GROUP MyAvailabilityGroup
ADD LISTENER N'MyListener' (
    WITH IP (
        (N'local.ip.add.dress', N'local.subnet.mask'),
        (N'dr.ip.add.dress', N'dr.subnet.mask')
    ),
    PORT = 60173
);
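
As a quick sanity check on the "Always On enabled" step above, this bit of T-SQL (a sketch - run it on each instance involved) tells you whether the feature is actually switched on:

-- 1 = Always On Availability Groups is enabled on this instance
-- 0 = supported but not enabled (flip it in SQL Server Configuration Manager and restart the service)
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;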

Monday, 14 April 2014

By far my most used script...

The one for an ad hoc backup of a database where I'm using Ola Hallengren's excellent solution, and I've ensured that I'm using the default backup location.


DECLARE @dbname SYSNAME

SET @dbname = 'dbname'

DECLARE @servername VARCHAR(50)

SET @servername = @@SERVERNAME
SET @servername = Replace(@servername, '\', '$')
SET @servername = LEFT(@servername, 30)

/*Specify backup location*/
DECLARE @filefolder VARCHAR(255)

--SET @filefolder = 'W:\SQL_Backups_FULL\'
/* Or, to use default location, read it from the instance's registry settings */
EXEC master.dbo.xp_instance_regread
  N'HKEY_LOCAL_MACHINE',
  N'Software\Microsoft\MSSQLServer\MSSQLServer',
  N'BackupDirectory',
  @filefolder OUTPUT;

DECLARE @filelocation VARCHAR(512)

SET @filelocation = @filefolder + '\' + @servername + '\' + @dbname
                    + '\FULL\' + @dbname
                    + Replace(CONVERT(VARCHAR(20), Getdate(), 101), '/', '')
                    + Replace(CONVERT(VARCHAR(5), Getdate(), 108), ':', '')
                    + '.bak'

DECLARE @backupname VARCHAR(128)

SET @backupname = @dbname + '-Full Database Backup'

BACKUP DATABASE @dbname TO DISK = @filelocation
  WITH NAME = @backupname
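
For comparison, much the same ad hoc full backup to the default location can also be had by calling Ola's DatabaseBackup procedure directly - a sketch, assuming the procedure is installed in master (adjust if yours lives in a DBA utility database):

EXECUTE master.dbo.DatabaseBackup
    @Databases = 'dbname',   -- same database as above
    @Directory = NULL,       -- NULL = the instance's default backup directory
    @BackupType = 'FULL',
    @CheckSum = 'Y',
    @Verify = 'Y'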


Thursday, 13 March 2014

SQL Integration Services Access

Once upon a time, a long time ago, in SQL 2008, all users had access to connect to Integration Services in Management Studio. Naturally, SQL 2012 does it differently, and you have to explicitly grant access via Component Services.
  1. Run Dcomcnfg.exe. This provides a UI for editing the relevant DCOM settings in the registry.
  2. In the Component Services dialog, expand
    Component Services --> Computers --> My Computer --> DCOM Config .
  3. Right-click Microsoft SQL Server Integration Services 11.0, and then click Properties.
  4. On the Security tab, click Edit in the "Launch and Activation Permissions" area.

  5. Add users and assign appropriate permissions, and then click OK. If you don't want your users remoting onto the server itself, choose 'Remote Launch' and 'Remote Activation' so that they can connect from their own machines.

  6. Repeat steps 4 - 5 for the "Access Permissions" area. Again, grant remote access if you don't allow users to log directly onto the server.

  7. Restart the Integration Services service in SQL Server Configuration Manager. Restart Management Studio if you had it open and were trying to connect from it.

Friday, 21 February 2014

SQL Agent Jobs in SQL2012

In the last week or so, we've been tightening up the security on our SQL boxes, and removing sysadmin access from as many logins as we can. This has included various of the service accounts.

On pretty much every single SQL 2012 box, this has had an unintended consequence: the SQL Agent jobs have suddenly stopped running. Moreover, if you start a SQL Agent job manually, it looks as though it is running - but hit Close and you find it has actually done nothing at all. Not that you'd know, until you look in the SQL Agent error log.

A selection of errors:

Date        21/02/2014 08:14:38
Log        SQL Server Agent (Current - 21/02/2014 09:45:00)

[298] SQLServer Error: 229, The SELECT permission was denied on the object 'sysjobschedules', database 'msdb', schema 'dbo'. [SQLSTATE 42000] (SaveAllSchedules)

Date        21/02/2014 09:45:31
Log        SQL Server Agent (Current - 21/02/2014 09:45:00)

[298] SQLServer Error: 229, The SELECT permission was denied on the object 'sysjobservers', database 'msdb', schema 'dbo'. [SQLSTATE 42000] (ConnExecuteCachableOp)

Date        21/02/2014 09:45:31
Log        SQL Server Agent (Current - 21/02/2014 09:45:00)

[298] SQLServer Error: 229, The UPDATE permission was denied on the object 'sysjobservers', database 'msdb', schema 'dbo'. [SQLSTATE 42000] (ConnExecuteCachableOp)

Date        21/02/2014 09:45:31
Log        SQL Server Agent (Current - 21/02/2014 09:45:00)

[298] SQLServer Error: 229, The EXECUTE permission was denied on the object 'sp_sqlagent_log_jobhistory', database 'msdb', schema 'dbo'. [SQLSTATE 42000] (ConnExecuteCachableOp)

Note: just giving the SQL Agent account db_owner permissions on msdb didn't work. I just got different errors:
Date        20/02/2014 09:17:24
Log        SQL Server Agent (Current - 20/02/2014 09:17:00)

[298] SQLServer Error: 14262, The specified @job_id ('CF59EAF1-FAEC-4E5B-BE53-D07FE2CD9F60') does not exist. [SQLSTATE 42000]

Date        20/02/2014 09:17:24
Log        SQL Server Agent (Current - 20/02/2014 09:17:00)

[000] Unable to retrieve steps for job SQL Backup full backup: master and 5 others (created 05/07/2013 13:06:40)

(Yes, we use Red Gate SQL Backup here. And jolly good it is too).

Giving it the SQL Agent Operator role (SQLAgentOperatorRole in msdb), however, did work... and was rather less sledgehammery in its approach than granting sysadmin rights. If you're using Red Gate SQL Backup, you will also need to ensure the account can execute the relevant stored procedures in master. If you're using Ola Hallengren's backup solution (we use that too), then the only answer is, in fact, to grant sysadmin access - otherwise the SQL Agent account cannot create the relevant folders. If you have regular CHECKDB jobs running, you also need to make sure it can execute DBCC CHECKDB on all databases.
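
For reference, the msdb role grant looks something like this (the account name here is made up - substitute the login your SQL Agent service actually runs under):

USE [msdb]
GO
-- Grant the SQL Agent service account's login membership of the SQL Agent Operator role
ALTER ROLE [SQLAgentOperatorRole] ADD MEMBER [DOMAIN\SQLAgentAccount];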