Tuesday 15 December 2009

Nifty...

The Midnight DBA has posted something rather nifty that can be found in SQL Profiler. There is the option, using sp_trace_setevent, to check for deprecated code - code that just won't work in future versions of SQL Server. Her explanation is rather more coherent than mine; I suspect she wasn't quite as excited.

I wish I had known about this one before I started migrating databases from SQL 2000 to SQL 2008....

I'll be implementing it on the last databases to be moved tomorrow. Murphy's Law says that if I don't, something ghastly will happen.
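In the meantime, a quick check I've been using (my own sketch, nothing to do with her post): SQL 2008 also exposes a Deprecated Features performance counter, which counts how often deprecated syntax has been used since the service last started.

```sql
-- Sketch: list deprecated features used since SQL Server last restarted.
-- The counter object name varies with the instance name, hence the LIKE.
SELECT instance_name AS deprecated_feature,
       cntr_value    AS times_used
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```

It won't tell you *where* the deprecated code lives - that's what the trace is for - but it will tell you whether you have a problem at all.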

So I reported a bug...

And they got back to me! Hurrah!

Alas, I was out of the office the entire time they were trying to get back to me, hadn't set a watch on the bug.... and Microsoft concluded that I was no longer having issues, so closed it.

I shall try to get them to re-open this one. I know I am not the only one with the problem, but I do seem to be the only one who has reported it.... It's 507494, (if that link doesn't work, please let me know? It's a tad tricky when one is logged in to MS Connect).

Monday 2 November 2009

How to report a SQL Bug

1. Set up a Connect Account, using your Windows Live ID Passport (note, I find it more helpful to set these up using a personal email account, rather than a work account which won't migrate jobs with you. It is possible to change email accounts within Live, but it is a complete and utter pain). Join the SQL Server Connect group.
2. Visit the general feedback page and 'search this connection for' the thing that is causing you distress. IGNORE the 'Submit Feedback' button in the top right of the page. It isn't a button.
3. Find 'submit it yourself' (a link in the left hand menu).
OR
1. Click here to get directly to the relevant form (SQL Server Bug).

It's polite to search and check that no-one else has reported your bug. However, do not, unless you have lots of spare time, offer to fill in your SQL Profile. It takes an age, and I don't think it actually makes any difference as to whether you can report a bug.

Friday 30 October 2009

Change DB Owner

USE DatabaseName
GO
EXEC sp_changedbowner 'NewOwner'
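Worth noting: sp_changedbowner is itself on the deprecation list, so on SQL 2005 and later the same change can be made with ALTER AUTHORIZATION. A sketch (the database and login names are placeholders):

```sql
-- Sketch: the ALTER AUTHORIZATION equivalent of sp_changedbowner.
-- 'MyDatabase' and 'NewOwner' are placeholders.
ALTER AUTHORIZATION ON DATABASE::[MyDatabase] TO [NewOwner];
```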

Wednesday 28 October 2009

Things to remember when setting up replication

  1. Set up replication domain accounts for publisher and subscriber before you begin.
  2. Make sure that these accounts have access to the servers
  3. Make sure that the snapshot folder is on the correct server, in the correct place. If you get this wrong, right-click 'Replication', choose 'Distributor Properties', and click the ellipsis (...) button next to the publisher in question. However, when I tried this, it all went wrong, so I started all over again.
  4. Make sure that the publisher's account has access to the folder in which you are putting your snapshots.
  5. Make sure that the subscriber's account has read access to the folder in which you are putting your snapshots. Do this before generating a snapshot, it takes rather longer after you've generated the snapshot.
  6. Make sure that you use the full UNC path, not a mapped drive, e.g. \\myservername\r$\my folder\my subfolder instead of r:\my folder\my subfolder
  7. If you get a warning telling you that it's unlikely that Pull Subscriptions will be able to read that folder, without having admin rights on the Publisher Server, then heed it.
  8. It will be vastly easier to set up a Push Subscription in this scenario. For a Snapshot Replication that runs at midnight, this is probably preferable.
  9. There are at least six different places to check for replication problems, and it's probably not the top tab of the Replication Monitor. Try checking the other agents and the SQL Log.
  10. When setting up a Pull Replication for the first time, set it up to run 'on demand'. You can change the schedule in SQL Agent later.
  11. Make sure that the Snapshot Agent has successfully completed before trying to run any sort of Push or Pull Subscription. It saves heartache.
  12. Give the Publication a reasonably descriptive name, which should at the very least say where it is from. If it is going to multiple places, stating where it's going could be harder.
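On point 11, the "has the Snapshot Agent actually finished?" check can be scripted against the distribution database rather than clicked through in Replication Monitor. A sketch, assuming the distribution database has its default name:

```sql
-- Sketch: the most recent snapshot agent history rows, newest first.
-- Assumes the distribution database is called 'distribution'.
USE distribution;
GO
SELECT TOP 10
       a.name AS agent_name,
       h.start_time,
       h.comments   -- completion/progress messages end up here
FROM dbo.MSsnapshot_history AS h
JOIN dbo.MSsnapshot_agents  AS a ON a.id = h.agent_id
ORDER BY h.start_time DESC;
```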

Why service packs are a Good Idea

"The job failed. Unable to determine if the owner (Domain\Administrator) of job IpmsaPolicyLogMaintenance has server access (reason: Violation of PRIMARY KEY constraint 'PK__syscachedcredent__0AD2A005'. Cannot insert duplicate key in object 'syscachedcredentials'. [SQLSTATE 23000] (Error 2627) The statement has been terminated. [SQLSTATE 01000] (Error 3621))."

The quick & dirty fix? Set it to run at a slightly different time from any other SQL jobs: the error occurs when two jobs start simultaneously and both try to run an insert into the syscachedcredentials table, rather than one running an insert and one an update. Microsoft explain it more coherently.

The better fix? SQL 2000 SP1. Yup. SP1.

Friday 23 October 2009

This morning, I 'ave been mostly deleting old SQL Agent Jobs

And these have been those annoying jobs that are, apparently, undeletable and unwanted maintenance jobs.

Job #1

A rather ancient backup job, or, rather, the remnants thereof, sitting in SQL 2005 Agent but with no maintenance plan behind it, no subplan schedule within it, and generally going 'Ha Ha' in a manner eerily reminiscent of Nelson Muntz. Assuming that the following error message can be translated as 'Ha Ha!'.

The DELETE statement conflicted with the REFERENCE constraint "FK_subplan_job_id". The conflict occurred in database "msdb", table "dbo.sysmaintplan_subplans", column 'job_id'. The statement has been terminated. (Microsoft SQL Server, Error: 547)

First, I had a look in the sysjobs table


SELECT *
FROM msdb.dbo.sysjobs
WHERE name like 'plan name%'

This gave me the ID for the job, and then I was able to run the following against the sysmaintplan_subplans table


SELECT *
FROM sysmaintplan_subplans
WHERE job_id = '28B010B4-5FCD-4802-8405-808E2C9EA352'

(to confirm that I was looking at the correct job)


DELETE sysmaintplan_subplans
WHERE job_id = '28B010B4-5FCD-4802-8405-808E2C9EA352'

Then I was able to delete the job via Job Activity Monitor.
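The three steps above can be rolled into a single sketch, joining on the job name instead of copying GUIDs about ('plan name%' is a placeholder - review the SELECT output before running the DELETE):

```sql
-- Sketch: find, then remove, orphaned subplan rows for a dead maintenance job.
-- 'plan name%' is a placeholder; check the SELECT before deleting anything.
USE msdb;
GO
SELECT j.job_id, j.name, s.subplan_id
FROM dbo.sysjobs AS j
JOIN dbo.sysmaintplan_subplans AS s ON s.job_id = j.job_id
WHERE j.name LIKE 'plan name%';

DELETE s
FROM dbo.sysmaintplan_subplans AS s
JOIN dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE j.name LIKE 'plan name%';
-- If the DELETE hits a further FK (sysmaintplan_log), clear that
-- table's rows for the subplan first.
```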

Job #2

This was sitting on a SQL 2000 SP4 box, which had been created in the time honoured fashion: take an image of the original server and rename it. You get the most infuriating error

Error 14274: Cannot add, update, or delete a job (or its steps or schedules) that originated from an MSX server. The job was not saved.

For this one to be fixed, you need to correct the server name and then restart SQL. First, find out what name the server thinks it has by running

SELECT @@SERVERNAME

Then, rename it with the correct name

sp_dropserver 'OldServerName'

sp_addserver 'CorrectServerName', 'local'

Then, restart SQL. I got lucky. No-one was using the Server.

Run the following


UPDATE msdb..sysjobs
SET originating_server = 'CorrectServerName'
WHERE originating_server = 'OldServerName'

Bob's your Uncle, Charlie's your Aunt, and it should now be possible to delete the recalcitrant job.

This is a known problem that was introduced in SQL 2000 SP3, and has apparently persisted. Grrrr.

(This week I 'ave mostly been.... ref)

Monday 19 October 2009

MOM 2005 Database Sizes


Those of us with MOM 2005 (Microsoft Operations Manager) and Forefront Client Security have faced the problem of large databases (why yes, you do need sunglasses for that page).


The SystemCenterReporting Database needs to be really rather large: this nifty graph is my best effort involving Excel and the data given on the Technet Forefront Client Security Blog. The Y-axis is space required in GB, the X-axis is the number of servers monitored.
From this, we can see that vastly less disk space is needed if we are only keeping 180 days worth of data. The default is that the System Center Reporting database keeps 395 days worth of data, and, because MOM is sitting on top of that database (among others), this is the amount of data that it saves. All well and good, but for the fact that there is no GUI within MOM to change the number of days that data can be saved.
Just before we decided to put the datafiles onto the SAN, I found a very handy stored procedure. Sweet as, it ran really quickly and claims to have reduced the number of days data kept.
Now I'm just waiting to see if the overnight groom job will actually reduce the quantity of data kept... because that database is still full-as-full, with no room to expand, despite believing that it only keeps 90 days' worth of data. Not that I would allow the SystemCenterReporting DB to autogrow anyhow. Reporting databases (like SharePoint databases) are best created with a filesize that matches the anticipated eventual size of the database, and then left well alone with autogrow switched off. Log files should be treated in a similar manner. That way, the data can arrange itself in a much more logical manner, and fragmentation will be reduced.
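The pre-size-and-switch-off-autogrow approach looks something like this in T-SQL. This is a sketch: the sizes, and the logical file names, are placeholders (your real ones are in sys.database_files).

```sql
-- Sketch: pre-size a reporting database and disable autogrow.
-- Logical file names and sizes are placeholders; FILEGROWTH = 0
-- switches autogrow off for that file.
ALTER DATABASE SystemCenterReporting
    MODIFY FILE (NAME = SystemCenterReporting_Data, SIZE = 50GB, FILEGROWTH = 0);

ALTER DATABASE SystemCenterReporting
    MODIFY FILE (NAME = SystemCenterReporting_Log, SIZE = 10GB, FILEGROWTH = 0);
```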

Wednesday 14 October 2009

Another BizTalk Deletion...

From the Database Structure and Jobs page:


Warning

In the BizTalk Management (BizTalkMgmtDb) database, there's a stored procedure named dbo.adm_cleanupmgmtdb. DO NOT RUN THIS STORED PROCEDURE! If you do run this stored procedure, all the entries in the database will be deleted.

Why, I wonder, this obsession with deleting the entire contents of a database?

All this, plus talk of the fact that BizTalk can have distributed transactions pending across several databases, makes me distinctly antsy.

First job, therefore, for a bemused DBA with a whole new BizTalk system? Figure out how to configure backups. Clearly, my usual weekly-full-backup, daily-differential-backup and hourly-transaction-log-backup model is not going to work here. Not from a DR perspective anyhow. There will be tears and upsets if I'm not careful.

Luckily, I have some Test databases to play with, as well as the ones on the Live Server. So I shan't destroy the entire business if it goes a bit wrong.

Certainly, this is something to build into the DR plan.

Tuesday 13 October 2009

BizTalk

We've got BizTalk now, and I'm still getting to grips with it. However, I came across this little gem on the Best Practices page.

"Delete all the data
If the databases are too large and if the preferred method is to delete all data, you can delete the data.
Caution Do not use this method in any environment where the data is business critical or if the data is needed. "

You don't say....

(more on BizTalk later. At the moment I have Too Many Questions).

Friday 25 September 2009

Servernames

@@SERVERNAME. Sometimes, it isn't the same as the actual server's name: it's simply wrong, it doesn't match. Argh!! It's easy enough to map though:


SELECT @@SERVERNAME AS [@@SERVERNAME],
CAST(SERVERPROPERTY('MachineName') AS VARCHAR(128)) + COALESCE('\' +
CAST(SERVERPROPERTY('InstanceName') AS VARCHAR(128)), '') AS RealInstanceName

The problem arises, when someone decides to rename the Server after SQL has been installed. The new Servername will get recognised by SQL during startup: and, default or instance, it will happily allow you to connect to SQL using the new Servername afterwards.

But....

The sys.servers (or sysservers, if you are in SQL 2000) table still holds the old information, hence the erroneous information from @@SERVERNAME (if you run SELECT SERVERPROPERTY('ServerName') you will get the correct information).

So. You can change the sys.servers table information thus (having disabled replication AND dropped any remote logins first AND switched off mirroring AND scripted out all your maintenance plans AND deleted them...):

sp_dropserver old_servername\instancename
GO
sp_addserver new_servername\instancename, local
GO

If you don't disable replication or drop the remote logins, then there will be errors. Don't say you weren't warned. You may need, if you have linked servers, to re-add those. Client aliases using named pipes will also need to be checked and corrected.

If you didn't script out and delete your maintenance plans before you got going, there is now (as of SQL 2008) a script to deal with the fact that they cannot be changed nor deleted....

Which all goes to show that, really, once you have decided what the Server is going to be called, you should stick with it. Avoid calling servers the same as the previous server, but with 'New' on the end. Or calling anything 'Temp'. Or using a default name. SQL2000Win2003 might be descriptive, but it won't help if you don't identify which one of the four servers with that name it really is.... Particularly, avoid having a server named, say, George, calling the new server, NewGeorge, and then creating a third server to replace NewGeorge which is, confusingly, called George, but isn't the same server as the Original George was. Also confusing is a server which was called Mabel, but is now called Sybil, while another server used to be called Sybil but is now called Fiona, while there still exists a server which has been called Mabel all along.

In other words. THINK.

Monday 14 September 2009

It's the little things

Me, I like to faff about with my resultsets in Excel. This probably makes me incredibly old fashioned, and I'm sure various of my friends would call me a flat-earther. Tant pis.

However, I got all overexcited just now. Instead of highlighting the resultset, copying it using ctrl+c and pasting it into the spreadsheet at row #2, and then swearing greatly while I transferred the column names individually, I right clicked on the resultset after highlighting it.

Now, I've been using SSMS for SQL 2008 for, oh, six months. How did I fail to notice that there was an option to "Copy with Headers"?! If this has been an option in SQL 2005, well, please don't tell me. I'd rather not know that I missed it for, oh, two years.

Wednesday 2 September 2009

syspolicy_purge_history failed on the cluster

This one is apparently a bug. It's a job that SQL creates when it installs SQL 2008, and it will perpetually error on the "Erase Phantom System Health Records" step, owing to the fact that it will insist on calling the method on the node name, rather than on the virtual server name.

viz

"MessageExecuted as user: Domain\AgentServiceAccount. A job step received an error at line 1 in a PowerShell script. The corresponding line is '(Get-Item SQLSERVER:\SQLPolicy\SQLClusterInstanceNameNodeName\Default).EraseSystemHealthPhantomRecords()'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Method invocation failed because [Microsoft.SqlServer.Management.PowerShell.Extensions.Machine] doesn't contain a method named 'EraseSystemHealthPhantomRecords'. '. Process Exit Code -1. The step failed."

The way to deal with it is to edit the PowerShell in the job step so that it references the virtual server name rather than the node name:

(Get-Item SQLSERVER:\SQLPolicy\VirtualServerName\InstanceName).EraseSystemHealthPhantomRecords()

And, if it's the default instance, or there is only one SQL instance on the cluster, go for

(Get-Item SQLSERVER:\SQLPolicy\SQLClusterInstanceName\DEFAULT).EraseSystemHealthPhantomRecords()

In other words, just get rid of the node information, and all will be fine.

KB article.

Not that I have quite worked out what this job does: keeps things tidy if you have policy management set up, I imagine. Which is useful.

Tuesday 18 August 2009

Rebuild Index Failed

"failed with the following error: "Online index operations can only be performed in Enterprise edition of SQL Server.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly."

Or, perhaps, the failure reason is that the SQL Server in question is Standard Edition, and that's pretty much it?

Other suggestions here (if you're not in Standard Edition!).

It would be nice, though, if SQL were clever enough to know when it's running Standard Edition, and not give you the option of an Online Index Operation when you're building the maintenance plan, now wouldn't it?! Or is that a Moon-on-a-Stick request?

Thursday 13 August 2009

OSQL sort of isn't...

'OSQL' is not recognized as an internal or external command, operable
program or batch file.

Oh. But it's where it should be! C:\Program Files\Microsoft SQL Server\80\Tools\Binn (not my installation).

Within CMD, you can type 'set path' to check the variables in the default path. The actual location of osql.exe (or, rather, its parent folder) should be in there - SQL is supposed to add it during installation. If it isn't:

Follow the instructions here.

Sorted. And so quickly.
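For the record, the fix boils down to getting the Binn folder into the PATH, something like this from a CMD prompt (the folder is the one from that installation; check where your own osql.exe lives first):

```shell
:: Check whether the folder is already in the path
set path
:: Append it for the current session (use the System Properties dialog,
:: or setx, to make the change permanent)
set PATH=%PATH%;C:\Program Files\Microsoft SQL Server\80\Tools\Binn
```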

Tuesday 11 August 2009

Because I am a doofus

And, because, primarily, this is my repository of Useful SQL Information.

Master
Contains serverwide information about SQL
  • logins
  • linked server info
  • configuration info
  • info about user databases - locations of db files, key properties etc

Vital: SQL cannot start without it.

Resource

AKA mssqlsystemresources.mdf in the file system. Completely invisible in SSMS. Contains all the system objects deployed in SQL: all the system stored procedures and system views (although these appear logically in the databases to which they belong). Having all this information in one place makes life easier when performing upgrades or applying Service Packs. Do not go looking for it. You will not find it, except as a file.

Cannot be backed up.

Model

Template database: all databases need a base set of objects called the system catalog. (I know, American Spelling). When you create a new database, it uses the model database as the basis for its structure. So, if you want a people table in every single database you create, create it in the model database.

MSDB

Stores information for SQL Server Agent, Service Broker, Database Mail, Log Shipping etc. SQL Server Agent job information gets stored here, i.e. job parameters, schedules, execution history. MSDB also stores backup history and maintenance plan information, as well as information about the servers and tables involved in log shipping.

If you're backing up database logs every 15 minutes or so, and you have lots of databases in Full Recovery Mode, make sure you schedule a regular tidy-up of MSDB with a cleanup task, so that it doesn't get too big.
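If there isn't a History Cleanup task to hand, the same tidy-up can be done directly. A sketch that keeps the last 90 days of backup history:

```sql
-- Sketch: purge backup history older than 90 days from msdb.
DECLARE @oldest datetime;
SET @oldest = DATEADD(day, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest;
```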

Distribution

Used during replication, it stores metadata and history information and transaction information for transactional replication. Until replication is set up, the distribution database is invisible in SSMS, however, the data files are installed by default.

TempDB

It's all in the name for this one: temporary data and data objects. Temporary tables. Temporary stored procedures. That sort of malarkey. Also used for sort operations for large queries, some indexing, and during DBCC CHECKDB (database_name, REPAIR_REBUILD). Thus tends to get a bit out of control on occasion.

Gets recreated every single time SQL Server Service is started (very handy if it's gotten a little Out Of Control recently).

Cannot be backed up.
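A quick way to see what tempdb is actually being used for (SQL 2005 onwards), before deciding whether a service restart is really the answer:

```sql
-- Sketch: tempdb space usage by category, in MB (pages are 8KB).
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```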

Monday 27 July 2009

SPFun

Or what I learned about Service Packs.

A SQL Maintenance Plan job had been running fine on my SQL 2005 RTM machine for months: until I joined the company, and made the mistake of having a look at it and doing some tidying. Next thing I know?

"Unable to start execution of step 1 (reason: line(1): Syntax error). The step failed.".

"But, but, but, but!" I cried "I've just stopped it faffing about with Adventureworks, and removed that database from the server. I haven't touched the syntax directly, I dealt with this in SQL Management Studio. The syntax, surely, should be fine?"

I shrugged, recreated the job... and the same thing happened the next time it ran.

Hmmm.

It would seem that this particular error happens when SP2 is not yet installed on one's SQL box, and one edits a job via a newer version of SSMS. If one plays about with the job on the box, the problem is solved. Or, if one does the sensible thing, and installs the service packs.

I foresee some out of hours work for us here.

Friday 17 July 2009

SQL Integrity Checks

The job failed "Integrity Checks Job for DB Maintenance Plan 'DB Maintenance Plan1' sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed." That's the job history. The history against the Maintenance plan (I only learned about this more detailed history last week, and I think it's fab), said "[Microsoft][ODBC SQL Server Driver][SQL Server]Repair statement not processed. Database needs to be in single user mode."

There are all sorts of ways of dealing with this.

Firstly, you can untick the box that says "Attempt to repair any minor problems" within the database maintenance plan. This is the same as changing the end of the SQL statement in the scheduled job to read "-CkDB" instead of "-CkDBRepair". If the job fails, this will flag up that there are errors within the database. Run DBCC CHECKDB at some point when you get in, and investigate from there. Databases need to be in single user mode to repair themselves. Sometimes, this is not possible (for example, SQL boxes that support websites with 24/7 availability) and you need scheduled downtime.

Secondly, you can ensure that users are not logged in. In one case, I shifted the job to run at 23:45 instead of at 19:30 - there were a couple of extremely enthusiastic users who were still logged in at 19:30.

How did I find out who they were? Script... On all servers, I find that an admindb is very helpful. So, I created a new table:


USE [admindb]
GO
/****** Object: Table [dbo].[tbl_sweetsql_usersloggedin] Script Date: 07/17/2009 10:15:11 (C) Sweet SQL Lass ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[tbl_sweetsql_usersloggedin](
[loginame] [varchar](128) NOT NULL,
[logintime] [datetime] NULL,
[currenttime] [datetime] NULL,
[hostname] [varchar](128) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO

and then ran the following as a scheduled job:

USE [master]
GO
/****** Script Date: 07/17/2009 10:15:11 (C) Sweet SQL Lass ******/
INSERT INTO admindb..tbl_sweetsql_usersloggedin
(loginame, logintime, currenttime, hostname)
SELECT loginame,
login_time,
getdate(),
hostname
FROM
sysprocesses
WHERE dbid = 7

(use
SELECT *
FROM sysdatabases
to find out what the dbid is).

The above code works on SQL 2000. For SQL 2005/8, use sys.sysprocesses and sys.databases views to get the same information. This is published under a creative commons license. (c) Sweet SQL Lass.
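On SQL 2005/8, the same job can run against the sys.sysprocesses compatibility view, and DB_ID() saves hard-coding the dbid. A sketch (the database name is a placeholder):

```sql
-- Sketch: SQL 2005/8 version of the logged-in-users job.
-- 'MyDatabase' is a placeholder; DB_ID() replaces the hard-coded dbid.
INSERT INTO admindb.dbo.tbl_sweetsql_usersloggedin
    (loginame, logintime, currenttime, hostname)
SELECT loginame,
       login_time,
       GETDATE(),
       hostname
FROM sys.sysprocesses
WHERE dbid = DB_ID('MyDatabase');
```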

If the users are really lazy, and just didn't log out, and you want to repair things in the middle of the night, then you may need to use some SQL code to deal with this and throw them out of the database. There is plenty out there (and I have a version a colleague produced, and am seeking permission to publish, because it is rather nifty). Personally, I wouldn't recommend this. At the very least, warn your users that they will get logged out.
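If you do decide to throw them out, the blunt instrument is to flip the database into single-user mode, rolling back any open transactions. A sketch, not a recommendation ('MyDatabase' is a placeholder), and do warn your users first:

```sql
-- Sketch: kick everyone out, repair, then let them back in.
-- 'MyDatabase' is a placeholder; open transactions are rolled back.
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('MyDatabase', REPAIR_REBUILD);
ALTER DATABASE MyDatabase SET MULTI_USER;
```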

Heck. Microsoft doesn't recommend automatically repairing your database as part of an Integrity Check job. So who am I to go against that idea?

Finally, don't forget - log the information for your Maintenance Jobs to a Report File. It makes life much easier if you've done something really daft, like drop a database from the server without bothering to remove it from the Maintenance Job first.

Tuesday 14 July 2009

Fun and Games with SCOM

Trying to find out what the recommended database size for Operations Manager 2007 should be is a headache....

Picture this. The boss mentions that you might just want to check the SCOM server, because, based on the situation on the MOM server, it might just be misconfigured. 41,240 events worth of MSSQLSERVER errors in the Application Event Log misconfigured? Owing to lack of space in the OperationsManagerDB? Yes. At this point, all you can do is laugh. Lots. It was quite the sight.

A trawl round the internet suggests this spreadsheet may help work out how big this database needs to be - it's Operations Manager 2007. Like SharePoint, and MOM, the database is not supposed to be set to autogrow, and it needs elbow room (about 40% freespace on the data file - see the comments) for indexing and suchlike. An excellent article can be found here about estimating database sizes, and there is also this article and an explanation of how to turn it into a spreadsheet here.

It helps if you know how many agents you're monitoring. We think 40 per server - and we get a result in the spreadsheet that's about the same size as the current data file, which has zero free space. So, I doubled the size of the data file. This gave me 49% free space. Not bad for a guess! I begin to wonder if SQL Enterprise Edition has different stats from SQL Standard Edition for this as well - we're running SQL Standard Edition, and it always seems to want More Space than Enterprise Edition.

Miraculously, when the database has grown, all errors stop, all of a sudden, we have indexes and, most importantly, we can monitor again.

Playing with the spreadsheet suggests that the Data Warehouse data file needs to be about 20 times the size of the Operations Manager data file. But it's OK: you can set this to autogrow... There will only be a problem when we're out of disk space on the server - I've got 120GB to play with for the datafiles for all the databases on the server.

Tempdb also needs to be about 20% of the size of the two other databases combined. Careful though: if you set it to autogrow, this mucks up Operation Manager's ability to monitor it out of the box. If you don't set to autogrow, then you get lots of emails about it... because it's one of the system databases... Shame. Like Thomas says: this is the sort of thing we might just like to know!

All of which begs the question: where did the email telling someone that the Operations Manager database had run out of space actually go? That I must find out!