Friday, 30 October 2009

Change DB Owner

USE MyDatabase  -- the database whose owner you want to change
EXEC sp_changedbowner 'NewOwner'  -- the login that will become the new owner
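
On SQL 2005 and later, the same change can be made with ALTER AUTHORIZATION, which is the newer syntax for this. A minimal sketch, with placeholder names:

ALTER AUTHORIZATION ON DATABASE::MyDatabase TO [NewOwner]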

Wednesday, 28 October 2009

Things to remember when setting up replication

  1. Set up replication domain accounts for publisher and subscriber before you begin.
  2. Make sure that these accounts have access to the servers.
  3. Make sure that the snapshot folder is on the correct server, in the correct place. If you get this wrong, right-click on 'Replication --> Distributor Properties --> the button with the ellipsis (...) next to the publisher in question'. However, when I tried this, it all went wrong, so I started all over again.
  4. Make sure that the publisher's account has access to the folder in which you are putting your snapshots.
  5. Make sure that the subscriber's account has read access to the folder in which you are putting your snapshots. Do this before generating a snapshot, it takes rather longer after you've generated the snapshot.
  6. Make sure that you use the full UNC path, not a mapped drive, e.g. \\myservername\r$\my folder\my subfolder instead of r:\my folder\my subfolder.
  7. If you get a warning telling you that Pull Subscriptions are unlikely to be able to read that folder without admin rights on the Publisher server, heed it.
  8. It will be vastly easier to set up a Push Subscription in this scenario. For a Snapshot Replication that runs at midnight, this is probably preferable.
  9. There are at least six different places to check for replication problems, and the one you need is probably not the top tab of the Replication Monitor. Try checking the other agents and the SQL Log.
  10. When setting up a Pull Replication for the first time, set it up to run 'on demand'. You can change the schedule in SQL Agent later.
  11. Make sure that the Snapshot Agent has successfully completed before trying to run any sort of Push or Pull Subscription (see the query after this list). It saves heartache.
  12. Give the Publication a reasonably descriptive name, which should at the very least say where it is from. If it is going to multiple places, stating where it's going could be harder.
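
Regarding point 11: one quick way to check the Snapshot Agent is to query its history on the Distributor. A sketch, assuming the distribution database has its default name; a runstatus of 2 means the last run succeeded, 5 means retrying, 6 means failed:

USE distribution
SELECT TOP 10 a.name, h.runstatus, h.start_time, h.comments
FROM dbo.MSsnapshot_history h
JOIN dbo.MSsnapshot_agents a ON h.agent_id = a.id
ORDER BY h.start_time DESC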

Why service packs are a Good Idea

"The job failed. Unable to determine if the owner (Domain\Administrator) of job IpmsaPolicyLogMaintenance has server access (reason: Violation of PRIMARY KEY constraint 'PK__syscachedcredent__0AD2A005'. Cannot insert duplicate key in object 'syscachedcredentials'. [SQLSTATE 23000] (Error 2627) The statement has been terminated. [SQLSTATE 01000] (Error 3621))."

The quick & dirty fix? Set it to run at a slightly different time from any other SQL jobs: the error occurs when two jobs start simultaneously and both try to run an insert into the syscachedcredentials table, rather than one running an insert and one an update. Microsoft explain it more coherently.
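
Nudging a job's start time is a one-liner against msdb. A hypothetical sketch using the SQL 2000-era procedure (the schedule name is made up; the time is an integer in HHMMSS form):

EXEC msdb.dbo.sp_update_jobschedule
    @job_name = 'IpmsaPolicyLogMaintenance',
    @name = 'Nightly',
    @active_start_time = 500  -- 00:05:00, i.e. five minutes past midnight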

The better fix? SQL 2000 SP1. Yup. SP1.

Friday, 23 October 2009

This morning, I 'ave been mostly deleting old SQL Agent Jobs

And these have been those annoying, apparently undeletable, unwanted maintenance jobs.

Job #1

A rather ancient backup job, or, rather, the remnants thereof, sitting in SQL 2005 Agent but with no maintenance plan behind it, no subplan schedule within it, and generally going 'Ha Ha' in a manner eerily reminiscent of Nelson Muntz. Assuming that the following error message can be translated as 'Ha Ha!'.

The DELETE statement conflicted with the REFERENCE constraint "FK_subplan_job_id". The conflict occurred in database "msdb", table "dbo.sysmaintplan_subplans", column 'job_id'. The statement has been terminated. (Microsoft SQL Server, Error: 547)

First, I had a look in the sysjobs table


SELECT *
FROM msdb.dbo.sysjobs
WHERE name like 'plan name%'

This gave me the ID for the job, and then I was able to run the following against the sysmaintplan_subplans table


SELECT *
FROM msdb.dbo.sysmaintplan_subplans
WHERE job_id = '28B010B4-5FCD-4802-8405-808E2C9EA352'

(to confirm that I was looking at the correct job)


DELETE FROM msdb.dbo.sysmaintplan_subplans
WHERE job_id = '28B010B4-5FCD-4802-8405-808E2C9EA352'
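
If that DELETE is itself blocked by another foreign key (it wasn't here), the usual suspect is the subplan's history in sysmaintplan_log. Verify on your own server first, but clearing it looks like:

DELETE FROM msdb.dbo.sysmaintplan_log
WHERE subplan_id IN (SELECT subplan_id
                     FROM msdb.dbo.sysmaintplan_subplans
                     WHERE job_id = '28B010B4-5FCD-4802-8405-808E2C9EA352')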

Then I was able to delete the job via Job Activity Monitor.

Job #2

This was sitting on a SQL 2000 SP4 box, which had been created in the time-honoured fashion: take an image of the original server and rename it. You get the most infuriating error:

Error 14274: Cannot add, update, or delete a job (or its steps or schedules) that originated from an MSX server. The job was not saved.

To fix this one, you need to restart SQL. First, find out what the original server was called by running

SELECT @@SERVERNAME
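
As a quick cross-check, SERVERPROPERTY('ServerName') returns the actual network name, so comparing the two shows the mismatch directly:

SELECT @@SERVERNAME AS believed_name,               -- what sysservers says
       SERVERPROPERTY('ServerName') AS actual_name  -- the real machine name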

Then, rename it with the correct name

EXEC sp_dropserver 'OldServerName'
GO
EXEC sp_addserver 'CorrectServerName', 'local'

Then, restart SQL. I got lucky. No-one was using the Server.

Run the following


UPDATE msdb..sysjobs
SET originating_server = 'CorrectServerName'
WHERE originating_server = 'OldServerName'

Bob's your Uncle, Charlie's your Aunt, and it should now be possible to delete the recalcitrant job.

This is a known problem that was introduced in SQL 2000 SP3, and has apparently persisted. Grrrr.

(This week I 'ave mostly been... ref)

Monday, 19 October 2009

MOM 2005 Database Sizes


Those of us with MOM 2005 (Microsoft Operations Manager) and Forefront Client Security have faced the problem of large databases (why yes, you do need sunglasses for that page).


The SystemCenterReporting Database needs to be really rather large: this nifty graph is my best effort involving Excel and the data given on the Technet Forefront Client Security Blog. The Y-axis is space required in GB, the X-axis is the number of servers monitored.
From this, we can see that vastly less disk space is needed if we are only keeping 180 days' worth of data. The default is that the SystemCenterReporting database keeps 395 days' worth of data, and, because MOM sits on top of that database (among others), this is the amount of data that it saves. All well and good, but for the fact that there is no GUI within MOM to change the number of days for which data is kept.
Just before we decided to put the data files onto the SAN, I found a very handy stored procedure. Sweet as: it ran really quickly and claims to have reduced the number of days' data kept.
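
Assuming the procedure in question is SystemCenterReporting's p_updategroomdays, it takes a table name and the number of days to keep. A sketch for one of the fact tables (check the table names against your own install before running anything):

USE SystemCenterReporting
EXEC dbo.p_updategroomdays 'SC_SampledNumericDataFact_Table', 90
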
Now I'm just waiting to see if the overnight groom job will actually reduce the quantity of data kept... because that database is still full-as-full, with no room to expand, despite believing that it only keeps 90 days' worth of data.

Not that I would allow the SystemCenterReporting DB to autogrow anyhow. Reporting databases (like SharePoint databases) are best created with a file size equal to the anticipated eventual size of the database, and then left well alone with autogrow switched off. Log files should be treated in a similar manner. That way, the data can arrange itself in a much more logical manner, and fragmentation will be reduced.
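
Pre-sizing a file and switching autogrow off is a one-off ALTER DATABASE per file; a sketch with illustrative names and sizes (the logical file name is whatever sp_helpfile reports):

ALTER DATABASE SystemCenterReporting
MODIFY FILE (NAME = SystemCenterReporting_Data, SIZE = 200GB, FILEGROWTH = 0)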

Wednesday, 14 October 2009

Another BizTalk Deletion...

From the Database Structure and Jobs page:


Warning

In the BizTalk Management (BizTalkMgmtDb) database, there's a stored procedure named dbo.adm_cleanupmgmtdb. DO NOT RUN THIS STORED PROCEDURE! If you do run this stored procedure, all the entries in the database will be deleted.

Why, I wonder, this obsession with deleting the entire contents of a database?

All this, plus talk of the fact that BizTalk can have distributed transactions pending across several databases, makes me distinctly antsy.

First job, therefore, for a bemused DBA with a whole new BizTalk system? Figure out how to configure backups. Clearly, my usual weekly-full-backup, daily-differential-backup and hourly-transaction-log-backup model is not going to work here. Not from a DR perspective anyhow. There will be tears and upsets if I'm not careful.
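
From what I've read so far, the supported route is the Backup BizTalk Server job, which keeps all the databases consistent by using marked transactions rather than ordinary log backups. A minimal sketch of the underlying T-SQL mechanism, not the BizTalk job itself (all names illustrative):

BEGIN TRANSACTION BtsMark WITH MARK 'BtsMark'
UPDATE dbo.SomeTable SET SomeColumn = SomeColumn  -- any write, so the mark lands in the log
COMMIT TRANSACTION BtsMark

-- Each database can later be restored to the same point:
-- RESTORE LOG BizTalkMgmtDb FROM DISK = 'D:\Backups\BizTalkMgmtDb.trn'
-- WITH STOPATMARK = 'BtsMark'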

Luckily, I have some Test databases to play with, as well as the ones on the Live Server. So I shan't destroy the entire business if it goes a bit wrong.

Certainly, this is something to build into the DR plan.

Tuesday, 13 October 2009

BizTalk

We've got BizTalk now, and I'm still getting to grips with it. However, I came across this little gem on the Best Practices page.

"Delete all the data
If the databases are too large and if the preferred method is to delete all data, you can delete the data.
Caution Do not use this method in any environment where the data is business critical or if the data is needed."

You don't say....

(more on BizTalk later. At the moment I have Too Many Questions).