Wednesday 17 February 2010

But what if the FTS won't?

When building a Full Text Search Catalog and Index, it is a Good Idea to check that the search works properly at the end of it, by using something along the following lines:

SELECT *
FROM tablename
WHERE FREETEXT (COLUMN, 'some text')


Now, I knew the create script worked, as I'd used it to create my Very First Full Text Catalog and Index only a week ago. I knew the select would work, because, again, I'd tested it. SQL 2008 SP1 was about to have a bit of a disagreement with me about what was, and what wasn't, going to work.

Not so.

SQL Server encountered error 0x80070005 while communicating with full-text filter daemon host (FDHost) process. Make sure that the FDHost process is running. To re-start the FDHost process, run the sp_fulltext_service 'restart_all_fdhosts' command or restart the SQL Server instance.



and in the Logs:


SQL Server failed to communicate with filter daemon launch service (Windows error: Windows Error: hr = 0x80070005(failed to retrieve text for this error)). Full-Text filter daemon process failed to start. Full-text search functionality will not be available.

Running sp_fulltext_service 'restart_all_fdhosts' did not improve matters: the command ran happily, but the error persisted. Restarting SQL didn't help either (heck, it's a DEV box, and not yet ready for the Developers).

Searching for 0x80070005 told me that the problem was to do with permissions, and the solution turned out to be setting the SQL Full-text Filter Daemon Launcher service to run under the same account as the SQL Server service.

The SELECT statement then deigned to execute, but did not return any results. Rebuilding the index did not help matters; however, dropping the FULLTEXT index, the index, and the Full Text catalog did work. T-SQL below, for posterity, for the next time I have this problem.



-- The full text index has to go before the catalog that contains it,
-- or the DROP FULLTEXT CATALOG will fail
DROP FULLTEXT INDEX ON dbo.TableName
DROP INDEX IndexName ON dbo.TableName
DROP FULLTEXT CATALOG CatalogName
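Recreating everything afterwards runs in the opposite order. A sketch, with all the names (table, column, key column, catalog) as placeholders:

```sql
-- Recreate in reverse order: catalog, then the unique key index,
-- then the full text index itself (all names are placeholders)
CREATE FULLTEXT CATALOG CatalogName

CREATE UNIQUE INDEX IndexName ON dbo.TableName (IdColumn)

CREATE FULLTEXT INDEX ON dbo.TableName (ColumnName)
    KEY INDEX IndexName ON CatalogName
    WITH CHANGE_TRACKING AUTO
```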

Monday 15 February 2010

When the autoshrink/autogrow won't show in SQL Reports

...Try the script below, from this discussion on MSDN. It worked beautifully for me when it turned out that there were well over a million rows of autogrow/autoshrink data, and thus the autogrow/autoshrink data from SQL 2008's built-in reports wasn't going to show correctly.

I then changed the data file size, so that the thing grew in one BIG chunk, gave it lots of elbow room for more data to fill up the file, and left it well alone.

DECLARE @filename VARCHAR(100), @filenum INT

SELECT @filename = CAST(value AS VARCHAR(100))
FROM fn_trace_getinfo(DEFAULT)
WHERE property = 2
  AND traceid = 1
  AND value IS NOT NULL

-- Go back 4 files, since the default trace only keeps the last 5, and start from there.
SELECT @filename = SUBSTRING(@filename, 0, CHARINDEX('_', @filename) + 1)
    + CONVERT(VARCHAR, (CONVERT(INT, SUBSTRING(LEFT(@filename, LEN(@filename) - 4),
        CHARINDEX('_', @filename) + 1, LEN(@filename))) - 4))
    + '.trc'

-- Check if you have any events
SELECT gt.HostName,
       gt.ApplicationName,
       gt.NTUserName,
       gt.NTDomainName,
       gt.LoginName,
       gt.SPID,
       gt.EventClass,
       te.Name AS EventName,
       gt.EventSubClass,
       gt.TextData,
       gt.StartTime,
       gt.EndTime,
       gt.ObjectName,
       gt.DatabaseName,
       gt.FileName
FROM [fn_trace_gettable](@filename, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE gt.EventClass IN (92, 93, 94, 95)
  AND gt.DatabaseName = 'tempdb'  -- Change the DB Name here
ORDER BY gt.StartTime;
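Growing the data file in one big chunk afterwards, as mentioned above, looks something like this; the logical file name and the target size are placeholders:

```sql
-- Grow the data file once, to a size with plenty of elbow room,
-- rather than letting it autogrow in lots of small increments
-- (logical file name and size below are placeholders)
ALTER DATABASE [databasename]
MODIFY FILE
(
    NAME = datafilelogicalname,
    SIZE = 10240MB
)
GO
```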

Thursday 11 February 2010

Taking it further

The awesome SQLRockStar came out with a nifty tip about fragmented Virtual Log Files. SQL has this tendency to split the transaction log file out into different chunks. It might be physically contiguous, but, logically, it is fragmented, and this affects performance.
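A quick way to see how many Virtual Log Files a database has, on anything from SQL 2000 upwards, is DBCC LOGINFO, which returns one row per VLF:

```sql
USE [databasename]  -- placeholder database name
GO
-- One row per Virtual Log File: a row count in the hundreds
-- or thousands suggests the log is badly fragmented
DBCC LOGINFO
GO
```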



The tip teaches us how to use Policy Based Management in SQL 2008 to work out which databases have extensive Virtual Log File fragmentation. So far, so good. But hark! I hear you cry! I only have one SQL 2008 Server. Everything else is still SQL 2000 or SQL 2005. There is no Policy Based Management tool there.


Panic not. If you have set up a Central Management Server in SQL 2008, and registered your servers within it, you can evaluate the Policy against all of them. Right-click the group you want to check, pick 'Evaluate Policies', choose your source (i.e. the SQL 2008 box on which you set up the initial policy), and let it run. It tends to look as though it's stuck if one of the servers in the group isn't running SQL or is turned off, but leave it be: eventually it'll stop trying to connect to that box and move on to the next one.

Once you know what's up, you can follow the instructions here. I did make up a handy script to make life a little easier, and I cannot emphasise enough that you must back everything up properly before you get going. It will save a vast amount of heartache in the long run. Honest truly.

USE [databasename]
GO

-- Backup the Log File
BACKUP LOG [databasename] TO devicename
GO

-- Set recovery to simple mode, otherwise the log file refuses to shrink
ALTER DATABASE [databasename]
SET RECOVERY SIMPLE
GO

-- Shrink the log file
DBCC SHRINKFILE (transactionloglogicalfilename, TRUNCATEONLY)
GO

-- Grow it in ONE BIG STEP
ALTER DATABASE [databasename]
MODIFY FILE
(
    NAME = transactionloglogicalfilename,
    SIZE = newtotalsize -- MB
)
GO

-- Set recovery back to full mode
ALTER DATABASE [databasename]
SET RECOVERY FULL
GO

Finally, run a full backup, so that all future transaction log backups have something to work from.
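That closing full backup follows the same placeholder pattern as the script above:

```sql
-- Full backup, so the transaction log backup chain has a base to work from
BACKUP DATABASE [databasename] TO devicename
GO
```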

(I used the instant SQL formatter to format the SQL for this post. It looks, um, interesting....)

Friday 5 February 2010

Scrap of SQL

How to identify duplicates in a table....


SELECT email,
       COUNT(email) AS numoccurrences
FROM users
GROUP BY email
HAVING COUNT(email) > 1

Of course, what to do with all those duplicates is up to you!
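If the answer happens to be "delete all but one of each", a CTE with ROW_NUMBER (so SQL 2005 onwards) is one way to do it. A sketch only, and very much a back-everything-up-first job, since it doesn't care which duplicate survives:

```sql
-- Number the rows within each email address, then delete
-- everything past the first. Which row survives is arbitrary.
WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY email ORDER BY email) AS rn
    FROM users
)
DELETE FROM numbered
WHERE rn > 1;
```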

Monday 1 February 2010

3rd Party Software Again...

This time, Lightspeed Systems' Total Traffic Control (TTC) seems to be the culprit. It has a nifty little habit of setting autoshrink = true for all its databases, and then it gets terribly upset when the database file fills up. Of course, if it wasn't autoshrinking, then it wouldn't need to autogrow, and it wouldn't get its knickers in a twist when it couldn't manage that autogrow under SQL's default setting of 10% growth.

When autoshrink = true, SQL will try to shrink all database files with more than 25% free space. This creates a performance hit for the CPU: the default setting for SQL is to set autoshrink = false (except on the Desktop Edition, where the default setting for SQL is autoshrink = true).

When autogrow = true, SQL will try to grow the database file while running an INSERT. The query may well time out, the autogrow will fail, and the database file will fill up. The default for SQL is to set autogrow = true, with a growth of 10%.
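On SQL 2005 and later, finding the offenders and switching autoshrink back off is straightforward; the database name in the ALTER is a placeholder:

```sql
-- List every database that has autoshrink switched on
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn it off for one of them (placeholder database name)
ALTER DATABASE [TTC_Database] SET AUTO_SHRINK OFF;
```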

Both autogrow and autoshrink cause problems with file fragmentation and index fragmentation. It is vastly more healthy to set the initial database file size to the anticipated database file size, and then leave it well alone. There will be less load on the CPU, neither files nor indexes nor pages will fragment and the server will run more efficiently. It's like the difference between ironing a shirt on an ironing board, unimpeded, and ironing a shirt on a folded up towel on one corner of your work desk. Yes, it can be done, but no, we really don't want to as it takes longer and is vastly more exhausting.

Why Lightspeed Systems seems to think that Total Traffic Control's databases should be set to autoshrink is something I haven't yet established. My Google-fu is not strong today, so if you do know the answer, I would love to hear it.