Category Archives: SQL

Transact-SQL or SQL Server related

Using EXISTS with table variables

I ran into a need to use EXISTS with a table variable yesterday and thought it might be nice to make it available to others.

As you can see, all you need to do is enclose the table variable in [ ].
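A rough sketch of the pattern, with invented names (on SQL Server 2005 and later a table variable can sit inside EXISTS like any other table source):

    -- Names here are invented, purely for illustration
    DECLARE @NewOrders TABLE (OrderId INT PRIMARY KEY);

    INSERT INTO @NewOrders (OrderId) SELECT 42;

    -- The table variable is queried inside EXISTS like any other table source
    IF EXISTS (SELECT 1 FROM @NewOrders)
        PRINT 'Rows to process.';
    ELSE
        PRINT 'Nothing to do.';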

TSQL Pull file into binary field

Ok so I needed a fast way to import a file from the filesystem straight into a binary database field.
It couldn't be easier with OPENROWSET.
Here is how.
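Something along these lines, with an invented table and file path (the path must be a literal in OPENROWSET(BULK ...), and the SQL Server service account needs read access to it):

    -- dbo.Documents is an invented table with a VARBINARY(MAX) column
    INSERT INTO dbo.Documents (FileName, FileData)
    SELECT N'\\fileserver\share\report.pdf',
           BulkColumn   -- SINGLE_BLOB returns the whole file as one varbinary value
    FROM OPENROWSET(BULK N'\\fileserver\share\report.pdf', SINGLE_BLOB) AS src;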

Simple as that.
——————————-
Ok so I have an update to make here. In most cases the file you're trying to pull from the filesystem does exist, but wouldn't it be nice to make sure it does before you go ahead? Well, here's how to do it using xp_fileexist.
I've also left in the snippet that tidies up the ' character in UNC filenames. If you tidy it before the file-exist test, the test will fail, since the tidied path no longer matches the file on disk. Be sure to leave it between the exist test and the OPENROWSET call.
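Put together, it looks something like this sketch (table name invented; xp_fileexist lives in master and answers through an OUTPUT parameter):

    DECLARE @FileName   NVARCHAR(400),
            @FileExists INT,
            @Sql        NVARCHAR(1000);

    SET @FileName = N'\\fileserver\share\o''brien report.pdf';

    -- 1) Test the raw, untouched path
    EXEC master.dbo.xp_fileexist @FileName, @FileExists OUTPUT;

    IF @FileExists = 1
    BEGIN
        -- 2) Only now tidy the ' characters (double them up) so the path
        --    can be embedded safely in the dynamic SQL string
        SET @FileName = REPLACE(@FileName, N'''', N'''''');

        -- 3) Build and run the OPENROWSET call (dbo.Documents is invented)
        SET @Sql = N'INSERT INTO dbo.Documents (FileData) '
                 + N'SELECT BulkColumn '
                 + N'FROM OPENROWSET(BULK N''' + @FileName + N''', SINGLE_BLOB) AS src;';
        EXEC (@Sql);
    END
    ELSE
        PRINT 'File not found - skipping import.';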

SQL job: prevent concurrency between jobs

Have you ever needed to make absolutely sure two jobs don't run at the same time? Just schedule them to run at different times, I hear you say. Well, in some cases it's not that easy. Sometimes you have jobs that run every 20 minutes and others on a similar schedule, and there is a chance one job could run longer than usual for whatever reason; most of the time it's unforeseen, as in the case I had recently.
So I came up with a solution: a simple stored procedure I wrote. It will live in your master DB. Why master, I hear you ask? Well, in order to prevent collisions between databases, it is best for it to live in master.
1st: write a list of the job names you want to prevent from running concurrently, and then follow the instructions here.
2nd: place the code snippet below only once, in the first step of each job, so it runs prior to anything else in the job.
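A sketch of the sort of procedure and job-step call involved (names invented, not the original proc); it checks the current SQL Agent session in msdb for any listed job that is still running, and fails the step if it finds one:

    USE master;
    GO

    CREATE PROCEDURE dbo.usp_FailIfJobsRunning
        @JobList    NVARCHAR(4000),  -- comma-separated job names, e.g. N'Feed import,Stats rebuild'
        @CallingJob SYSNAME          -- the job this check is running inside
    AS
    BEGIN
        SET NOCOUNT ON;

        IF EXISTS
        (
            SELECT 1
              FROM msdb.dbo.sysjobactivity AS ja
              JOIN msdb.dbo.sysjobs AS j
                ON j.job_id = ja.job_id
             WHERE ja.session_id = (SELECT MAX(session_id)
                                      FROM msdb.dbo.syssessions)  -- current Agent session
               AND ja.start_execution_date IS NOT NULL
               AND ja.stop_execution_date IS NULL                 -- i.e. still running
               AND j.name <> @CallingJob                          -- ignore ourselves
               AND N',' + @JobList + N',' LIKE N'%,' + j.name + N',%'
        )
            RAISERROR('A conflicting job is already running - aborting.', 16, 1);
    END
    GO

    -- The snippet for step 1 of each job; set the step to quit the job on failure.
    EXEC master.dbo.usp_FailIfJobsRunning
         @JobList    = N'Feed import,Stats rebuild',
         @CallingJob = N'Feed import';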

SQL UPDATE() updating millions of records nicely

Ever needed to update millions of records before?
New to tackling this?
I had this experience when our database contained 17 million records, of which 15.5 million were junk and could be discarded. My mission was to flag the unwanted records and remove them.
This blog is simply about how to go about updating a table with millions of records. Fear not: there is an approach that won't lock your database for hours or exponentially increase the size of your transaction log file. It's known as chunked updates.

The original source is explained here: http://www.sqlusa.com/bestpractices2005/hugeupdate/

In my case I needed to update 17 million records based on a cross-reference match of each record against another table containing 5.5K records; a single update statement would have taken hours and hours.
The chunked approach on an optimised, indexed set of tables took just 35 minutes on SQL Server 2005 Standard Edition licensed for a single quad-core processor.
Here is the SQL statement.
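It ran along these lines; the table and column names here are invented stand-ins for our visit and user agent tables:

    DECLARE @BatchSize INT;
    SET @BatchSize = 500000;

    WHILE 1 = 1
    BEGIN
        -- Flag one chunk of not-yet-flagged visits whose agent is a known non-human
        UPDATE TOP (@BatchSize) v
           SET IsBot = 1
          FROM dbo.SiteVisit AS v
          JOIN dbo.UserAgent AS ua
            ON ua.AgentString = v.UserAgentString
         WHERE v.IsBot = 0        -- only untouched rows, so each pass makes progress
           AND ua.IsHuman = 0;

        IF @@ROWCOUNT = 0 BREAK;  -- nothing left to flag

        -- Each chunk commits on its own, so the log can be kept in check
        -- (CHECKPOINT in SIMPLE recovery, or log backups in FULL)
        CHECKPOINT;
    END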

So you can see from the code I was chunking in 500,000-row segments. Tune this to your requirements; I guessed and struck it lucky.

For those interested: yes, I did take steps to prevent this from occurring again. I created an insert trigger that cross-references the User Agent table, flagging the record. A job then does the clean-up, periodically archiving the unwanted data into a redundant database for historical reference.

SQL Server exclusive access: prevent connections to a DB except your own

I'm about to embark on a rather large upgrade of our live webserver. It includes both DB and ASP.NET updates.
For this I am going to need exclusive access to the DB while I perform the required operations. After trawling the internet for a few minutes I stumbled across a great article with some SQL script.
Here it is; the only change I made was so that when you run the procedure, if it encounters your connection's username in the list, it will ignore it.
So if you are connecting to the SQL instance via Management Studio on your terminal, or remotely but using your login credentials, you will be denying access to the DB for everyone except your connection's username.
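The heart of it is something like this sketch (invented database name, not the original script verbatim); it walks the connections to the DB and kills each one that doesn't belong to your login:

    DECLARE @Spid INT, @Sql NVARCHAR(50);

    DECLARE spid_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT spid
          FROM master.dbo.sysprocesses
         WHERE dbid = DB_ID(N'MyLiveDatabase')  -- invented DB name
           AND loginame <> SUSER_SNAME()        -- skip everything under our login
           AND spid <> @@SPID;                  -- and never kill ourselves

    OPEN spid_cursor;
    FETCH NEXT FROM spid_cursor INTO @Spid;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- KILL does not take a variable, so it has to go through dynamic SQL
        SET @Sql = N'KILL ' + CAST(@Spid AS NVARCHAR(10));
        EXEC (@Sql);
        FETCH NEXT FROM spid_cursor INTO @Spid;
    END
    CLOSE spid_cursor;
    DEALLOCATE spid_cursor;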

Originally posted by Tony Rogerson
http://sqlblogcasts.com/blogs/tonyrogerson/

SQL index on nvarchar

In SQL 2005, in order to be able to index a field, the field length cannot exceed 900 bytes.
With this in mind, you can have an index on something like a varchar(900) or an nvarchar(450).
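For example, with an invented table (nvarchar stores two bytes per character, so nvarchar(450) lands exactly on the 900-byte limit):

    CREATE TABLE dbo.UserAgent
    (
        UserAgentId INT IDENTITY(1, 1) PRIMARY KEY,
        AgentString NVARCHAR(450) NOT NULL,   -- 900 bytes: the widest indexable nvarchar key
        IsHuman     BIT NOT NULL DEFAULT (1)
    );

    -- A wider key (e.g. nvarchar(451)) would draw a 900-byte warning here
    -- and fail whenever an over-length value was inserted
    CREATE NONCLUSTERED INDEX IX_UserAgent_AgentString
        ON dbo.UserAgent (AgentString);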
This greatly assisted me on my quest to improve performance on our global www platform.
We record every visit to our website, which, as you can imagine, was growing our DB size rapidly.
I ended up using two locations to download browser user agent information
http://www.user-agents.org/allagents.xml
http://www.botsvsbrowsers.com/category/1/index.html
and populated a table in the database with this, refreshing the table twice a day. I then created an insert trigger on the table that records each visit; this trigger cross-references the user agent table to identify whether the browsing instance was human or a bot of some description, raising a bit-field flag if it was not human.
As you can imagine, on a very busy website it is critical that this happens as fast as possible. Hence applying an index to the fields under comparison is ideal and very fast.
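The trigger looked something like this sketch, reusing the invented names from above (dbo.SiteVisit stands in for the table that records each visit, with an IsBot flag defaulting to 0):

    CREATE TRIGGER dbo.trg_SiteVisit_FlagBots
    ON dbo.SiteVisit
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Flag the just-inserted visits whose agent is a known non-human
        UPDATE v
           SET IsBot = 1
          FROM dbo.SiteVisit AS v
          JOIN inserted AS i
            ON i.SiteVisitId = v.SiteVisitId
          JOIN dbo.UserAgent AS ua
            ON ua.AgentString = i.UserAgentString  -- the indexed comparison
         WHERE ua.IsHuman = 0;
    END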

To edit the field in the DB and bring it down to the correct size, a simple Transact-SQL statement was needed.
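Something like this, again with the invented names (the ALTER fails if over-length data is still present, hence the truncate first):

    -- Trim any existing values that exceed the new size
    UPDATE dbo.SiteVisit
       SET UserAgentString = LEFT(UserAgentString, 450)
     WHERE LEN(UserAgentString) > 450;

    -- Then shrink the column to the indexable width
    ALTER TABLE dbo.SiteVisit
        ALTER COLUMN UserAgentString NVARCHAR(450) NOT NULL;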

The website, when it pulls the HTTP user agent out of the web request, truncates it down to 450 characters so as to avoid any splat in the DB when the data layer tries to insert the data.

Now that the non-human traffic has been flagged, in our fully normalised, referential database we can archive the data and delete the records from the production servers very rapidly, saving space and improving performance.