1 (edited by Stuart 13-08-2008 09:26:02)

Re: Timeouts

I have recently done some work on the DeleteSurvey addin and discovered some things about handling timeouts that I thought would be worth recording.

This addin works by using the DATABASE_RELATIONS table to find the tables that are related to SURVEY, turning on the CASCADING DELETE constraint in these relations, deleting the specified SURVEY (relying on the cascading deletes to remove the related SURVEY_EVENT, SAMPLE, etc. rows) and then turning the cascading deletes off again. The whole thing is wrapped up in transactions so that either the whole lot works or no change is made:

BEGINTRANS
  turn on cascading deletes
  BEGINTRANS
     Delete the survey
  COMMIT
  turn off cascading delete
COMMIT
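
To give an idea of the DDL involved in the "turn on cascading deletes" step: SQL Server has no simple switch for enabling cascading deletes on an existing foreign key, so presumably each relation has to be dropped and re-created with ON DELETE CASCADE and then restored afterwards. The following is only a sketch - the constraint, table and column names are illustrative rather than the actual ones read from DATABASE_RELATIONS, and the survey key is made up:

-- sketch only: the SURVEY_EVENT -> SURVEY relation, with illustrative names
ALTER TABLE SURVEY_EVENT DROP CONSTRAINT FK_SURVEY_EVENT_SURVEY;
ALTER TABLE SURVEY_EVENT
  ADD CONSTRAINT FK_SURVEY_EVENT_SURVEY
  FOREIGN KEY (SURVEY_KEY) REFERENCES SURVEY (SURVEY_KEY)
  ON DELETE CASCADE;

-- the survey itself can now be deleted and the related rows go with it
DELETE FROM SURVEY WHERE SURVEY_KEY = 'ABC123';

-- finally the constraint is put back without the cascade
ALTER TABLE SURVEY_EVENT DROP CONSTRAINT FK_SURVEY_EVENT_SURVEY;
ALTER TABLE SURVEY_EVENT
  ADD CONSTRAINT FK_SURVEY_EVENT_SURVEY
  FOREIGN KEY (SURVEY_KEY) REFERENCES SURVEY (SURVEY_KEY);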

The problem was that it worked nearly all the time - but there were a few niggling cases when it didn't - and I couldn't track down why!

It turned out to be two different problems, one of which was that the delete operation itself was timing out.

The Delphi code for the addin uses ADO to connect to the database. The ADO Connection/Query objects have two timeout properties, ConnectionTimeout and CommandTimeout, which according to the documentation default to 30 seconds and can be set to 0 for 'no timeout'.

However, altering either or both of these settings had no effect in my tests - whatever they were set to in my code, it still timed out, as far as I could see, after about the default 30 seconds.

A lot of web searching showed that this is a common problem and that there are loads of suggested solutions - most of which didn't work for me! I finally found an obscure Microsoft Knowledge Base article which reveals that the CommandTimeout property of ADOConnection only has an effect if the CursorLocation property is also set to 'use server' (it defaults to 'use client').

So the code I came up with to execute a SQL statement is like this:

function DoSQL(const sSQL: string): Boolean;
// Executes a SQL statement that returns no rows.
// Needs the SysUtils, Dialogs and ADODB units.
var
  oQuery: TADOQuery;
begin
  oQuery := TADOQuery.Create(nil);
  try
    with oQuery do
    begin
      Connection := NBNData;
      // CommandTimeout is only honoured when the cursor is server-side
      CursorLocation := clUseServer;
      CommandTimeout := 0;   // 0 = no timeout
      SQL.Clear;
      SQL.Add(sSQL);
      try
        ExecSQL;
        Result := True;
      except
        on e: Exception do
        begin
          Result := False;
          MessageDlg(e.Message, mtError, [mbOK], 0);
        end;
      end;
      Close;
    end;
  finally
    oQuery.Free;
  end;
end;

This function executes a SQL statement that returns no result set (INSERT, UPDATE, DELETE, CREATE, ALTER, etc.), returns True if the statement succeeds or False if it does not, and shows the error message from the exception if it fails. NBNData is a global TADOConnection object initialised to connect to the Recorder 6 database via the Recorder interfaces.

This overcomes the timeout problem in my tests. The biggest survey I have tried deleting had 768,000 TAXON_OCCURRENCE rows and took about 8 minutes to delete on my machine. I suspect that a much bigger survey (say over 1 million records) would probably fail on SQL Server's 'remote query timeout' setting (the default for which is 600 seconds - i.e. 10 minutes). If you wanted to delete a survey of this size or bigger, you would probably need to change your SQL Server configuration to increase this timeout - but I haven't tested this.
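
If you do hit that limit, the server-wide setting can be changed with sp_configure (the value is in seconds and 0 means no timeout) - but, as above, this is untested:

EXEC sp_configure 'remote query timeout', 0;  -- 0 = no timeout (default is 600)
RECONFIGURE;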

2 (edited by Stuart 13-08-2008 10:08:20)

Re: Timeouts

Whilst we are on the subject: another common reason for DeleteSurvey to fall over is that the disk holding the log file runs out of space.

When you delete anything, SQL Server writes a log record for each deleted row to the transaction log file (NBNData_log.LDF). In the example above - deleting a SURVEY with 768,000 TAXON_OCCURRENCE rows - the log file ended up at several GB! (So, after you have checked that everything is OK, you should probably back up the transaction log - which frees the space inside it for reuse - and shrink the file if you need the disk space back.)
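
In T-SQL terms this is roughly as follows. It is only a sketch: it assumes the database is in the FULL recovery model, that the log's logical file name is NBNData_log (check with sp_helpfile), and the backup path is made up:

USE NBNData;
-- back up the transaction log so the space inside it can be reused
BACKUP LOG NBNData TO DISK = 'D:\Backups\NBNData_log.bak';
-- then shrink the physical file back down (target size in MB)
DBCC SHRINKFILE (NBNData_log, 500);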

On a network running a big SQL Server database it is recommended that the data file (.MDF) and log file (.LDF) are each located on their own dedicated physical disk drive, for both performance and data security reasons. These drives should have plenty of free space above your anticipated data growth. Since big hard disks are so cheap these days, this is easy to arrange and well worth doing. You should also have regular, scheduled SQL Server backups running - it is these (in particular the log backups) that keep the size of the log file under control.

On a standalone machine there are often problems because the default installation places the database files in your 'Program Files' directory. Many computers are set up with a relatively small C: partition for Windows and program installation, and the rest of the space (often massive these days!) in a D: partition. But the SQL Server installation creates all the database files in C:\Program Files\Microsoft SQL Server\... This can lead to problems with SQL Server's temporary database (tempdb) and/or the log files running out of disk space on C:.

The solution is straightforward (using tools in the excellent and free SQL Server Management Studio): detach the database, move the files where YOU want them and then attach them again. I usually put the Recorder database files (NBNData_Data.MDF and NBNData_log.LDF) in D:\Recorder 6\Database and the SQL Server temporary database (tempdb.mdf & templog.ldf) in its own directory (e.g. D:\SQLtempDB) - note that tempdb itself can't be detached, so it has to be re-pointed with ALTER DATABASE and a restart of the SQL Server service instead. The other system database files you will find in the default installation's Data directory don't usually get very big, so you can probably leave those where they are.
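
If you prefer to script the move rather than use the Management Studio dialogs, a rough T-SQL equivalent is below. It is only a sketch: it assumes the default Recorder 6 database name, the default tempdb logical file names (tempdev and templog) and the example paths above, and the physical files still have to be moved in Windows between the detach and the attach:

-- make sure nobody is connected to the database first
EXEC sp_detach_db 'NBNData';

-- ...move NBNData_Data.MDF and NBNData_log.LDF to D:\Recorder 6\Database...

EXEC sp_attach_db 'NBNData',
  'D:\Recorder 6\Database\NBNData_Data.MDF',
  'D:\Recorder 6\Database\NBNData_log.LDF';

-- tempdb: repoint the files and restart the SQL Server service;
-- the old tempdb files can then be deleted
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLtempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLtempDB\templog.ldf');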