MCTS 70-433 – Implementing programming objects

Implementing programming objects

Create and alter stored procedures

Stored procedures are an abstraction layer for database objects and data. Stored procedures can contain T-SQL or CLR (Common Language Runtime) code; the only commands that aren't valid inside them are: USE <database>; CREATE [AGGREGATE | RULE | DEFAULT | SCHEMA | FUNCTION | TRIGGER | PROCEDURE | VIEW]; ALTER [FUNCTION | TRIGGER | PROCEDURE | VIEW]; and SET [SHOWPLAN_TEXT | SHOWPLAN_XML | PARSEONLY].

Stored procedures have an ENCRYPTION option; this only obfuscates the procedure definition, it doesn't encrypt the data.

Stored procedures can contain variables, parameterisation, error handling and control flow constructs (IF…ELSE, etc.).

The batch delimiter for T-SQL is GO

You define a variable in T-SQL using DECLARE followed by the @ symbol and a name, then its data type, e.g. DECLARE @var1 int; if you wish you can assign the variable a value at this point by appending = <value>. Global variables are accessed using @@<name>; global variables cannot be created, modified or deleted.

Multiple variables can be defined with a single DECLARE keyword; the only exception to this is when you declare a table variable, which needs its own DECLARE:

DECLARE @tablevarname table
(
[column name] [data type] [column options]
)

When returning values from a SELECT statement into a variable, ensure that the return value is scalar, i.e. only one result.

When performing calculations on variables you can now (SQL Server 2008 onwards) use compound assignment operators, e.g. SET @var [+=|-=|*=|/=] <value>.
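
A minimal sketch of variable declaration, assignment and compound assignment; the variable names and values are arbitrary:

```sql
-- Declare several variables in a single DECLARE statement;
-- @Qty is assigned a value at the point of declaration.
DECLARE @Qty int = 10,
        @Price money,
        @Total money;

SET @Price = 2.50;

-- Compound assignment operators (SQL Server 2008 onwards).
SET @Qty += 5;                 -- @Qty is now 15
SET @Total = @Qty * @Price;    -- 37.50

PRINT @Total;
```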

To create a stored procedure, use the following syntax:

CREATE PROCEDURE procedure_name 
@param1 param1_data_type, 
@param2 param2_data_type OUTPUT
AS
-- Procedure code

Defining a parameter with OUTPUT changes the parameter from an input into a return variable.
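
A sketch of a procedure using an OUTPUT parameter; the procedure name and dbo.Orders table are hypothetical:

```sql
CREATE PROCEDURE dbo.usp_GetOrderCount
    @CustomerID int,
    @OrderCount int OUTPUT
AS
BEGIN
    SELECT @OrderCount = COUNT(*)
    FROM dbo.Orders                -- hypothetical table
    WHERE CustomerID = @CustomerID;
END;
GO

-- The caller must also specify OUTPUT to receive the value.
DECLARE @Count int;
EXEC dbo.usp_GetOrderCount @CustomerID = 1, @OrderCount = @Count OUTPUT;
PRINT @Count;
```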

The control flow constructs available in T-SQL are:

RETURN – passes an integer status value back to the caller and exits the procedure.

IF…ELSE – conditional logic; if you need to execute multiple statements, enclose them in BEGIN…END.

BEGIN…END – as above.

WHILE – loop whilst a condition is true.

BREAK/CONTINUE – BREAK exits the loop; CONTINUE skips the remainder of the current iteration and re-evaluates the loop condition.

WAITFOR – has three permutations: DELAY, TIME and RECEIVE. RECEIVE works in conjunction with Service Broker; DELAY waits for a specified duration and TIME waits until a specific time of day.

GOTO – used to skip to specific parts of the code block; its use is discouraged.
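
The loop constructs above can be sketched as follows; the numbers are arbitrary:

```sql
DECLARE @i int = 0;

WHILE @i < 10
BEGIN
    SET @i += 1;

    IF @i = 3
        CONTINUE;              -- skip the rest of this iteration

    IF @i = 7
        BREAK;                 -- exit the loop entirely

    PRINT @i;                  -- prints 1, 2, 4, 5, 6
END;

WAITFOR DELAY '00:00:01';      -- DELAY takes a duration; TIME takes a clock time
```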

Stored procedures and functions have an EXECUTE AS execution context, which has three options:

LOGIN – executes under the context of the specified login.

USER – executes under the context of a specific database user; the user cannot be a ROLE, GROUP, CERTIFICATE or ASYMMETRIC KEY.

CALLER – executes under the context of the caller of the routine. The stand-alone EXECUTE AS statement has two further options, NO REVERT and COOKIE INTO; NO REVERT doesn't allow the execution context to be changed back, whilst COOKIE INTO allows the security context to be returned to the previous one later.

Cursors are used in SQL Server to return one row at a time; if you find yourself using a cursor to make the same change to multiple rows, consider an alternative set-based method, e.g. UPDATE table SET Column = <value>.

Cursors have five components: DECLARE defines the cursor with a SELECT statement; OPEN executes that SELECT statement; FETCH retrieves one row at a time; CLOSE closes the cursor; and DEALLOCATE removes it from memory. If the cursor is inside a stored procedure then exiting the procedure will CLOSE and DEALLOCATE the cursor.
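
The five components can be sketched as a complete cursor lifecycle; dbo.Orders is a hypothetical table:

```sql
DECLARE @OrderID int;

DECLARE order_cursor CURSOR FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders;

OPEN order_cursor;                       -- executes the SELECT
FETCH NEXT FROM order_cursor INTO @OrderID;

WHILE @@FETCH_STATUS = 0                 -- 0 = row fetched successfully
BEGIN
    PRINT @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;

CLOSE order_cursor;
DEALLOCATE order_cursor;
```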

The cursor types are FAST_FORWARD, STATIC, KEYSET and DYNAMIC. FAST_FORWARD is the fastest-performing cursor and allows forward scrolls only. STATIC stores the cursor results in a temporary table within tempdb; this set cannot be updated. KEYSET stores the keys that uniquely identify each row in a temporary table within tempdb; as you scroll, the non-key columns are retrieved from the underlying tables. If a key for a row was retrieved and the row has subsequently been deleted, @@FETCH_STATUS will return -2. The DYNAMIC type reflects all changes to the underlying result set, including new rows, as it scrolls.

To UPDATE or DELETE the current row the cursor is positioned on, use the following clause in the UPDATE or DELETE statement: WHERE CURRENT OF <cursor name>.

The cursor has the following self-explanatory FETCH options: FETCH [FIRST | NEXT | LAST | PRIOR | ABSOLUTE n | RELATIVE n].

The concurrency options are READ_ONLY, i.e. the cursor is not updatable; SCROLL_LOCKS, i.e. the current row being accessed by the cursor is locked; and OPTIMISTIC, i.e. a timestamp or checksum of the row is used and, if the underlying data has changed since the cursor read the row, any modification will fail.

Compilations and recompilations are how SQL Server compiles and stores query and execution plans. A query plan is reusable and is more efficient if generated from a stored procedure; an execution plan is not reusable. Stored procedures with multiple code paths should probably use the RECOMPILE option to be more efficient; alternatively, create a stored procedure for each code path.

Create and alter user-defined functions (UDFs)

In order to create a function your user must have the CREATE FUNCTION privilege in the database concerned. User-defined functions cannot change table data, create or access temporary tables, execute dynamic code, or change database or instance state.

Functions can return scalar, inline table or multi-statement table values; all functions must have a RETURN statement, and code blocks must be defined within BEGIN…END keywords. The exception to this rule is the inline table-valued function.

User-defined functions can use the SCHEMABINDING option; this option prevents the objects the function depends on from being altered or dropped.

By default a function will execute regardless of whether an input value is NULL or not; to skip execution and return NULL immediately for NULL inputs, use WITH RETURNS NULL ON NULL INPUT.

Data is retrieved from a function using the SELECT statement; it is not advisable though to include a function within the WHERE clause. 

Functions are good at extending the static CHECK and DEFAULT constraints built into SQL Server, e.g. you can create a CHECK constraint that calls a function to look up another table to validate the input; by default constraints cannot do this.

Functions can also be nested; the return of the inner function must be the same as the input of the outer function though.

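A sketch of a scalar function using the options above; the function name and pricing logic are hypothetical:

```sql
CREATE FUNCTION dbo.ufn_NetPrice
(
    @Price money,
    @Discount decimal(4,2)
)
RETURNS money
WITH SCHEMABINDING, RETURNS NULL ON NULL INPUT
AS
BEGIN
    RETURN @Price * (1 - @Discount);
END;
GO

SELECT dbo.ufn_NetPrice(100, 0.10);   -- 90.00
SELECT dbo.ufn_NetPrice(100, NULL);   -- NULL, without executing the body
```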

Create and alter DML and DDL triggers

There are three types of triggers: DML triggers (INSERT, UPDATE and DELETE), DDL triggers (DROP TABLE, etc.) and logon triggers.

DML triggers fire when you insert, update or delete a record in a table; when a DML trigger fires you have access to the inserted and deleted pseudo-tables, which hold the new and old versions of the affected rows respectively.

The AFTER option defines that the trigger will only fire once the insert, update or delete statement has passed all constraints.

The INSTEAD OF option defines trigger code to be executed instead of the insert, update or delete statement.

The NOT FOR REPLICATION option stops triggers firing on a subscriber node when the replication engine is making changes.
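
A sketch of an AFTER trigger using the deleted pseudo-table and the NOT FOR REPLICATION option; the table names are hypothetical:

```sql
CREATE TRIGGER trg_Orders_Delete
ON dbo.Orders
AFTER DELETE
NOT FOR REPLICATION
AS
BEGIN
    -- Audit every deleted row; dbo.OrderAudit is a hypothetical audit table.
    INSERT INTO dbo.OrderAudit (OrderID, DeletedAt)
    SELECT OrderID, GETDATE()
    FROM deleted;
END;
```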

DDL triggers fire when a DDL statement is executed; logon triggers fire when a logon is initiated.

DDL triggers can be scoped at the instance or database level with ON ALL SERVER and ON DATABASE respectively. Instance level triggers can be found at: Instance > Server objects > Triggers and database level triggers can be found at: Instance > Database > Programmability > Database Triggers.

DDL triggers run in the context of a transaction and thus can be rolled back; this is good for catching attempts to drop a table or for restricting logins. The logon trigger fires after the user has successfully authenticated but before the session is established.

The DDL trigger has access to the EVENTDATA function; this function returns XML describing the event that fired the trigger.

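A sketch of a database-scoped DDL trigger reading the EVENTDATA() XML; the trigger name is hypothetical:

```sql
CREATE TRIGGER trg_BlockDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    DECLARE @data xml = EVENTDATA();

    -- Pull the offending statement out of the event XML.
    PRINT @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
                      'nvarchar(max)');

    RAISERROR ('Dropping tables is not permitted.', 16, 1);
    ROLLBACK;      -- the DDL statement is undone
END;
```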

Create and deploy CLR-based objects

CLR basics

1. The SQL instance must be configured to run CLR code

The above is achieved by running:

EXEC sp_configure 'clr enabled', 1;
reconfigure;

2. CLR code must be written in a .NET language, e.g. C# or VB.NET

something that has a .vb or .cs file extension.

3. The CLR code must be compiled, e.g. into a DLL

You can compile a DLL using vbc.exe or csc.exe

[vbc.exe|csc.exe] /target:library /out:CLRcode.dll codefile.[vb|cs]

4. The CLR assembly must be loaded into SQL server

CREATE ASSEMBLY [assembly name]
FROM 'path to the DLL created in step 3';

5. A database object (Table constraint, stored procedure, trigger etc.) should reference the assembly

EXTERNAL NAME [assembly name]."Namespace.class".[subroutine|function]

CLR code runs under the following permission sets:

SAFE – the default permission set. The assembly can only access objects in the database to which it is deployed and perform calculations.

EXTERNAL_ACCESS – the assembly is allowed to access resources from other SQL instances, the file system and other network resources such as a web service.

UNSAFE – The assembly is allowed to execute unmanaged code such as Win32 API or a COM component.
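
Steps 4 and 5 together might look like this; the assembly, namespace and method names are hypothetical:

```sql
CREATE ASSEMBLY CLRcode
FROM 'C:\CLR\CLRcode.dll'
WITH PERMISSION_SET = SAFE;    -- SAFE is also the default if omitted
GO

-- Map a T-SQL procedure onto a static method inside the assembly.
CREATE PROCEDURE dbo.usp_ClrHello
AS EXTERNAL NAME CLRcode.[MyNamespace.StoredProcedures].HelloWorld;
```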

Implement error handling

Error messages within SQL Server have three components: error number, error severity and error message. Error numbers range from 1 to 49999; error severity ranges from 0 to 25 (19 and above are written to the error log, 20-25 are considered fatal, and 19-25 can only be raised by members of the sysadmin fixed role). Error messages can be 255 Unicode characters long and allow substitution parameters to be passed in.

Custom error messages must be numbered 50001 and above; custom errors can be localised, but an English version must exist.

Custom messages can be raised via RAISERROR ({msg_id | msg_str | @local_variable}, severity, state [, arguments]) [WITH LOG | NOWAIT].

Within a CATCH block you have access to the following error functions: ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE() and ERROR_LINE().
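
A sketch showing the error functions inside a CATCH block, with RAISERROR re-raising the error to the caller:

```sql
BEGIN TRY
    SELECT 1 / 0;    -- force a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT  ERROR_NUMBER()    AS ErrNumber,
            ERROR_SEVERITY()  AS ErrSeverity,
            ERROR_STATE()     AS ErrState,
            ERROR_PROCEDURE() AS ErrProcedure,   -- NULL outside a procedure
            ERROR_LINE()      AS ErrLine,
            ERROR_MESSAGE()   AS ErrMessage;

    -- Re-raise the error number to the calling application.
    DECLARE @num int = ERROR_NUMBER();
    RAISERROR ('Caught error %d in the CATCH block.', 16, 1, @num);
END CATCH;
```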

sp_addmessage can be used to add custom errors to sys.messages; the syntax is EXEC sp_addmessage @msgnum = msg_id, @severity = severity, @msgtext = 'msg'; e.g. EXEC sp_addmessage 50001, 16, N'This is a test';

Messages can be viewed using SELECT * FROM sys.messages; custom messages can be modified using sp_altermessage and removed using sp_dropmessage.

Error handling in SQL server 2008 now uses TRY…CATCH statement blocks. The TRY block contains the code you want to test for errors and the CATCH block handles the error. Calling the RAISERROR statement from within the TRY block will move the control to the CATCH block; RAISERROR in the CATCH block returns an error to the application.

The functions @@TRANCOUNT and XACT_STATE can give you an idea of how many transactions are outstanding and what state they are in: XACT_STATE returns 1 if the transaction can be committed or rolled back, -1 if it is doomed and can only be rolled back, and 0 if there is no open transaction.

Managing Transactions

Atomic – A transaction must complete in its entirety or the transaction is rolled back.

Consistent – A transaction cannot break the rules of the database, i.e. an update to a record cannot violate the database's constraints.

Isolated – A transaction must run in isolation from other transactions, i.e. other transactions must not be able to view the changes until the transaction is complete.

Durable – Transactions must survive hardware failure, e.g. all transactions are written to the transaction log before they are committed by the database software.

By default SQL Server treats each INSERT, UPDATE and DELETE as an individual, autocommitted transaction which offers no rollback functionality. When you roll back a transaction the identity seed is not reset; the next transaction will increment by whatever identity increment is defined.

Implicit transactions can be enabled by specifying SET IMPLICIT_TRANSACTIONS ON per connection or by modifying the default server connection properties.

Explicit transactions start when you use BEGIN TRANSACTION and end when you use COMMIT TRANSACTION or ROLLBACK TRANSACTION. You can define transaction savepoints to control whereabouts the database engine rolls back to. You can use SELECT @@TRANCOUNT to work out how many outstanding transactions are in progress.
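
An explicit transaction with a savepoint might be sketched like this; dbo.Orders is a hypothetical table:

```sql
BEGIN TRANSACTION;

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;

SAVE TRANSACTION AfterShip;            -- define a savepoint

DELETE FROM dbo.Orders WHERE OrderID = 2;

-- Undo only the work done since the savepoint; the UPDATE survives.
ROLLBACK TRANSACTION AfterShip;

SELECT @@TRANCOUNT;                    -- still 1: the outer transaction is open

COMMIT TRANSACTION;
```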

Pessimistic locking – this approach assumes that you are going to be updating data and thus locks the record.

Optimistic locking – this approach assumes you will not be updating any data, or that a certain level of inconsistent data is acceptable. This method doesn't use read locks, so concurrency is better and performance is improved.

The following resources are affected by locking: rows, pages, indexes, tables and databases and the following locking modes exist:

Shared lock – placed on resources for SELECT operations; not compatible with exclusive locks but compatible with other shared locks. If the isolation level is set to REPEATABLE READ or higher, or a locking hint is used, then the lock stays in place for the duration of the transaction; otherwise it is released when the SELECT operation is complete.

Update lock – placed on resources where a shared lock is required but an exclusive lock is anticipated. The update lock is upgraded to exclusive when the data is modified.

Exclusive lock – not compatible with any other lock type. Only the NOLOCK hint and the READ UNCOMMITTED isolation level override exclusive lock behaviour.

Intent (IS, IX, SIX) – improves performance and locking efficiency, as intent locks are placed on higher-level resources such as a table until it is necessary to place shared or exclusive locks on lower-level resources such as pages.

Schema (Sch-M, Sch-S) – Sch-M locks are placed on objects when schema modifications are being made such as adding a new column to a table. Sch-S locks are placed on objects whilst queries are being compiled or executed. The two locks are not compatible with each other.

Bulk Update – placed on table resources during bulk inserts.

Key Range – placed on a range of rows to protect against phantom insertions into, and deletions from, a record set being accessed by a transaction.

Deadlocks: by default a transaction in SQL Server 2008 will wait an indefinite amount of time for a resource to become available, unless it recognises that a deadlock situation has occurred. In this scenario SQL Server will choose a deadlock victim, roll back that transaction and issue a 1205 error.

The locking status within SQL server can be viewed via perfmon (lock wait times, locks per second), SQL profiler, sys.dm_tran_locks or activity monitor.

The following resource types are visible when you run SELECT resource_type, request_mode, request_type, request_owner_type FROM sys.dm_tran_locks:

Row Identifier (RID) – lock on a single row
Key – lock on a range of keys in an index
Page – lock on an 8 KB page
Extent – lock on a 64KB extent
HoBT – lock on a Heap or Balanced tree
Table – lock on data and index pages
File – lock on a database file
Application – lock on an application
Metadata – lock on metadata
Allocation Unit – lock on single allocation unit
Database – lock on an entire database

The following transaction isolation levels are available in SQL Server 2008:

READ UNCOMMITTED – allows reads of rows that have been updated by a transaction before those rows have been committed, i.e. dirty reads. Dirty reads can contain data that may be rolled back and thus no longer exist.

READ COMMITTED – the default level; prevents dirty reads but still allows non-repeatable reads and phantom reads within a transaction.

REPEATABLE READ – doesn't allow transactions to read uncommitted data, and holds shared locks so that rows already read cannot be changed by other transactions; phantom rows can still appear.

SNAPSHOT – requires ALLOW_SNAPSHOT_ISOLATION to be ON. This isolation level uses row versions (snapshots) and doesn't hold any locks on the resources.

SERIALIZABLE – doesn’t allow data to be read that has been modified but not committed. Protects against phantom reads but causes the highest level of blocking and contention.

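A sketch of the isolation-level behaviour above across two sessions; dbo.Orders is a hypothetical table:

```sql
-- Session 1: modify a row and leave the transaction open,
-- holding an exclusive lock on the row.
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Pending' WHERE OrderID = 1;

-- Session 2: READ UNCOMMITTED permits a dirty read of the
-- uncommitted change above.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Status FROM dbo.Orders WHERE OrderID = 1;

-- Under READ COMMITTED the same SELECT would block until
-- session 1 commits or rolls back.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```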
