
Wednesday, March 14, 2012

Entity Framework Code First - Visualizing Generated Model & Database

Aren't we always curious to find out how the model is generated from the entities we have defined in Code First? Since there are so many conventions involved, it always seems difficult to go to the database and inspect individual tables. Well, there are different ways to make it easier to understand the generated database, and even the resulting model, produced by the Code First entities. Let's discuss them.

This post discusses how we can get a complete picture of the related entities in a context, and of the resulting database, without having to dig through individual tables. As they say, a picture is worth a thousand words. So, definitely, this should make it easier to manage those entities, their relationships, and the resulting databases.
  1. Database Diagrams
  2. Visualizing the model using Entity Framework Power Tools
  3. Persisting the generated model as an *.edmx file using EdmxWriter
Database Diagrams
Most SQL client tools allow creating diagrams for existing tables. We just need to reverse engineer the database and select the tables we need. In the diagram, we can see the details of the selected tables, along with the foreign key relationships between them. SQL Server Management Studio also has such support: just right-click the Database Diagrams folder under your database in Object Explorer and select New Database Diagram.


You should see the following dialog listing all the tables in the database. Here you can select all the tables that you want to include in the diagram and click the Add button. For our database, we should see the following list of tables.


As we hit the button, a new database diagram is created and shown. It contains the tables selected in the previous step, along with the foreign key relationship details between them.


Visualizing Model using Entity Framework Power Tools
We can also visualize the expected model at design time using Entity Framework Power Tools. It is a Visual Studio extension that can be installed using the online extension download feature of Visual Studio, as follows:


After installation, if you right-click a class file in Solution Explorer, the following context menu is displayed, allowing a view of the expected model.


If the selected file contains a subclass of DbContext, a read-only model is generated, listing all the entities as they are expected to be generated at run-time.


Persisting Generated model as *.edmx file
We can also persist the generated model at run-time. Entity Framework has added a new type to the framework just to support this requirement, called EdmxWriter. It uses an XmlWriter to persist the generated model, as in the code below:
// XmlWriter and XmlWriterSettings live in the System.Xml namespace
using (var context = new InstituteEntities())
{
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.Indent = true;

    // Write the run-time model out as Model.edmx in the output directory
    using (XmlWriter writer = XmlWriter.Create(@"Model.edmx", settings))
    {
        EdmxWriter.WriteEdmx(context, writer);
    }
}
Here XmlWriter and XmlWriterSettings are from the System.Xml namespace; you will need to import that namespace in the class file before using them. After the above code executes, we should see the Model.edmx file created in the output directory.


We can view this file in Entity Designer in Visual Studio.

Saturday, January 10, 2009

Enabling CLR Integration

To enable CLR on your SQL Server, you first have to set up the server-level configuration: the 'clr enabled' option has to be set on the server.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO


TRUSTWORTHY Setting
To deploy CLR code that requires the EXTERNAL_ACCESS or UNSAFE permission set, the TRUSTWORTHY property has to be enabled for the database in which you want to host your CLR code. To enable it, an ALTER DATABASE statement is used:

ALTER DATABASE MYDB SET TRUSTWORTHY ON
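
Once CLR is enabled, the deployment itself is plain T-SQL. Here is a minimal sketch, assuming a hypothetical assembly MyClrCode.dll that contains a StoredProcedures class with a HelloWorld method:

-- Register the compiled .NET assembly (hypothetical path and names)
CREATE ASSEMBLY MyClrAssembly
FROM 'C:\Assemblies\MyClrCode.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Expose the CLR method as a regular T-SQL stored procedure
CREATE PROCEDURE dbo.HelloWorld
AS EXTERNAL NAME MyClrAssembly.StoredProcedures.HelloWorld;
GO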

Friday, January 9, 2009

Killing SQL Server Process with X Lock on a table

Ever been troubled by an X lock on your table which you desperately want to remove?

I was hit by this problem today. You can do the following in order to remove the lock from your table:

First we find out how many locks are on your table.

SELECT REQUEST_MODE, REQUEST_TYPE, REQUEST_SESSION_ID
FROM sys.dm_tran_locks
WHERE RESOURCE_TYPE = 'OBJECT'
AND RESOURCE_ASSOCIATED_ENTITY_ID = OBJECT_ID('TBL_PROD_TQ')


In the above example, the details of locks on the table TBL_PROD_TQ are queried. Along with the lock type, the session responsible for the lock is also displayed.

Now suppose you find an 'X' lock on your table. Using the session ID returned by the above query, the session can be killed. We assume that our session ID is 140.

KILL 140

If you want further details about your session, dm_exec_sessions can be used. To get the details of the session:

SELECT * FROM
sys.dm_exec_sessions
WHERE session_id = 140


To get the IP address of the party involved in the session, we can use the dynamic management view dm_exec_connections:

SELECT * FROM
sys.dm_exec_connections
WHERE session_id = 140

Tuesday, October 14, 2008

Online indexes in SQL Server

Did you know that, normally, when you create an index on an existing table, you cannot query or update the table while the index is being created? This was always the case before SQL Server 2005. With SQL Server 2005, Microsoft introduced ONLINE indexes. Be aware that online does not mean faster index creation; quite the contrary, it is slower than its offline counterpart.

In order to allow modifying and querying data during index creation, SQL Server maintains versions of rows in a version store. You can check the size of the version store using the Version Store Size (KB) counter in Performance Monitor.
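
The same counter is also exposed through a dynamic management view, so it can be read with plain T-SQL. A small sketch, assuming the counter name on your build matches:

-- Version store size as reported by the performance-counter DMV
SELECT object_name, counter_name, cntr_value AS size_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Version Store Size (KB)';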

Now you have two options for creating indexes. They are as follows:
1. Online
2. Offline

It must be remembered that online index operations are only available in the Enterprise edition of SQL Server.

With offline index creation, no DML operation is allowed. This is enforced by acquiring an exclusive lock on the table or indexed view, so for the whole duration of the index build, no DML operation (including SELECT) is allowed on the object.
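
The ONLINE option is specified in the index DDL itself. A minimal sketch, assuming a hypothetical table dbo.MyTable with a column MyColumn:

-- Readers and writers can keep using the table while the index is built
CREATE INDEX IX_MyTable_MyColumn
ON dbo.MyTable (MyColumn)
WITH (ONLINE = ON);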

There are three phases of creation of an online index. They are as follows:

1. Preparation: In the preparation phase, the following tasks are carried out:
a. Acquisition of locks, mainly S and IS locks, on the table.
b. Creation of metadata for the index.
c. Recompilation of DML plans by incrementing the version number.

2. Build

3. Final

For creating an index, there must be sufficient temporary space available on disk. This space is required for sorting and other intermediate tasks. Extra disk space is used when SQL Server does not find the necessary space in primary memory to hold the temporary data. When SQL Server uses disk space, it may acquire the space from the user database if the SORT_IN_TEMPDB option is not set for the index creation or modification. There are certain advantages to using tempdb for this purpose: with this option, more contiguous space is allocated for the index, which makes queries on the index faster. Additionally, it uses the tempdb transaction log.

When creating an index, it must also be made sure that the transaction log has sufficient disk space, as index creation produces large data loads which might fill the transaction log. It is best to change the recovery model to SIMPLE or BULK_LOGGED before starting the index operation.

If an index is fragmented, it is either reorganized or rebuilt with the ALTER INDEX REORGANIZE or ALTER INDEX REBUILD command. It must be remembered that reorganizing an index is always an online operation, but rebuilding an index can be online or offline. For a rebuild, the index is dropped and a new index is created.
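
A short sketch of the two maintenance options, assuming a hypothetical index IX_MyIndex on dbo.MyTable:

-- Reorganize: always an online operation
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- Rebuild: offline by default; online on request (Enterprise edition)
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);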

Index creation can also be controlled further by providing a hint for the maximum degree of parallelism in the CREATE INDEX or ALTER INDEX statement. But keep in mind that as the degree of parallelism increases, the memory requirement also increases, so make sure that you have enough memory available.

CREATE INDEX IX_MyIndex
ON MyDB.MyTable (MyID)
WITH (MAXDOP=8);


Note: The MAXDOP option cannot be specified in the ALTER INDEX REORGANIZE statement; reorganizing is always a single-threaded operation. You can also set the max degree of parallelism with sp_configure, which applies the parallelism setting to all operations of the server instance.

Monday, September 29, 2008

SSIS Package Deployment Utility

For deploying packages created by business intelligence developers in SQL Server Integration Services, Business Intelligence Development Studio provides the Package Deployment Utility. This deployment utility creates a deployment bundle by means of which you can deploy your package to:

1. File System
2. A SQL Server instance

Just open the properties of your project and go to the Deployment Utility tab.



Set the "Create Deployment Utility" as "True" and specify the "Deployment Path".

As soon as you build your project, the deployment utility is created in the folder specified above, along with the package file. The file type of the deployment utility is "Integration Services Deployment Manifest", and the extension of the deployment package is "*.SSISDeploymentManifest".

When you run this manifest file, the package deployment wizard is started, which helps in deploying the package.



As discussed above, you can also specify the deployment destination for your SSIS package.



If you choose to install to the file system, you just have to specify the destination folder and finish the wizard. If you choose otherwise and install to a SQL Server instance, then you have to specify the SQL Server instance in which you want to install the package.

Friday, September 5, 2008

Resource Governor (SQL Server 2008)

I have written an article about Resource Governor, a feature introduced in SQL Server 2008.

Click: SQL Server 2008 Resource Governor

Thursday, September 4, 2008

T-SQL Debugger (SQL Server 2008)

Gone are the days in which you could not debug your code in SQL Server Management Studio. The only option SQL Server developers had was to add their stored procedures and other programming objects to a Visual Studio project and debug the code from the Visual Studio IDE. With SQL Server 2008, Microsoft has included a full-fledged debugger in SQL Server Management Studio.

Like other debugging tools, we can add breakpoints in our T-SQL code. Pressing F9 adds a breakpoint, as is the standard in other Microsoft tools.

There are different windows available for debugging the code. The windows include:
1. 'Locals'
2. 'Call Stack'
3. 'Output'
4. 'Command Window'
5. 'Threads'
6. 'Breakpoints'
7. 'Watch'

You can 'Step Into', 'Step Out' or 'Step Over' the code, with the same shortcut keys as in other Microsoft tools.

The 'Quick Watch' feature is also available. With this feature, you can evaluate any expression in the context of the currently executing code.



You can also toggle, delete or disable breakpoints.

Enjoy this feature!

SQL Server Agent Proxy

What would you have to do if you wanted a job step to execute with a different credential than the SQL Agent service account? The answer is a SQL Server Agent proxy. I have written a useful article about SQL Server Agent proxies on CodeProject. You can access the article at the following link.
Click: SQL Server Agent Proxy

Saturday, August 23, 2008

Google Custom Search (SQL Server)

Whenever you need any information, you start Google, type the term, and hit Enter. The query may be a dictionary request, an engineering problem, or anything else in the world.

Having this information at your fingertips is very good, as you can search the world very easily. But finding the relevant information among the results is very difficult. We need a way to search only the specific sites specializing in a specific field. Google introduced a Custom Search option a few years back. Using that provision, I have created a Custom Search for Microsoft SQL Server.

Google Custom Search

Please visit the above search engine to query your specific terms.

Friday, August 22, 2008

Performance Improvement of SQL Server

In the very early ages we used to say that "information is virtue"; then it changed to "information is power". In the present age of information technology, information is not power any more, because information is everywhere. What matters is the correct and timely use of it. Take an example: there are two recruitment agencies, both with countless resumes of job seekers in their databases. An organization approaches both of them and asks for a list of candidates matching particular criteria. The firm with the faster mechanism to fetch the information from its database will get the contract. So in the present age, only those organizations that have faster access to information will survive. This is a "survival of the fastest" scenario for an organizational market.

Today I want to discuss several things you can do to improve the performance of your SQL Server based systems.

Hardware:

Use Faster Storage Technologies:
It is true that by writing efficient code we can improve performance. But this code has to run on hardware, so by using efficient hardware we can improve the performance of our systems further. These days we use multi-core processors and large, fast primary memory, but our database systems depend on data stored on persistent storage drives, and a typical speed for such drives is 7,200 rpm. This proves to be a bottleneck. Being software people, we are generally not concerned with the speed of our hard drives. But just by using storage that is more responsive to the data requirements of our system, we can improve our systems many fold.

Having discussed this, a question pops up in our minds: can we increase the speed of our storage? The answer is yes, by using drives with higher speeds. Gone are the days in which organizations had to use magnetic storage drives because of their capacity, cost, and reliability. These days, organizations are moving towards solid state options for persistent storage. This storage has far better random access ability, so it can serve data much faster than its magnetic counterparts.

The only problem is that these drives are not marketed as widely as they deserve to be. But being enthusiasts, we should dig into the technology to get the best out of it.

Software:

a. Efficient use of indexes
First things first: an index does not mean improved performance for every operation. If a table sees mostly SELECT operations, use indexes on the relevant columns. But if there are more inserts / updates / deletes, don't consider adding one, because it would further slow down those operations. They would not only be slower but would also create fragmentation problems.

b. Computed Columns:
This is another decision you have to take based on your scenario. If you have more inserts / updates, then using computed columns would significantly reduce the performance of your system. If instead your system involves more SELECT operations, then the use of computed columns can prove to be a step towards drastic performance improvement for your system.

c. Selecting the data types of columns in a table
This aspect is sometimes ignored whenever improvement is discussed. Always consider the following points whenever you have to select a data type for one of your columns.

• You should have a very good reason not to use non-Unicode VARCHAR when you have to deal with character data. Valid reasons may be a requirement to store more than 8,000 bytes, or special Unicode characters that you have to support. Otherwise, you can do very well with this data type.

• Always remember that integer-based columns are much faster for sorting a result set when used in an ORDER BY clause.

• Never use TEXT or BIGINT data types unless you actually need the extra storage.

• Using an integer-based data type for a primary key improves performance compared to text-based or floating-point based ones.

d. Bulk Insert / Updates
Whenever bulk inserts or updates are involved, keeping the recovery model at FULL slows down the operation. What you can do is change the recovery model (e.g. to BULK_LOGGED), do your operation, and then change the model back to the previous one.
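
A minimal sketch of that switch, assuming a hypothetical database MyDB that normally runs under the FULL recovery model:

-- Switch to minimally logged mode for the duration of the bulk operation
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- ... perform the bulk insert / update here ...

-- Restore the original recovery model (and take a backup afterwards)
ALTER DATABASE MyDB SET RECOVERY FULL;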

Indexes can also delay these operations.

Always try to pass bulk data as XML to your stored procedures and use OPENXML for the bulk INSERT or UPDATE. The operation is then done through a single INSERT / UPDATE statement as a SET-based operation. Avoid using a DataAdapter for updates, even if you are using UpdateBatchSize with it, for these reasons: it is not SET based, and more than one UPDATE statement is still passed to SQL Server. Batching only decreases the round trips between your application and SQL Server; the number of UPDATE statements executed on the server is the same. UpdateBatchSize is also limited to a maximum value, whereas with the XML option you can send the complete data of a DataTable.

e. Parameterized queries
You might have studied that compiled queries run faster. This is because the DBMS does not have to create a plan for the query on each execution. Parameterized queries also help as a precaution against SQL injection attacks.
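
A minimal sketch of a parameterized query in T-SQL, assuming a hypothetical tblCustomer table; the plan can be reused across executions with different @id values:

DECLARE @sql NVARCHAR(200);
SET @sql = N'SELECT * FROM tblCustomer WHERE ID = @id';

-- The value is passed separately and never becomes part of the SQL text
EXEC sp_executesql @sql, N'@id INT', @id = 42;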

f. Stored procedures
Stored procedures also help in preventing SQL injection attacks. They decrease the round trips between the application and the database server as well, because many statements are combined in one procedure, and this bunch of statements can be run as a single transaction. There are also many other benefits, but the ones specified are enough to make stored procedures a first priority.

g. Stop the lavish use of explicit cursors
This has been said by so many people so many times that repeating it is hardly necessary, so I will just tell you my own example. A few years back I was given a stored procedure in which the previous developer had made lavish use of explicit cursors. Its execution time had risen to 40 minutes because of the increase of data in the database, and was expected to grow further as the data increased. By just replacing these cursors with SET-based operations, I was able to bring that execution time down to 3 seconds. This was a great achievement on my part. From this example alone, you can see that the use of cursors can be deadly for the performance of your system.
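
For illustration, a hypothetical sketch of the kind of rewrite involved; a row-by-row cursor update is replaced with one SET-based UPDATE (the table and column names are made up):

-- Instead of fetching each order in a cursor and updating it row by row,
-- recompute all totals in a single SET-based statement:
UPDATE o
SET o.Total = o.Quantity * p.UnitPrice
FROM tblOrders o
INNER JOIN tblProducts p ON p.ProductID = o.ProductID;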

Wednesday, August 20, 2008

IN Clause (SQL Server)

Today I want to discuss one thing about the IN clause in SQL Server.


SELECT * FROM tblCustomer
WHERE (ID, Name) IN (
    SELECT CustomerID, CustomerName FROM tblCustomerOrders
)


In the above statement, we wanted to list the details of customers who have submitted some orders. When you run this statement in SQL Server (with the tables tblCustomer and tblCustomerOrders already created), an error is generated. The reason is that SQL Server does not support more than one column in the IN clause.
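
As a workaround, the same multi-column test can be expressed with EXISTS. A minimal sketch against the same tables:

-- Correlate both columns explicitly instead of using a multi-column IN
SELECT c.*
FROM tblCustomer c
WHERE EXISTS (
    SELECT 1
    FROM tblCustomerOrders o
    WHERE o.CustomerID = c.ID
      AND o.CustomerName = c.Name
);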

Monday, August 18, 2008

Inline Initialization of Variables in SQL Server 2008

Before SQL Server 2008, variables could only be assigned a value using SELECT or SET statements; there was no way to assign a value to a variable at the time it was declared. With 2008, Microsoft has removed this shortcoming and introduced inline initialization, which can be done using a literal or any function.
E.g.

DECLARE @myINTVar INT = 3
DECLARE @myVARCHARVar VARCHAR = LEFT('Shujaat',1)
SELECT @myINTVar, @myVARCHARVar

Friday, August 1, 2008

Joining with a result set returned from a UDF (CROSS AND OUTER APPLY)

I have a function which returns a result set. Now I need to join the result set with a table in a SELECT query. How can I do that?

This question led me to a feature of T-SQL called APPLY. It comes in two flavors:
1. CROSS APPLY
2. OUTER APPLY

Both are used to join with the result set returned by a function. CROSS APPLY works like an INNER JOIN and OUTER APPLY works like a LEFT OUTER JOIN.
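
A minimal sketch, assuming a hypothetical table-valued function dbo.fn_GetCustomerOrders(@customerId) and a tblCustomer table:

-- CROSS APPLY: only customers for which the function returns rows
SELECT c.Name, o.OrderID, o.OrderDate
FROM tblCustomer c
CROSS APPLY dbo.fn_GetCustomerOrders(c.ID) AS o;

-- OUTER APPLY: all customers; order columns are NULL when no rows come back
SELECT c.Name, o.OrderID, o.OrderDate
FROM tblCustomer c
OUTER APPLY dbo.fn_GetCustomerOrders(c.ID) AS o;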

Thursday, July 31, 2008

SPARSE COLUMN and COLUMN SET

Another new feature coming with SQL Server 2008 is the SPARSE column. It provides optimized storage by reducing the space taken by nullable columns. But there is also a downside: making a column SPARSE adds some overhead when a non-NULL value is read from the column. A group of sparse columns can be set up as a COLUMN SET while creating a table; the column set allows all the sparse columns to be updated like a single XML column.
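
A minimal sketch of the syntax, using a hypothetical product table:

-- Color and Size consume no storage when NULL; AllProperties exposes
-- every sparse column of the row as a single XML fragment.
CREATE TABLE dbo.Product
(
    ProductID     INT PRIMARY KEY,
    Color         VARCHAR(20) SPARSE NULL,
    Size          VARCHAR(20) SPARSE NULL,
    AllProperties XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);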

Tuesday, July 29, 2008

IntelliSense support in SQL Server 2008

For those of you who are bored with the SQL Server Management Studio of SQL Server 2005, there is good news: the same tool has IntelliSense support in the 2008 version.


Congratulations!!


But this comes at some cost. If you open the Profiler, you will see extra statements being executed.

I just executed the statement SELECT * FROM SYS.DATABASES, but this came at the price of one more statement being executed on the server. This is not unique to SQL Server 2008; all software that supports IntelliSense has to query the database to get information about its objects.

So if you have many database developers, this might come as a cost to the server instance. But it should also help you reduce cost, as the database developers become more productive with the support of IntelliSense.

I support the inclusion of this feature and admire it.

Wednesday, July 23, 2008

Batch Update ADO .net OPENXML

I wanted to use batch updates in one of my projects, and I explored the following information. This might be of interest to some of you.

I thought that using the batch update option of the ADO.NET 2.0 command I could achieve this. It is applied to the rows in a DataSet which are updated using a DataAdapter object; in practice, one has to set the UpdateBatchSize property to a specific value. But this is not what it appears to be. When I started the Profiler, the trace showed that each statement was executed individually instead of as a single UPDATE / INSERT statement. When I studied the documentation, it became clear that Microsoft never claimed that a single statement would be executed; it says that a batch of statements is passed to SQL Server instead of individual statements. You can study the following article for further details:

http://msdn.microsoft.com/en-us/library/kbbwt18a(VS.80).aspx

But those who, like me, want a single UPDATE / INSERT statement to be executed instead have an alternate option: the OPENXML feature of SQL Server. All the rows are provided to a stored procedure as a single XML parameter. The parameter datatype is generally set as NTEXT. For further details, please read the following:

http://en.csharp-online.net/XML_and_ADO.NET%E2%80%94OPENXML
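
A hypothetical sketch of such a procedure (the table, column, and element names are made up); the whole DataTable arrives as one XML string and a single SET-based UPDATE is issued:

CREATE PROCEDURE dbo.UpdateCustomers
    @xml NTEXT
AS
BEGIN
    DECLARE @handle INT;

    -- Parse the XML once into an internal representation
    EXEC sp_xml_preparedocument @handle OUTPUT, @xml;

    -- Expected shape: <Customers><Customer ID=".." Name=".."/></Customers>
    UPDATE c
    SET c.Name = x.Name
    FROM tblCustomer c
    INNER JOIN OPENXML(@handle, '/Customers/Customer', 1)
         WITH (ID INT, Name VARCHAR(100)) AS x
         ON x.ID = c.ID;

    EXEC sp_xml_removedocument @handle;
END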

After using this option, when you start your Profiler trace, you should be satisfied to see only a single UPDATE statement being executed.

Are you happy now??

Wednesday, July 9, 2008

SSIS Utilities (DTSWizard, DTUtil)

Today I stumbled across two very good utilities shipped with SQL Server 2005. They are both SSIS based.

1. DTSWizard:
This is a wizard-based import / export utility. It creates an SSIS package, executes it, and can copy it to the file system or to a database instance. The SSIS package imports / exports data between many sources and SQL Server. I used it to create a package to export data to an Excel file, which it did successfully. I loved this utility.

You can start this utility by just typing DTSWIZARD at the command prompt.



2. DTUTIL:
This one is also related to SSIS: it is about copying / moving SSIS packages from / to SQL Server.

You can start this utility by typing DTUTIL at the command prompt. It is not wizard based, but it operates on existing SSIS packages.

Friday, May 2, 2008

DDL trigger for auditing DDL statements running on SQL Server

Today I discuss the DDL triggers available in SQL Server 2005. As in Oracle, these are executed when any DDL statement is executed, and the SQL statements can be logged from within the trigger.

For keeping a record, it is better to create a table:

CREATE TABLE [dbo].[DDL_OBJECT_VERSION](
    [EVENT_TYPE]    [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [POST_TIME]     [datetime] NULL,
    [LOGIN_NAME]    [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [USERNAME]      [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [DATABASE_NAME] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [SCHEMA_NAME]   [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [OBJECTNAME]    [nchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [OBJECT_TYPE]   [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [SQL_TEXT]      [varchar](max) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [SERVER_NAME]   [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]

The context information can be obtained from the EVENTDATA() function available in DDL triggers. The format of the XML returned by this function is as follows:


<EVENT_INSTANCE>
  <EventType>CREATE_TABLE</EventType>
  <PostTime>2005-07-30T10:48:52.537</PostTime>
  <SPID>55</SPID>
  <ServerName>TEST-SHUJAAT</ServerName>
  <LoginName>shujaat\Administrator</LoginName>
  <UserName>dbo</UserName>
  <DatabaseName>AdventureWorks</DatabaseName>
  <SchemaName>Person</SchemaName>
  <ObjectName>Address</ObjectName>
  <ObjectType>TABLE</ObjectType>
  <TSQLCommand>
    <SetOptions ANSI_NULL_DEFAULT="ON"
                ANSI_PADDING="ON"
                QUOTED_IDENTIFIER="ON"
                ENCRYPTED="FALSE" />
    <CommandText>
      CREATE TABLE [Person].[Address](
          [AddressID] [int] IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
          [AddressLine1] [nvarchar](60) NOT NULL,
          [AddressLine2] [nvarchar](60) NULL,
          [City] [nvarchar](30) NOT NULL,
          [StateProvinceID] [int] NOT NULL,
          [PostalCode] [nvarchar](15) NOT NULL,
          [rowguid] uniqueidentifier ROWGUIDCOL NOT NULL
              CONSTRAINT [DF_Address_rowguid] DEFAULT (NEWID()),
          [ModifiedDate] [datetime] NOT NULL
              CONSTRAINT [DF_Address_ModifiedDate] DEFAULT (GETDATE())
      ) ON [PRIMARY];
    </CommandText>
  </TSQLCommand>
</EVENT_INSTANCE>

You can access individual pieces of information with the XQuery support available in SQL Server. The example code of the DDL trigger is as follows:

CREATE TRIGGER [DDL_TRIG_OBJ_VERSIONS]
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
SET NOCOUNT ON
DECLARE @data XML;
SET @data = EVENTDATA();
INSERT INTO [DBO].[DDL_OBJECT_VERSION](
EVENT_TYPE,
POST_TIME,
SERVER_NAME,
LOGIN_NAME,
USERNAME,
DATABASE_NAME,
SCHEMA_NAME,
OBJECTNAME,
OBJECT_TYPE,
SQL_TEXT
)
VALUES(
@data.value('(/EVENT_INSTANCE/EventType)[1]','sysname'),
GETDATE(),
@data.value('(/EVENT_INSTANCE/ServerName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/LoginName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/UserName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/DatabaseName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/SchemaName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/ObjectName)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/ObjectType)[1]','sysname'),
@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]','VARCHAR(max)')
);

Wednesday, April 30, 2008

Operation causing trigger to fire

Oracle has two options for DML triggers on a table: 1- they can be set up to fire for each row modified, or 2- they can be set to fire for each statement executed, e.g. INSERT, UPDATE, or DELETE. I learnt an awkward thing about SQL Server 2005 DML triggers today: they cannot be set to execute for each row being inserted, deleted or modified.

But there is a workaround through which you can achieve a similar result. For the workaround, you must remember the INSERTED and DELETED tables available in a DML trigger.

What you can do is find out the count of rows in these two tables:

Select @InsertedCount = count(*) from INSERTED;
Select @DeletedCount = count(*) from DELETED;

where the variables are declared as:

DECLARE @InsertedCount INT, @DeletedCount INT;

Now if @InsertedCount > 0 and @DeletedCount = 0, it means that it is an insert operation.

if @InsertedCount = 0 and @DeletedCount > 0, it means that it is a delete operation.

If @InsertedCount > 0 and @DeletedCount > 0, it means that it is an update operation.

The last condition is @InsertedCount = 0 and @DeletedCount = 0. This is not a hypothetical situation but a real one: it can occur as a result of an UPDATE or DELETE query which selected no records to modify / delete because of the selection criteria in its WHERE clause.
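
Putting it together, a minimal sketch of such a trigger on a hypothetical table dbo.MyTable (the PRINT statements stand in for whatever logging you need):

CREATE TRIGGER trg_MyTable_Audit
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @InsertedCount INT, @DeletedCount INT;

    SELECT @InsertedCount = COUNT(*) FROM INSERTED;
    SELECT @DeletedCount = COUNT(*) FROM DELETED;

    IF @InsertedCount > 0 AND @DeletedCount = 0
        PRINT 'INSERT';
    ELSE IF @InsertedCount = 0 AND @DeletedCount > 0
        PRINT 'DELETE';
    ELSE IF @InsertedCount > 0 AND @DeletedCount > 0
        PRINT 'UPDATE';
    -- Both counts zero: the statement affected no rows; nothing to log.
END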

INSERTED / DELETED tables in a SQL Server trigger

SQL Server 2005 provides the records being inserted, modified, or deleted in two temporary tables available in a DML trigger. These tables are INSERTED and DELETED. The INSERTED table has the new state of the records being inserted or updated. The DELETED table has the old state of the records being updated or deleted. Obviously, for an insert operation the DELETED table will have zero records. Similarly, for a delete operation the INSERTED table will be empty. And for an update operation, both tables should have some records.

The above point about the number of records holds when the insert / update / delete operation actually results in some rows being inserted / updated / deleted. If no record is selected by the criteria of the DELETE or UPDATE statement, then even in the case of a delete or update, both tables, i.e. INSERTED and DELETED, will be empty. And if you have to log the activity, you would not want to log an attempt that modified no records.

I used this approach to find out which operation caused the trigger to fire. The trigger was set to fire for insert, update, and delete operations, and I had to log each modification of data. The above approach of counting records in the INSERTED and DELETED tables worked perfectly fine for me.

I will be coming up with more blogs about SQL Server triggers in the coming days.