Every day brings something new. This is an attempt to blog, on a daily basis, things learnt while solving issues in our environment: from DBAs, from colleagues, and from the great collection of articles in the field. Please share your comments, alternative solutions and suggestions.
Monday, November 16, 2009
Updating counts in a table
update t
set [count] = x.cnt
from [servername].[dbname].[dbo].[table1] t
inner join (select tk, count(ID) as cnt
            from [servername].[dbname].dbo.table2 with (nolock)
            group by tk) x
    on x.tk = t.tk
Friday, November 13, 2009
To find active directory users
To see what users exist in the user groups:
Open a command prompt
Run: dsa.msc (this opens the Active Directory Users and Computers console)
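You can also check Windows group membership from inside SQL Server with the built-in xp_logininfo extended procedure. A minimal sketch ('DOMAIN\SEGroup' is a placeholder for your own group):
EXEC xp_logininfo @acctname = 'DOMAIN\SEGroup', @option = 'members'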
Wednesday, November 11, 2009
Varchar vs Nvarchar
VARCHAR is stored as regular 8-bit data, while NVARCHAR strings are stored as UTF-16: 16 bits, or two bytes, per character. So NVARCHAR takes up twice the space, and its maximum length is 4,000 characters, versus 8,000 for VARCHAR.
NVARCHAR is for Unicode strings and VARCHAR for non-Unicode strings. Unicode (NVARCHAR) can store characters from all languages, while VARCHAR typically uses a local code page and can only store local characters. So if you have customers from around the world, NVARCHAR is the suggested choice.
One of the biggest uses for NVARCHAR is in importing data. Loading text fields is a problem because people put strange things in them. Importing into an NVARCHAR column almost always gets the data in, while VARCHAR often has issues with things like non-printable characters. Once the data is in, you can change the data type to VARCHAR if you want and everything is fine.
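A quick way to see the storage difference; DATALENGTH returns the number of bytes a value occupies:
DECLARE @v VARCHAR(10), @n NVARCHAR(10)
SET @v = 'hello'   --non-Unicode literal
SET @n = N'hello'  --the N prefix marks a Unicode literal
SELECT DATALENGTH(@v) AS varchar_bytes,   --5: one byte per character
       DATALENGTH(@n) AS nvarchar_bytes   --10: two bytes per character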
Tuesday, November 10, 2009
Gather DB and Server Stats
To find how well a database is performing
Database: Buffer Cache Hit Ratio:
The Buffer Cache Hit Ratio counter indicates how often SQL Server goes to the buffer cache, not the hard disk, to get data. This ratio should exceed 90%, and ideally be over 99%. If your buffer cache hit ratio is lower than 90%, you need to go out and buy more RAM today. If the ratio is between 90% and 99%, you should seriously consider purchasing more RAM, as the closer you get to 99%, the faster your SQL Server will perform. In some cases, if your database is very large, you may not be able to get close to 99%, even if you put the maximum amount of RAM in your server.
select DB_name(), GetDate() as EventTime ,
((CONVERT(decimal(6,2),t1.cntr_value)/CONVERT(decimal(6,2),t2.cntr_value))*100) as HitRatio
from sys.dm_os_performance_counters t1,
sys.dm_os_performance_counters t2
where
t1.object_name LIKE '%Buffer Manager%'
and t1.object_name = t2.object_name
and t1.counter_name='Buffer cache hit ratio'
and t2.counter_name='Buffer cache hit ratio base'
--NB: Sometimes this percentage will show greater than 100% (The key is to look for dips in Hit Ratio starting below 99-98%)
Get Server Stats: CPU Utilization
declare @ts_now bigint
select @ts_now = cpu_ticks / convert(float, cpu_ticks_in_ms) from sys.dm_os_sys_info
select top 1 @@SERVERNAME,record_id,
dateadd(ms, -1 * (@ts_now - [timestamp]), GetDate()) as EventTime,
SQLProcessUtilization,
SystemIdle,
100 - SystemIdle - SQLProcessUtilization as OtherProcessUtilization
from (
select
record.value('(./Record/@id)[1]', 'int') as record_id,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') as SystemIdle,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') as SQLProcessUtilization,
timestamp
from (
select timestamp, convert(xml, record) as record
from sys.dm_os_ring_buffers
where ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
and record like '%<SystemHealth>%') as x
) as y
order by record_id desc
(NB: You can either set up a job or use perfmon to gather these)
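If you go the job route, here is a minimal sketch of persisting the hit ratio so you can trend it over time (dbo.PerfStatsLog is a hypothetical table name; schedule the INSERT as a SQL Agent job step):
CREATE TABLE dbo.PerfStatsLog
(EventTime datetime, CounterName varchar(100), Value decimal(9,2))
GO
INSERT dbo.PerfStatsLog (EventTime, CounterName, Value)
SELECT GETDATE(), 'Buffer cache hit ratio',
(CONVERT(decimal(9,2), t1.cntr_value) / CONVERT(decimal(9,2), t2.cntr_value)) * 100
FROM sys.dm_os_performance_counters t1
INNER JOIN sys.dm_os_performance_counters t2
ON t1.object_name = t2.object_name
WHERE t1.object_name LIKE '%Buffer Manager%'
AND t1.counter_name = 'Buffer cache hit ratio'
AND t2.counter_name = 'Buffer cache hit ratio base'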
Performance Counters for Server Audit
Performance counter: Processor(_Total)\%Processor Time gives CPU Utilization.
Threshold: Should always be below 100%; occasional peaks are fine.
If it shows a constant 100%, there will be a bottleneck: bad response times, response failures, etc.
Performance counter: Memory\Pages/Sec
Threshold: Sustained values higher than five indicate a bottleneck.
Significance: This counter indicates the rate at which pages are read from or written to disk to resolve hard page faults.
Performance counter: Memory\Available Mbytes
Threshold: A consistent value of less than 20 to 25 percent of installed RAM is an indication of insufficient memory.
Significance: This indicates the amount of physical memory available to processes running on the computer. Note that this counter displays the last observed value only. It is not an average.
Performance counter: Memory\Page Faults/sec
The Page Faults/sec value is made up of hard and soft page faults (if a page is loaded in memory at the time the fault is generated, but its status is not yet updated as 'present' in hardware, it is called a minor or soft page fault), so the value on its own is a little misleading.
If Page Faults/sec is too high, check the Pages/sec value. If that is also high, you have hard page faults and immediate action is required. If it is less than 5, then the number probably reflects only soft page faults and is nothing to worry about, though most admins will want it below 15-20.
Tuesday, November 3, 2009
Space
----Log File(s) Used Size (KB)---
select instance_name
,cntr_value 'Log File(s) Used Size (KB)'
from sys.dm_os_performance_counters
where counter_name = 'Log File(s) Used Size (KB)'
and instance_name like '%%'
---Check total log space taken up by all DBs---
select sum(cntr_value)
from sys.dm_os_performance_counters
where counter_name = 'Log File(s) Used Size (KB)'
and instance_name like '%%'
----Data File(s) Size (KB)---
select instance_name
,cntr_value as [Data File(s) Size (KB)]
from sys.dm_os_performance_counters
where counter_name = 'Data File(s) Size (KB)'
and instance_name like '%%'
order by [Data File(s) Size (KB)]
---Check total data space taken up by all DBs---
select sum(cntr_value)
from sys.dm_os_performance_counters
where counter_name = 'Data File(s) Size (KB) '
and instance_name like '%%'
--Physical memory on the server--
SELECT
(Physical_memory_in_bytes/1024.0)/1024.0 AS Physical_memory_in_Mb
FROM
sys.dm_os_sys_info
--TempDB analysis for space--
USE tempdb
GO
EXEC sp_spaceused
--The following should give you some clues as to which table(s) consume most of the space in the data file(s) -- this will help you narrow down any transactions that are either taking a long time or repeatedly being left in limbo:
USE tempdb
GO
SELECT name
FROM tempdb..sysobjects
SELECT OBJECT_NAME(id), rowcnt
FROM tempdb..sysindexes
WHERE OBJECT_NAME(id) LIKE '#%'
ORDER BY rowcnt DESC
--If you can't shrink the log, it might be due to an uncommitted transaction. See if you have any long-running transactions with the following command:
DBCC OPENTRAN -- or DBCC OPENTRAN('tempdb')
--Check the oldest transaction (if it returns any), and see who the SPID is (there will be a line starting with 'SPID (Server Process ID) :'). Use that in the following:
DBCC INPUTBUFFER(<spid>) --substitute the SPID reported by DBCC OPENTRAN
--This will tell you at least a portion of the last SQL command executed by this SPID, and will help you determine if you want to end this process with:
KILL <spid> --substitute the SPID, e.g. KILL 53
-------DATABASE : Tablename ROWCOUNTS----------------------------
SELECT
[TableName] = so.name,
[RowCount] = MAX(si.rows)
FROM
sysobjects so,
sysindexes si
WHERE
so.xtype = 'U'
AND
si.id = OBJECT_ID(so.name)
GROUP BY
so.name
ORDER BY
2 DESC
--The first thing you can do is simply compare the difference between the timestamp BEFORE your query, and the timestamp AFTER. For example:
DECLARE @a DATETIME, @b DATETIME
SET @a = CURRENT_TIMESTAMP
DECLARE @i INT
SET @i = 0
WHILE @i < 10000
BEGIN
SET @i = @i + 1
END
SET @b = CURRENT_TIMESTAMP
SELECT DATEDIFF(MS, @a, @b)
--You can achieve similar results by running SQL Profiler, setting appropriate filters, and watching the Duration column as your query runs.
--Finally, you can alter the above code slightly so that you see all of the durations on the messages tab of Query Analyzer:
SET STATISTICS TIME ON
-- query here
SET STATISTICS TIME OFF
/* To find the number of unallocated pages and the free space in KB, we can use the sys.dm_db_file_space_usage DMV as follows: */
SELECT SUM(unallocated_extent_page_count) AS [free pages],
(SUM(unallocated_extent_page_count)*8) AS [free space in KB]
FROM sys.dm_db_file_space_usage
---Map each session_id to its login name---
select a.session_id
, b.login_name
FROM sys.dm_exec_connections a ,sys.dm_exec_sessions b
WHERE a.session_id=b.session_id
Grant Permissions
To give SEs permission to execute, but not modify, anything in the dbo schema
GRANT execute on SCHEMA::dbo to [SE]
To give SEs permission to run traces
grant alter trace to [SE]
To give SEs permission to alter a particular object
GRANT alter on object::[tablename] to [SE]
Get/View Permissions for users in DB
select loginname,* from master.dbo.syslogins
where name in ('user1','user2')
select dp.NAME AS principal_name,
dp.type_desc AS principal_type_desc,
o.NAME AS object_name,
p.permission_name,
p.state_desc AS permission_state_desc
from sys.database_permissions p
left OUTER JOIN sys.all_objects o
on p.major_id = o.OBJECT_ID
inner JOIN sys.database_principals dp
on p.grantee_principal_id = dp.principal_id
where dp.NAME in ('user1','user2')
order by dp.NAME
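To verify what a grant actually produced, you can impersonate the user and list effective permissions with the built-in fn_my_permissions function. A minimal sketch ('user1' and dbo.tablename are placeholders):
EXECUTE AS USER = 'user1'
SELECT * FROM fn_my_permissions('dbo.tablename', 'OBJECT')  --object-level permissions
SELECT * FROM fn_my_permissions(NULL, 'DATABASE')           --database-level permissions
REVERT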
About ErrorLogs
To recycle errorlogs:
EXEC sp_cycle_errorlog ;
/*
SQL Server 2005 offers an undocumented system stored procedure sp_readerrorlog.
This SP allows you to read the contents of the SQL Server error log files directly
from a query window and also allows you to search for certain keywords when reading the error file.
This is not new to SQL Server 2005, but this tip discusses how this works for SQL Server 2005.
This procedure takes four parameters:
Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
Log file type: 1 or NULL = error log, 2 = SQL Agent log
Search string 1: String one you want to search for
Search string 2: String two you want to search for to further refine the results
If you do not pass any parameters this will return the contents of the current error log.
*/
DROP PROC [dbo].[sp_readerrorlog]
GO
CREATE PROC [dbo].[sp_readerrorlog](
@p1 INT = 0,
@p2 INT = NULL,
@p3 VARCHAR(255) = NULL,
@p4 VARCHAR(255) = NULL)
AS
BEGIN
IF (NOT IS_SRVROLEMEMBER(N'securityadmin') = 1)
BEGIN
RAISERROR(15003,-1,-1, N'securityadmin')
RETURN (1)
END
IF (@p2 IS NULL)
EXEC sys.xp_readerrorlog @p1
ELSE
EXEC sys.xp_readerrorlog @p1,@p2,@p3,@p4
END
--"Error", "Stack Dump", "Dump Thread", "Fatal Exception"
EXEC sp_readerrorlog 0,1,'ERROR','0 to 1'
EXEC sp_readerrorlog 0,1,'Stack Dump'
EXEC sp_readerrorlog 0,1,'Dump Thread'
EXEC sp_readerrorlog 0,1,'Fatal Exception'
Box shutdown from console
-----------------------------
shutdown -r -f -m \\DBA1 -t 1
-----------------------------
(-r restarts the box, -f forces running applications to close, -m \\computername targets a remote machine, -t sets the delay in seconds)
Currently running query and kill query session_id
Query to see which query is running
-------------------------------------
select sqltext.text,req.session_id,req.blocking_session_id,req.status,req.command,req.cpu_time,req.total_elapsed_time
from master.sys.dm_exec_requests req cross apply master.sys.dm_exec_sql_text(sql_handle) as sqltext
--Then, to end an offending session: KILL <session_id>, e.g. KILL 53
-------------------------------------
Monday, November 2, 2009
Find Dependencies
Find all procs/functions/views dependent on a particular table
SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id=o.id
WHERE c.TEXT LIKE '%tablename%'
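As a cross-check, the built-in system procedure sp_depends reports the dependencies SQL Server itself tracks, though it can miss dependencies when the referencing object was created before the table (deferred name resolution); 'dbo.tablename' below is a placeholder:
EXEC sp_depends 'dbo.tablename'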
Find all foreign keys on a table
SELECT f.name AS ForeignKey,
OBJECT_NAME(f.parent_object_id) AS TableName,
COL_NAME(fc.parent_object_id, fc.parent_column_id) AS ColumnName,
OBJECT_NAME (f.referenced_object_id) AS ReferenceTableName,
COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS ReferenceColumnName
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc
ON f.object_id = fc.constraint_object_id
where OBJECT_NAME(f.parent_object_id) in ('tablename')
Find all tables with a particular column
SELECT name FROM sysobjects WHERE id IN ( SELECT id FROM syscolumns WHERE name = 'columnname' )
Find indexes on a table
SELECT object_name(i.object_id) as objectName,i.[name] as indexName,
i.index_id as IndexId
FROM sys.indexes i
where i.[name] like 'IX%'
and object_name(i.object_id) in ('tablename')
GROUP BY i.object_id, i.index_id, i.[name]
Will give you a list of all the child tables that are using a column from this table
SELECT f.name AS ForeignKey,
OBJECT_NAME(f.parent_object_id) AS TableName,
COL_NAME(fc.parent_object_id, fc.parent_column_id) AS ColumnName,
OBJECT_NAME (f.referenced_object_id) AS ReferenceTableName,
COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS ReferenceColumnName
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc
ON f.object_id = fc.constraint_object_id
where OBJECT_NAME (f.referenced_object_id) = 'tablename'
Will give you a list of all foreign keys are that are present on this table
SELECT f.name AS ForeignKey,
OBJECT_NAME(f.parent_object_id) AS TableName,
COL_NAME(fc.parent_object_id, fc.parent_column_id) AS ColumnName,
OBJECT_NAME (f.referenced_object_id) AS ReferenceTableName,
COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS ReferenceColumnName
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc
ON f.object_id = fc.constraint_object_id
where OBJECT_NAME(f.parent_object_id) = 'tablename'
SQL 2005 CROSS APPLY
Firstly, to use CROSS APPLY you need to check the compatibility level of the database, simply by running the SQL command below.
sp_dbcmptlevel 'DatabaseName'
If the compatibility level is less than 90, the CROSS APPLY operator will not work. Change it to 90 using:
EXEC sp_dbcmptlevel DatabaseName, 90;
GO
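To get a feel for the operator itself: CROSS APPLY runs the right-hand query once per row of the left-hand table, which allows things a plain join cannot do, like "top 1 per group". A minimal sketch, assuming hypothetical dbo.Customers and dbo.Orders tables:
SELECT c.CustomerID, o.OrderID, o.OrderDate
FROM dbo.Customers c
CROSS APPLY ( SELECT TOP 1 OrderID, OrderDate
              FROM dbo.Orders
              WHERE CustomerID = c.CustomerID
              ORDER BY OrderDate DESC ) o  --latest order per customer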
To find the most recently executed procs using CROSS APPLY, use the SQL below.
This tells you the last time procs were executed within roughly the last month, depending on the capacity of the cache. It does not give last-execution information for all procedures in the database: if a proc runs only once a month or less frequently, it may have been pushed out of the cache by the time we check the stats. If we want something better, we have to log information to a table every time the proc fires (a minimal sketch of that approach follows the query below). I am just an amateur on this, so in case there is a better solution, please feel free to post it here and I'd be glad to learn more.
SELECT a.execution_count ,
OBJECT_NAME(objectid) Name,
query_text = SUBSTRING(
b.text,
a.statement_start_offset/2 + 1, --offsets are in bytes; +1 gives SUBSTRING's one-based start
(( CASE WHEN a.statement_end_offset = -1
THEN len(convert(nvarchar(max), b.text)) * 2
ELSE a.statement_end_offset
END - a.statement_start_offset)/2) + 1
) ,
b.dbid ,
dbname = db_name(b.dbid) ,
b.objectid ,
a.creation_time,
a.last_execution_time,
a.*
FROM sys.dm_exec_query_stats a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) as b
WHERE OBJECT_NAME(objectid) = 'procname' --substitute the proc name, or remove this filter to see everything
ORDER BY a.last_execution_time DESC
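And the logging approach mentioned above, as a minimal sketch (dbo.ProcExecutionLog is a hypothetical table; @@PROCID returns the object ID of the currently executing procedure):
CREATE TABLE dbo.ProcExecutionLog
( ProcName sysname,
  ExecutedAt datetime DEFAULT GETDATE() )
GO
--Then add one line at the top of each procedure you want to track:
--INSERT dbo.ProcExecutionLog (ProcName) VALUES (OBJECT_NAME(@@PROCID))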