Channel: SQL Server Database Engine forum

virtual_memory_committed_kb from sys.dm_os_memory_clerks


Can anyone tell me what virtual_memory_committed_kb from sys.dm_os_memory_clerks is?

virtual_memory_committed_kb (bigint)

Specifies the amount of virtual memory that is committed by a memory clerk. The amount of committed memory should always be less than the amount of reserved memory. Is not nullable.

What's the difference between virtual_memory_committed_kb and pages_kb in the same DMV?
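
A minimal query to see the two columns side by side; broadly, pages_kb is memory the clerk obtained through the memory manager's page allocator, while virtual_memory_committed_kb is virtual memory the clerk reserved and committed directly (large allocations that bypass the page allocator):

SELECT type, name,
       pages_kb,                     -- allocated through the page allocator
       virtual_memory_reserved_kb,   -- virtual address space reserved by the clerk
       virtual_memory_committed_kb   -- virtual memory committed directly by the clerk
FROM sys.dm_os_memory_clerks
ORDER BY virtual_memory_committed_kb DESC;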


Lock Timeout Rate.


Good Morning!

  How can I audit which query / login is causing the alarm below? Also, is there any material that can help in understanding this alarm?
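
A minimal Extended Events sketch that captures the statement, login, and database whenever a lock timeout fires (the session name and file target below are hypothetical):

CREATE EVENT SESSION AuditLockTimeouts ON SERVER   -- hypothetical session name
ADD EVENT sqlserver.lock_timeout
(
    ACTION (sqlserver.sql_text, sqlserver.session_id,
            sqlserver.username, sqlserver.database_name)
)
ADD TARGET package0.event_file (SET filename = N'AuditLockTimeouts');
GO
ALTER EVENT SESSION AuditLockTimeouts ON SERVER STATE = START;

Correlating the captured events with the times the alarm fires should show which query and login are responsible.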

Thanks.


Doria

SQL Server resource pool MAX_MEMORY_PERCENT

  • MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT

    These settings are the minimum and maximum amount of memory reserved for the resource pool that can not be shared with other resource pools. The memory referenced here is query execution grant memory, not buffer pool memory (for example, data and index pages). Setting a minimum memory value for a pool means that you are ensuring that the percentage of memory specified will be available for any requests that might run in this resource pool. This is an important differentiator compared to MIN_CPU_PERCENT, because in this case memory may remain in the given resource pool even when the pool does not have any requests in the workload groups belonging to this pool. Therefore it is crucial that you be very careful when using this setting, because this memory will be unavailable for use by any other pool, even when there are no active requests. Setting a maximum memory value for a pool means that when requests are running in this pool, they will never get more than this percentage of overall memory.

The above is the definition from the web:

https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor-resource-pool?view=sql-server-ver15

But it doesn't say what kind of memory MAX_MEMORY_PERCENT controls. Buffer pool? Memory grants for query execution?
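
For context, a minimal sketch of how the setting is applied (the pool and group names are hypothetical); per the quoted text, the percentage caps query-execution grant memory for requests in the pool, not buffer pool pages:

CREATE RESOURCE POOL ReportPool
WITH (MAX_MEMORY_PERCENT = 30);        -- caps query-execution grant memory
GO
CREATE WORKLOAD GROUP ReportGroup
USING ReportPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;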

SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )


So for AUTO, locks escalate up to the partition level (if the table is partitioned), while TABLE always escalates to the table level, whether the table is partitioned or not?


AUTO
This option allows SQL Server Database Engine to select the lock escalation granularity that's appropriate for the table schema.

  • If the table is partitioned, lock escalation will be allowed to the heap or B-tree (HoBT) granularity. In other words, escalation will be allowed to the partition level. After the lock is escalated to the HoBT level, the lock will not be escalated later to TABLE granularity.
  • If the table isn't partitioned, the lock escalation is done to the TABLE granularity.

TABLE
Lock escalation is done at table-level granularity whether the table is partitioned or not partitioned. TABLE is the default value.
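
A minimal sketch of setting each option (the table name is hypothetical):

ALTER TABLE dbo.MyPartitionedTable SET (LOCK_ESCALATION = AUTO);   -- partitioned: escalates to HoBT/partition level
ALTER TABLE dbo.MyPartitionedTable SET (LOCK_ESCALATION = TABLE);  -- always escalates to table level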

sys.dm_tran_version_store : version_sequence_num


Under what circumstances will version_sequence_num increase?

When two transactions update two separate rows of the table, should that generate only two records in sys.dm_tran_version_store, each with version_sequence_num 1?

And after the transactions are committed, will the records in sys.dm_tran_version_store be deleted?
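
A minimal query for watching this while the transactions are still open; version_sequence_num is documented as unique within the version-generating transaction, so a single transaction that creates several row versions numbers them sequentially:

SELECT transaction_sequence_num,   -- XSN of the transaction that generated the version
       version_sequence_num,       -- unique within that transaction
       database_id,
       status
FROM sys.dm_tran_version_store;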

dm_os_ring_buffers: RING_BUFFER_OOM


;WITH rb AS (
    SELECT CAST(record AS xml) AS record_xml
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_OOM'
)
SELECT
    rx.value('(@id)[1]', 'bigint') AS RecordID
    -- subtract the record's age (current tick count minus the record's tick count) from now
    ,DATEADD(ms, -1 * (osi.ms_ticks - rx.value('(@time)[1]', 'bigint')), GETDATE()) AS DateOccurred
    ,rx.value('(OOM/Action)[1]', 'varchar(30)') AS MemoryAction
    ,rx.value('(OOM/Pool)[1]', 'int') AS MemoryPool
    ,rx.value('(MemoryRecord/MemoryUtilization)[1]', 'bigint') AS MemoryUtilization
FROM rb
CROSS APPLY rb.record_xml.nodes('Record') record(rx)
CROSS JOIN sys.dm_os_sys_info osi
ORDER BY rx.value('(@id)[1]', 'bigint');

What is the definition of MemoryPool (the OOM/Pool value) and MemoryUtilization (the MemoryRecord/MemoryUtilization value)?

Index Rebuild with is_ms_shipped = 0


Fellow DBAs,

I have a custom fragmentation script that has been running great for years - it still is.

But I have seen other fragmentation scripts that include a filter on is_ms_shipped = 0, and some that do not.

Is there a recommended approach for filtering objects on this flag when rebuilding indexes or updating stats?
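
A minimal sketch of the filter in question: it restricts maintenance to user objects by excluding anything created by internal SQL Server components or replication (is_ms_shipped = 1):

SELECT o.name AS table_name,
       i.name AS index_name
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0   -- user objects only
  AND i.type > 0;           -- skip heaps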

Thx

MG.

What are the steps needed to turn off NTFS file compression on Read/Write Databases?


Hi

I have a site where someone turned on NTFS file compression after the databases were created. I need to turn off the compression, but I want to make sure I don't miss any steps.

I am concerned about possible corruption in the mdf files, and the backups are failing, I assume due to the compression.

Should I stop the SQL Server service before removing the compression on SQL Server's folder, or could that cause more issues by bringing the databases up suspect?

Any other gotchas I should be aware of? 
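
As a starting point, a minimal query to enumerate the physical file paths you would need to uncompress (the paths are instance-specific):

SELECT DB_NAME(database_id) AS database_name,
       type_desc,            -- ROWS or LOG
       physical_name
FROM sys.master_files
ORDER BY database_name;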

Thank you for your time!

Sue




Shrinking the log file of a database


The log file was about 80 GB, and I reduced the size by shrinking it. It appears shrinking the log file did not release all the space. After backing up the database, can I change the recovery model from full to simple, and will that reduce the size of the log?
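
A minimal sketch of that sequence (the database and logical log file names are hypothetical); note that switching to simple breaks the log backup chain, so take a fresh full backup if you return to full:

USE MyDb;
ALTER DATABASE MyDb SET RECOVERY SIMPLE;     -- log can now truncate at checkpoint
DBCC SHRINKFILE (MyDb_log, 1024);            -- target size in MB
ALTER DATABASE MyDb SET RECOVERY FULL;       -- optional: return to full recovery
BACKUP DATABASE MyDb TO DISK = N'MyDb.bak';  -- restart the backup chain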

Filtered index question


I came across this example:

https://www.mssqltips.com/sqlservertip/2353/performance-advantages-of-sql-server-filtered-statistics/

CREATE TABLE MyRegionTable(id INT, Location NVARCHAR(100), USState CHAR(2))
GO
CREATE TABLE MySalesTable(id INT, detail INT, quantity INT)
GO
CREATE CLUSTERED INDEX IDX_d1 ON MyRegionTable(id)
GO
CREATE INDEX IDX_MyRegionTable_name ON MyRegionTable(Location)
GO
CREATE STATISTICS IDX_MyRegionTable_id_name ON MyRegionTable(id, Location)
GO
CREATE CLUSTERED INDEX IDX_MySalesTable_id_detail ON MySalesTable(id, detail)
GO
INSERT MyRegionTable VALUES(0, 'Atlanta', 'GA')
INSERT MyRegionTable VALUES(1, 'San Francisco', 'CA')
GO
SET NOCOUNT ON
-- MySalesTable will contain 1 row for Atlanta and 1000 rows for San Francisco
INSERT MySalesTable VALUES(0, 0, 50)
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000
BEGIN
    INSERT MySalesTable VALUES (1, @i, @i * 3)
    SET @i = @i + 1
END
GO
UPDATE STATISTICS MyRegionTable WITH fullscan
UPDATE STATISTICS MySalesTable WITH fullscan
GO

So when it comes to the following statement:

SELECT detail
FROM MyRegionTable
JOIN MySalesTable ON MyRegionTable.id = MySalesTable.id
WHERE Location = 'Atlanta'
OPTION (RECOMPILE)

we know the estimate is wrong. But my question is: why doesn't the join query consider the stats of IDX_MySalesTable_id_detail?
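
For reference, a minimal sketch of the filtered-statistics fix the linked tip builds toward (the statistic name is hypothetical): it gives the optimizer a histogram over just the id = 0 rows, making the Atlanta skew visible:

CREATE STATISTICS FS_MySalesTable_id_0
ON MySalesTable (id)
WHERE id = 0;   -- histogram restricted to the Atlanta rows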

Incremental Stats: partition-level statistics are not used by SQL Server CE


We can run

UPDATE STATISTICS [WideWorldImporters].[Sales].[CustomerTransactions] (CX_Sales_CustomerTransactions)
WITH RESAMPLE ON PARTITIONS(3)

on several partitions, and the results should be reflected in the main (merged) statistics, as seen with DBCC SHOW_STATISTICS.

I see some online pages saying that partition-level statistics are not used by the SQL Server cardinality estimator (CE).

So why do we still need to take care of them?

https://www.sqlshack.com/introducing-sql-server-incremental-statistics-for-partitioned-tables/
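
A minimal sketch for checking that the partition-level resample is reflected in the merged, table-level statistics object (the index name comes from the statement above):

DBCC SHOW_STATISTICS (
    '[WideWorldImporters].[Sales].[CustomerTransactions]',
    CX_Sales_CustomerTransactions);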

SQL Server CE


The CE is based on 4 assumptions:

  • Independence: Data distributions on different columns are assumed to be independent of each other, unless correlation information is available and usable.
  • Uniformity: Distinct values are evenly spaced and they all have the same frequency. More precisely, within each histogram step, distinct values are evenly spread and each value has the same frequency.
  • Containment (Simple): Users query for data that exists.
  • Inclusion: For filter predicates where Column = Constant, the constant is assumed to actually exist for the associated column. If a corresponding histogram step is non-empty, one of the step's distinct values is assumed to match the value from the predicate.

Are there any examples for each of these?
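
As one illustration, a minimal sketch of the independence assumption (the table and columns are hypothetical):

SELECT *
FROM dbo.Cars
WHERE Make = 'Toyota'      -- selectivity taken from statistics on Make
  AND Model = 'Corolla';   -- selectivity taken from statistics on Model
-- Independence: with no multi-column statistics, the combined selectivity is
-- derived from the two individual selectivities (the legacy CE multiplies them;
-- the new CE uses exponential backoff), ignoring the real Make/Model correlation.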

SQL Server soft-NUMA

If the server has 2 sockets with 8 physical cores each, and hardware NUMA is enabled, will SQL Server 2016 still do the automatic soft-NUMA configuration during startup?
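
A minimal check of what the instance actually decided at startup (requires SQL Server 2016 or later); note that automatic soft-NUMA only partitions when it detects more than eight physical cores per NUMA node, so with 8 cores per socket it would likely report OFF:

SELECT softnuma_configuration,       -- 0 = OFF, 1 = automatic, 2 = manual
       softnuma_configuration_desc,
       cpu_count,                    -- logical CPUs visible to SQL Server
       hyperthread_ratio
FROM sys.dm_os_sys_info;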

Transaction Log recovery phases


https://docs.microsoft.com/en-us/azure/azure-sql/accelerated-database-recovery

I am not quite sure about the design of the Phase 2 redo: why doesn't it start scanning from the last commit LSN?
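
For anyone reproducing this on SQL Server 2019 or Azure SQL, a minimal sketch for enabling accelerated database recovery (the database name is hypothetical); the redo behavior described on that page then applies:

ALTER DATABASE MyDb
SET ACCELERATED_DATABASE_RECOVERY = ON;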

auto update stats: sample


Is there a threshold that determines whether auto update stats uses a FULL scan or the default sample size?

I see that some small tables (around 10k rows) trigger auto update stats with a full scan.
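
A minimal query to check, per statistics object, whether the last update effectively scanned everything; when rows_sampled equals rows, it was a full scan (the table name is hypothetical):

SELECT s.name AS stats_name,
       p.last_updated,
       p.rows,
       p.rows_sampled   -- equals p.rows when the last update was a full scan
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS p
WHERE s.object_id = OBJECT_ID('dbo.MySmallTable');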


Row estimation for a query using variables


I have a table 

CREATE TABLE BillingInfo(
ID INT IDENTITY,
BillingDate DATETIME,
BillingAmt MONEY,
BillingDesc varchar(500));

 

ALTER TABLE BillingInfo 
  ADD  CONSTRAINT [PK_BillingInfo_ID] 
  PRIMARY KEY CLUSTERED (ID);

CREATE NONCLUSTERED INDEX IX_BillingDate
  ON dbo.BillingInfo(BillingDate);

And I have the stats as shown in the following (these are the most up-to-date ones; I didn't touch or insert any rows).

And in the following query 

declare @BeginDate date
declare @EndDate date

set @BeginDate = '2005-01-01'
set @EndDate   = '2005-01-03';

SELECT BillingDate, BillingAmt
  FROM BillingInfo
  WHERE BillingDate BETWEEN @BeginDate AND @EndDate

The results have 865 rows.

There is an execution plan like the following.

I just wonder how SQL Server estimates the rows as 164,317.

If I don't use variables and instead use the following query, the estimate is much closer, but I still don't know how the estimated number comes out.
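
A hedged sketch of the likely explanation: with variables, the values are unknown at compile time, so the optimizer cannot seek into the histogram and falls back on a fixed guess for BETWEEN (roughly 9% of the table under the legacy CE, which would account for a number like 164,317). Adding OPTION (RECOMPILE) embeds the runtime values as literals so the histogram is used:

declare @BeginDate date = '2005-01-01';
declare @EndDate date = '2005-01-03';

SELECT BillingDate, BillingAmt
  FROM BillingInfo
  WHERE BillingDate BETWEEN @BeginDate AND @EndDate
  OPTION (RECOMPILE);   -- values visible to the optimizer; histogram-based estimate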

SQM files generated by SQL Server

There are SQM files auto-generated by SQL Server. How can we stop these files from being generated?

PSSDIAG Output


Hi Team,

Just curious to know: generally, what tools does Microsoft use to analyze PSSDIAG output?


FileSystem


Hi,

We are migrating from SQL Server 2016, which is on ReFS with a 64K block size. We have OLTP and OLAP systems ranging from 10 to 15 TB in size.

What is the best practice? Shall I continue with ReFS and a 64K block size, or is there another recommendation?

Thanks




