Hi All,
What is the DAC and what is it used for?
--
Krish
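For context, the DAC (Dedicated Administrator Connection) is a reserved diagnostic connection that SQL Server keeps available even when the server is otherwise unresponsive. A minimal sketch of how it is typically used (the server name is a placeholder):

```sql
-- Connect with the admin: prefix, e.g. from a command prompt:
--   sqlcmd -S admin:MYSERVER -E
-- Once connected, inspect what is blocking the server:
SELECT session_id, status, blocking_session_id, wait_type
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```

Only one DAC session is allowed at a time, so it is reserved for emergency diagnostics rather than routine administration.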
During our weekly database integrity maintenance job (DBCC CHECKDB), I receive the following error when one of my databases is checked. When I ran DBCC CHECKDB against the database manually, it completed without error.
The environment is a 150 GB database; the drive has 1,500 GB available for use, it is a SAN drive, and disk fragmentation is low to nonexistent. So I am guessing that DBCC snapshots the database to run the check and then drops the snapshot, and that this is causing some sort of space issue. Where would this snapshot be: tempdb, or hidden somewhere by the SQL engine?
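As a sketch of one way to test that theory (the database name is a placeholder): DBCC CHECKDB normally runs against an internal database snapshot created as sparse files alongside the data files on the same drive, not in tempdb, so concurrent write activity during the check can grow it. Taking table locks instead skips the snapshot entirely:

```sql
-- Run the consistency check without the internal snapshot, using
-- short-term table locks instead (reduces concurrency during the check):
DBCC CHECKDB ('MyDatabase') WITH TABLOCK, NO_INFOMSGS;
```

If the error disappears with TABLOCK, the snapshot's disk usage is the likely culprit.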
I am exploring columnstore indexes and comparing them with a traditional row store. I wrote two sets of queries that do the same thing: one using a columnstore index and the other using a clustered primary key plus a nonclustered index. The first one ran three times faster but used twice the CPU time of the second one. Why? All the examples I have seen on MSDN show reduced CPU time as well as reduced elapsed time. Any explanation would be appreciated.
1st query statistics (columnstore index)
SQL Server Execution Times:
CPU time = 55108 ms, elapsed time = 9082 ms.
2nd query statistics (row store indexes)
SQL Server Execution Times:
CPU time = 29468 ms, elapsed time = 31902 ms.
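One pattern worth checking, sketched below against a hypothetical fact table dbo.FactSales: columnstore queries typically run as parallel batch-mode plans, so total CPU time (summed across all cores) can rise even while elapsed wall-clock time falls. Forcing a serial plan makes the trade-off visible:

```sql
SET STATISTICS TIME ON;
-- Parallel batch-mode scan: low elapsed time, high summed CPU time
SELECT COUNT_BIG(*) FROM dbo.FactSales;
-- Serial plan for comparison: CPU time and elapsed time converge
SELECT COUNT_BIG(*) FROM dbo.FactSales OPTION (MAXDOP 1);
SET STATISTICS TIME OFF;
```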
Hi,
I created a maintenance plan to take backups of the user databases, but after executing it I found that the task was failing. I went to the error logs and found that there is an extra space after the database name, and that is why it is failing. I don't know how to remove that space, since everything was done through the Maintenance Plan Wizard, or why the extra space is appearing after the database name in the first place.
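A sketch of one way to confirm where the stray space lives: LEN() ignores trailing spaces while DATALENGTH() does not, so comparing the two exposes a database name that carries one.

```sql
-- List database names whose stored character count (DATALENGTH / 2 for
-- nvarchar) differs from LEN, which strips trailing spaces:
SELECT name, LEN(name) AS len_without_trailing, DATALENGTH(name) / 2 AS actual_chars
FROM sys.databases
WHERE DATALENGTH(name) / 2 <> LEN(name);
```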
Hi experts,
I saw a few case studies from Fusion IO. This case (http://www.fusionio.com/case-studies/betonsoft/ ) shows 10X faster CHECKDB run times using Fusion IO. This case (http://www.fusionio.com/case-studies/equinox/ ) shows an 18X query load improvement. So I ran a POC recently.
In the configuration, I compared an EVA 8100 (168 x 15K rpm FC HDDs) with a Fusion IO Duo 2.4 TB. In SQLIO, my test result shows 42K IOPS (100% read), which is almost the same as Microsoft's test. Also, the average latency is 1 ms.
Hi all,
I set up a workload group and an associated resource pool to limit CPU usage for certain users to 30%.
If I now start a workload for a user in the default pool (no limits), and shortly afterwards the same workload for a user in my newly created workload group, SQL Server needs about 2 minutes to level out the CPU usage to 70/30.
Is this behavior normal?
Does this mean that in scenarios where the high CPU usage lasts less than 2 minutes, Resource Governor cannot level out the workload as desired?
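For reference, a minimal sketch of the setup described (pool and group names are assumptions); note that MAX_CPU_PERCENT is an opportunistic limit that is only enforced while there is actual CPU contention, which may account for some of the levelling-out delay.

```sql
-- Pool capped at 30% CPU, plus a workload group that uses it:
CREATE RESOURCE POOL LimitedPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP LimitedGroup USING LimitedPool;
GO
-- A classifier function (not shown) must route the target logins into
-- LimitedGroup before the limit applies.
ALTER RESOURCE GOVERNOR RECONFIGURE;
```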
Thanks for your input
acki4711
I know it's a common recommendation to create as many tempdb data files as you have CPUs. So I have a question:
If I have a 6-core CPU with HT on (12 logical cores inside Windows) and I have 3 SQL instances, how many tempdb data files per instance do I need?
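A quick sketch for checking what each instance currently has (run once per instance):

```sql
-- Count and list tempdb data files (type_desc 'ROWS' = data files, not log);
-- compare the count against the logical cores visible to the instance.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```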
Hi,
I enabled C2 auditing on a SQL Server instance and loaded the audit files into a table for analysing the last month of database activity.
Requirement: I want to list all the users who accessed any database on that server.
Can you please let me know which data I should analyse, and can you also provide me the query for analysing it?
SELECT TOP 1000 [TextData]
,[BinaryData]
,[DatabaseID]
,[TransactionID]
,[LineNumber]
,[NTUserName]
,[NTDomainName]
,[HostName]
,[ClientProcessID]
,[ApplicationName]
,[LoginName]
,[SPID]
,[Duration]
,[StartTime]
,[EndTime]
,[Reads]
,[Writes]
,[CPU]
,[Permissions]
,[Severity]
,[EventSubClass]
,[ObjectID]
,[Success]
,[IndexID]
,[IntegerData]
,[ServerName]
,[EventClass]
,[ObjectType]
,[NestLevel]
,[State]
,[Error]
,[Mode]
,[Handle]
,[ObjectName]
,[DatabaseName]
,[FileName]
,[OwnerName]
,[RoleName]
,[TargetUserName]
,[DBUserName]
,[LoginSid]
,[TargetLoginName]
,[TargetLoginSid]
,[ColumnPermissions]
,[LinkedServerName]
,[ProviderName]
,[MethodName]
,[RowCounts]
,[RequestID]
,[XactSequence]
,[EventSequence]
,[BigintData1]
,[BigintData2]
,[GUID]
,[IntegerData2]
,[ObjectID2]
,[Type]
,[OwnerID]
,[ParentName]
,[IsSystem]
,[Offset]
,[SourceDatabaseID]
,[SqlHandle]
,[SessionLoginName]
,[PlanHandle]
,[GroupID]
FROM [audit].[dbo].[audit]
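Building on the column list above, a sketch of one way to answer the requirement: group the trace rows by database and login, letting SessionLoginName fall back to LoginName where it is NULL.

```sql
-- Distinct logins per database, with first and last activity seen:
SELECT DatabaseName,
       COALESCE(SessionLoginName, LoginName) AS login_name,
       MIN(StartTime) AS first_seen,
       MAX(StartTime) AS last_seen
FROM [audit].[dbo].[audit]
WHERE DatabaseName IS NOT NULL
GROUP BY DatabaseName, COALESCE(SessionLoginName, LoginName)
ORDER BY DatabaseName, login_name;
```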
Hi All,
I have 24 monthly partitions on a table (dbo.EMPLOYEE) in SQL Server 2005, and every day we receive around 3-4 million records. The ETL takes more than 3-4 hours to load the data, and during the load SELECTs are very slow.
I thought of creating a separate table (dbo.EMPLOYEE_TEMP) with one partition on the same filegroup [where we have the partitions of dbo.EMPLOYEE], with a different file, for the daily load. So the ETL destination table will be dbo.EMPLOYEE_TEMP, and once the load is complete I want to merge/attach this file to the file/filegroup of dbo.EMPLOYEE.
As I understand it, attaching/merging is very fast, and we could move one day's data from dbo.EMPLOYEE_TEMP to dbo.EMPLOYEE in a few seconds.
I would like to know whether this is really possible. If yes, could you please provide me a few links or the steps to achieve it?
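For what it's worth, the metadata-only operation usually used for this is ALTER TABLE ... SWITCH. A sketch, assuming the staging table's schema and indexes match the target and a check constraint restricts it to the target partition's boundary (the partition number here is an assumption):

```sql
-- Moves the loaded rows into partition 24 of dbo.EMPLOYEE without
-- physically copying data; both tables must sit on the same filegroup.
ALTER TABLE dbo.EMPLOYEE_TEMP
    SWITCH TO dbo.EMPLOYEE PARTITION 24;
```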
Thanks Shiven:) If Answer is Helpful, Please Vote
Hi,
We are planning an upgrade from SQL Server 2005 to SQL Server 2012. We used the SQL Server Upgrade Advisor and it shows no issues with the upgrade. On the application side, the development team is also telling us that the applications they developed will not fail on the new version. Unfortunately, there are some old applications that are in heavy use, but our development team doesn't have their source code and their vendors are no longer around; those applications work very smoothly with SQL Server 2005.
In testing there were no problems, but when we go to production we may face some (it is not certain), and a rollback might be the only option.
To apply this upgrade we must have a rollback plan (whether it ends up being used or not).
Experts, please suggest a quick (the instance has around 18 databases) and safe rollback plan.
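One hedged sketch of a pre-upgrade step (the backup path is an assumption): take COPY_ONLY full backups of every user database before upgrading. Note that backups taken on SQL Server 2012 cannot be restored to 2005, so the rollback itself has to rely on these pre-upgrade backups or on keeping the old instance intact.

```sql
-- Copy-only full backup of each user database (skips system DBs):
DECLARE @name sysname, @sql nvarchar(max);
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE database_id > 4;
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
             + N' TO DISK = N''D:\Backup\' + @name + N'.bak'' WITH COPY_ONLY, INIT;';
    EXEC (@sql);
    FETCH NEXT FROM db_cursor INTO @name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
```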
Best Regards
khalil
We have a C# .NET Windows Forms application with a few hundred users. The application has 3 layers (application, business logic, data access), and the data access layer connects all clients to one SQL Server instance using integrated security. Selects and updates are done with stored procedures. Recently the application was rolled out to users at other locations who have a slower connection to the network and to the production SQL Server instance. For these users, selects and updates are taking significantly longer, making the application slow. What are my options from a SQL Server viewpoint to help improve performance?
Any ideas would be greatly appreciated
Thanks John
Hi,
Is it possible to run SQL Server LocalDB in single-user mode to restore the master database? I have access only to the sqlcmd tool.
I tried the following from a DOS window:
"C:\Program Files\Microsoft SQL Server\110\LocalDB\Binn\sqlservr.exe" -m
sqlcmd -S (localdb)\v11.0
If I run RESTORE DATABASE master in the sqlcmd window, I get a message stating that I am not running in single-user mode. Are my syntax/steps incorrect, or does SQL LocalDB not run in single-user mode?
Thanks
Pare.
Hi gurus,
Is there a query to get an explain plan in SQL Server? How do I get it?
Please help me.
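In SQL Server the equivalent of an explain plan is the execution plan. A minimal sketch (the sample query is arbitrary):

```sql
-- Estimated plan: returns the XML plan without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.objects;
GO
SET SHOWPLAN_XML OFF;
GO
-- Actual plan: executes the query and returns the plan alongside results
SET STATISTICS XML ON;
GO
SELECT name FROM sys.objects;
GO
SET STATISTICS XML OFF;
```

In Management Studio the same plans are available via "Display Estimated Execution Plan" and "Include Actual Execution Plan".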
Thanks,
Venkat
SQL Server is not recognizing all the tempdb files.
We have 8 files, and SQL Server only sees 2 after a restart.
Any help is appreciated.
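A sketch of a first diagnostic step: compare the tempdb files defined in master (which SQL Server recreates at startup) with what tempdb is actually using, and check the error log for file-creation failures at startup.

```sql
-- Files tempdb is defined to have (recreated from this metadata at startup):
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Files tempdb is actually using right now:
SELECT name, physical_name
FROM tempdb.sys.database_files;
```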
David Yard
Hello Forum,
Recently I performed a Reporting Services 2008 migration:
Moved the databases from a 2005 instance to a 2008 R2 instance.
Replaced the application server setup with virtual servers. The application server setup consists of two virtual servers running Windows Server 2008, configured in a network load balancing setup.
The issue we now have is that whenever we try to review a report history, it displays the error:
An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError) For more information about this error, navigate to the report server on the local server machine, or enable remote errors.
We have been getting this error on both sets of application servers, so I'm guessing it is a database issue of some description.
Note that I get this error whenever I try to access any report history. I have run CHECKDB on both RS databases and there are no issues there.
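A hypothetical first check, using table names from the standard ReportServer catalog (verify them against your schema before relying on this): confirm that the history snapshot rows survived the migration.

```sql
-- Most recent history snapshots and the reports they belong to:
SELECT TOP 10 h.SnapshotDate, c.[Path]
FROM ReportServer.dbo.History AS h
JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = h.ReportID
ORDER BY h.SnapshotDate DESC;
```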
Please click "Mark As Answer" if my post helped. Tony C.
Hi,
I am working on a table with 3 million records, and I decided to create an index on the table and also use table partitioning to partition the table by date (year). I created the index and the partitions in a few different ways, but in every case, when I query the table and compare with the old table (without the index), it takes longer to execute and return results.
I created the index in different ways:
1) Create 4 filegroups and partition the table across them:
ALTER DATABASE NewSyslog ADD FILEGROUP FileGroup1
ALTER DATABASE NewSyslog ADD FILEGROUP FileGroup2
ALTER DATABASE NewSyslog ADD FILEGROUP FileGroup3
ALTER DATABASE NewSyslog ADD FILEGROUP FileGroup4
--Create Database File
ALTER DATABASE NewSyslog ADD FILE ( NAME = Logs1, FILENAME = 'E:\Syslog\logs_filegroup1.ndf', SIZE = 1MB ) TO FILEGROUP FileGroup1
GO
ALTER DATABASE NewSyslog ADD FILE ( NAME = Logs2, FILENAME = 'E:\Syslog\logs_filegroup2.ndf', SIZE = 1MB ) TO FILEGROUP FileGroup2
GO
ALTER DATABASE NewSyslog ADD FILE ( NAME = Logs3, FILENAME = 'E:\Syslog\logs_filegroup3.ndf', SIZE = 1MB ) TO FILEGROUP FileGroup3
GO
ALTER DATABASE NewSyslog ADD FILE ( NAME = Logs4, FILENAME = 'E:\Syslog\logs_filegroup4.ndf', SIZE = 1MB ) TO FILEGROUP FileGroup4
Then I created the partition function and scheme:
CREATE PARTITION FUNCTION HitDateRange (datetime)
AS RANGE LEFT FOR VALUES ('1/1/2013', '1/1/2014', '1/1/2015')
GO
CREATE PARTITION SCHEME HitDateRangeScheme
AS PARTITION HitDateRange
TO (FileGroup1, FileGroup2, FileGroup3, FileGroup4)
Then I created my table on the partition scheme:
CREATE TABLE [dbo].[Logs](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Timestamp] [datetime] NOT NULL,
    [SourceIPAddress] [nvarchar](50) NOT NULL,
    [FullUrl] [nvarchar](4000) NOT NULL,
    [Url] [nvarchar](512) NOT NULL,
    [Action] [nvarchar](10) NOT NULL,
    [User] [nvarchar](50) NULL,
    [TTL] [int] NULL,
    CONSTRAINT [PK_Logs] PRIMARY KEY CLUSTERED ([ID] ASC, [Timestamp])
) ON [HitDateRangeScheme] ([Timestamp])
Then I inserted 3 million records into the table and compared query performance between this table and the old table; the query ran a little faster on the old table.
2) So I created the index below:
Create NonClustered Index NI_User_Timestamp On [dbo].[Logs]([User],[Timestamp])
The result was slower than the old table.
3) Then I created an index with included columns:
Create NonClustered Index NI_User_SourceIPAddress_FullUrl_Url_TTL On [dbo].[Logs]([User],[Timestamp]) Include([SourceIPAddress],[FullUrl],[Url],[TTL])
Then I queried the table:
SELECT [Timestamp], [SourceIPAddress], [FullUrl], [Url], [User], [TTL]
FROM dbo.Logs
WHERE [Timestamp] BETWEEN '1/1/2012' AND '12/29/2013'
  AND [User] = 'bill'
I also tested my SELECT query without the WHERE clause ... but it is still a little slower.
4) Then I recreated the table with a partitioned index:
Create NonClustered Index NI_IP_User On [dbo].[Logs]([User]) On [HitDateRangeScheme]([Timestamp])
5) I also tried partitioning the table just on the primary filegroup, meaning I did not create a filegroup for each partition: ALL TO ([PRIMARY]).
6) I also tried not partitioning the table at all, with a clustered index on ID and a nonclustered index on [User],[Timestamp]:
Create NonClustered Index NI_Timestamp_User On [dbo].[Logs]([User],[Timestamp])
7) I also created a nonclustered index just on the [User] column.
After all these steps, my query against the new table is still a little slower.
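One diagnostic sketch that may help before tuning further: check how the rows actually landed across the partitions. If everything fell into one partition, partitioning cannot speed up the query.

```sql
-- Row counts per partition for dbo.Logs (index_id 1 = clustered index):
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.Logs')
  AND p.index_id = 1
ORDER BY p.partition_number;
```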
So, thanks for any help.
Alimardani