Channel: SQL Server Database Engine forum
Viewing all 15930 articles

SQL Server 2012 syspolicy_purge_history fails step 3 "Erase Phantom System Health Records."


I recently installed SQL Server 2012 Enterprise (x64) as a stand-alone default instance.

The syspolicy_purge_history job fails on step three with the following error:

Executed as user: <DomainName>\sql_xxx_agt_svc. A job step received an error at line 1 in a PowerShell script. The corresponding line is 'set-executionpolicy RemoteSigned -scope process -Force'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Security error.  '.  Process Exit Code -1.  The step failed.

Here is the code that is in the step:

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

The domain user shown is the account that SQL Server Agent runs as.

What permissions need to be granted to this account?



Here is the link to the Connect post:

https://connect.microsoft.com/SQLServer/feedback/details/754063/sql-server-2012-syspolicy-purge-history-job-step-3-fails-with-security-error


How to Identify the Main Cause of Poor Performance


I'm trying to identify the cause of the difference in performance between my development machine and a client's server.
Both are running SQL Server 2012 Express Advanced, with the same configuration and copies of the same database.
It is a small database, only about 70MB.

The difference was noted with a poorly performing SELECT query which runs in 2-3 seconds on my dev machine but takes 6-8 seconds on the client's server. The query has since been optimized, so the query itself is no longer an issue, but the performance difference is of concern.

Dev machine is Win 7, 8GB Ram, 10,000 RPM SATA HD, mdf & ldf on same drive.

Production Server is Windows Server 2012 Hyper-V, 16GB Ram, RAID 5, mdf & ldf on same drive.
To eliminate the VM, we installed SQL Server Express on the host and got about a 20% improvement, but still far slower than my dev machine, so we are currently concentrating on the host.
Host is Windows Server 2012 Standard, Intel Xeon CPU E5-2620 0 @ 2.00GHz, 64GB RAM, PERC 710 Hard Drives (7200 RPM), RAID 5, mdf & ldf on same drive.

Running the query with statistics on, results are:
parse and compile CPU time (ms): dev= 1732, host= 4212
parse and compile elapsed time (ms): dev= 2609, host= 4782
Execution CPU Time (ms): dev= 343, host= 795
Execution elapsed Time (ms): dev= 446, host= 1301
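For anyone reproducing this, here is a sketch of how the figures above can be captured on both machines (the query placeholder and the optional plan-cache flush are mine; only flush the cache on a non-production box):

```sql
-- Capture parse/compile and execution times for a side-by-side comparison.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

DBCC FREEPROCCACHE;  -- optional: force a fresh compile; non-production only

-- <the SELECT query under investigation goes here>

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```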

Typically, any server performs better than my dev machine, so I'm concerned about performance once the system grows.

I've found several articles on collecting performance data, but little on how to use it to identify the main problem.

How can I track down the main problem?


Loading large tables into SQL Server memory


Hi, I have the following scenario on SQL Server 2012:

1. The database is 200 GB and physical memory is 32 GB. I have four 50 GB tables. How can I load them into SQL Server memory?

2. Assume min/max server memory have not been specified and are left at the defaults.

3. How would this scenario work on SQL Server 2014 with the In-Memory OLTP engine, and how should it be configured so that queries against these four tables are optimized?
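On point 1, there is no supported way to pin a table in the buffer pool (DBCC PINTABLE has been a no-op for years); a commonly suggested workaround is simply to scan the table so its pages are cached. A sketch, with a placeholder table name:

```sql
-- Warm the buffer pool by scanning the table (dbo.BigTable1 is a placeholder;
-- INDEX(1) assumes a clustered index exists). With 32 GB of RAM, four 50 GB
-- tables cannot all fit: pages are cached and evicted on a least-recently-used
-- basis up to the max server memory limit.
SELECT COUNT_BIG(*) FROM dbo.BigTable1 WITH (INDEX(1));
```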

regards

Ashwan

 

GetDTCAddress - Timeout expired

Our ASP.NET web app uses distributed transactions with SQL Server. Unfortunately, it times out in GetDTCAddress.

We recently upgraded from a clustered SQL Server 2008 instance to SQL Server 2014 (new servers).



TypeName: System.Data.SqlClient.SqlException, System.Data
Message: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
Source: .Net SqlClient Data Provider
StackTrace:    at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
   at System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync()
   at System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket()
   at System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer()
   at System.Data.SqlClient.TdsParserStateObject.TryReadByte(Byte& value)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
   at System.Data.SqlClient.SqlDataReader.get_MetaData()
   at System.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel, Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj, Boolean isDelegateControlRequest)
   at System.Data.SqlClient.TdsParser.GetDTCAddress(Int32 timeout, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlInternalConnectionTds.GetDTCAddress()
   at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
   at System.Data.ProviderBase.DbConnectionPool.PrepareConnection(DbConnection owningObject, DbConnectionInternal obj, Transaction transaction)
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()

How can we troubleshoot?

thanks


How do I maintain all activity logs for a particular database or server?


Hello,

I am using SQL Server 2012.

I want to maintain all types of logs for a particular database or for the whole server. In particular, I want to track every query executed against a particular database, as well as all other activity. How can I do this?
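For reference, a sketch of one built-in approach, SQL Server Audit (all names and the file path below are placeholders; on SQL Server 2012, database-level audit specifications require Enterprise edition, while server-level audits are available in all editions):

```sql
-- Create a server audit writing to a file, then a database audit specification
-- that captures DML and procedure execution in one database.
USE master;
CREATE SERVER AUDIT ActivityAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT ActivityAudit WITH (STATE = ON);
GO
USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION AllDmlActivity
    FOR SERVER AUDIT ActivityAudit
    ADD (SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::MyDatabase BY public)
    WITH (STATE = ON);
```

Alternatives with less overhead control include Extended Events sessions or a server-side trace.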


Prem Shah

When and where to use MAXDOP = 0?


Hi All,

I have a dedicated DW/reporting instance with 24 logical processors.
I've set MAXDOP = 0 to make maximum use of the CPU cores.
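For reference, this is the instance-level setting in question:

```sql
-- Instance-wide max degree of parallelism; 0 means SQL Server may use
-- all available schedulers for a single parallel query.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```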

Let me know your thoughts, inputs, advantages or problems with this setting.

Regards,
Pavan

SQLSTATE 42000, Error 1105 - Full primary filegroup


Hi,

I have a problem with a growing database. I'm on SQL Server 2012 Standard edition, version 11.0.5058.0, 64-bit.

A job adds new rows to a table, but it now fails:

Executed as user: check_usr. Could not allocate space for object 'dbo.log_sql'.'PK_log_sql' in database 'My_monitor' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup. [SQLSTATE 42000] (Error 1105).  The step failed.

PK_log_sql is the primary key of the table. The log_sql table is about 2.2 GB, with 4.8 MB of indexes. Other tables in the database are around 100 MB, and every table in the database has the same problem.

The data file is 4.7 GB with 2.3 GB free, autogrow 100 MB, limit 5 GB; the transaction log is 1 GB, autogrow 100 MB, limit 3 GB. The disk has over 200 GB free. Other databases on the server are over 30 GB in size.

What I have tried:

  • dropping the PK and recreating it
  • backing up the database, then dropping and restoring it
  • rebuilding the index
  • manually increasing the data file and transaction log sizes
  • after manually increasing the sizes, a test insert, then manually shrinking back to 10% free space

DBCC CHECKDB reports no errors.
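Given the numbers above, it's worth comparing each file's used space against its growth cap. A sketch of the check I'd run in the affected database:

```sql
-- File sizes, used/free space, and growth limits for the current database.
-- size and max_size are in 8 KB pages; max_size = -1 means unlimited growth.
-- Note: growth is a percentage, not pages, when is_percent_growth = 1.
USE My_monitor;
SELECT name,
       size / 128                                              AS size_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128      AS used_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS max_size_mb,
       growth / 128                                            AS growth_mb,
       is_percent_growth
FROM sys.database_files;
```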

Thanks for help.

Moving FILESTREAM to a new database


Our development organization has created an application employing FILESTREAM, which during the pilot has been incorporated into a schema within our Data Warehouse staging database. Going to production, the development and BI teams have determined that they want it separated out into its own database, and they'd like to make that separation in the current pilot environment (DEV).

How can I best move (or at least copy) the existing FILESTREAM data from the current database into a new one?


no disk space!


One of the SQL instances suddenly ran out of space on both the data and the log disks. How can I find out what triggered this?

I don't see much in the error logs, and on disk most of the data files are more or less the same size. The instance had plenty of space last week. tempdb is still its original size, so it wasn't tempdb.

Any thoughts on how to track down the cause?
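One place to start: the default trace records file auto-grow events, so a sketch like this may show which files grew, and when (assuming the default trace is still enabled and the events haven't rolled out of its files):

```sql
-- Read data/log file auto-grow events from the default trace.
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow;
-- IntegerData is the growth amount in 8 KB pages.
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT DatabaseName, FileName, StartTime,
       IntegerData * 8 / 1024 AS growth_mb
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass IN (92, 93)
ORDER BY StartTime DESC;
```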

thanks!!

Dropping a Primary Key constraint - 35 minutes and counting...


Okay, so I have a very large, poorly designed table (37.5 million rows) that I have been asked to investigate.  I have a copy of this database table and the first thing I noticed was that the Primary Key was pretty much useless and there were no sensible indexes.  Every query hitting this table ended up table scanning.

So I thought I would try dropping the existing Primary Key constraint and then creating a more natural key that would make data retrieval quicker (hopefully).  I understand that creating a clustered index on this table is going to take a long time as ALL the data will need to be reorganised (I estimate this will take at least 1 hour).  However, just dropping the existing Primary Key constraint is taking forever.

This isn't locking: I can see that the server is doing a lot of disk reading/writing, and the wait type in Activity Monitor is PAGEIOLATCH_EX.

I would have thought that dropping a primary key would not change the data in the table, just delete the associated index. Obviously I am wrong, so what is it doing?

Long Running Transaction in SQL Server 2008 R2


I have a vendor database that has a long running transaction.

It does not matter whether the database is set to SIMPLE recovery or to FULL with frequent log backups.

The transaction will increase the transaction log and consequently, the log will be full with no more space on the volume.

This is SQL Server 2008 R2.

Are there any options besides turning AutoGrowth of the transaction log off?
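Not an answer to the growth itself, but identifying the offending transaction is usually the first step; a sketch (the database name is a placeholder):

```sql
-- Show the oldest active transaction in the vendor database.
DBCC OPENTRAN ('VendorDb');

-- Show why each database's log cannot currently be truncated
-- (e.g. ACTIVE_TRANSACTION, LOG_BACKUP).
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```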

Thanks in advance;

How to prevent a db_owner user from backing up the database


The story:

A vendor took a full backup of his database into a folder that is not backed up by TSM; he is the owner of the database, and he later deleted that backup. The backup was not copy-only, so all subsequent differential and log backups are based on his full backup and cannot be restored.

QUESTION:

To prevent this from happening again, what is the normal practice? Is there any way to stop a db_owner from taking an ad-hoc full backup?

I am thinking of using DENY BACKUP DATABASE, or writing a policy claiming no responsibility if the vendor makes this happen again.
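For what it's worth, a sketch of the DENY approach (the user name is a placeholder). Note that DENY cannot be applied to the dbo user or to sysadmins, and whether it is honored for a db_owner role member is worth testing on your build; removing the vendor from db_owner and granting only the specific permissions he needs is often the more reliable practice:

```sql
USE VendorDb;
-- Deny ad-hoc backups to the vendor's database user.
DENY BACKUP DATABASE TO vendor_user;
DENY BACKUP LOG TO vendor_user;
```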


After upgrading to SQL 2014, unable to evaluate policies against Registered Servers in CMS


I have a Central Management Server set up, and I had no issues evaluating policies against the servers I had registered.

We upgraded from 2012 to 2014, and now I am unable to run/evaluate policies against the registered servers.

I receive an error:


TITLE: Microsoft SQL Server Management Studio
------------------------------

Value cannot be null.
Parameter name: source (System.Core)

------------------------------
BUTTONS:

OK
------------------------------


==================================

Value cannot be null.
Parameter name: source (System.Core)

------------------------------
Program Location:

   at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
   at Microsoft.SqlServer.Management.SqlStudio.Controls.ConnectionPropertiesControl.UpdateConnectionInfo()
   at Microsoft.SqlServer.Management.SqlStudio.Controls.ConnectionPropertiesControl.SetConnectionInfo(ConnectionInfoBase connectionInfo)
   at Microsoft.SqlServer.Management.TaskForms.TaskFormControl2005UI.Initialize(TaskFormManager taskFormManager)
   at Microsoft.SqlServer.Management.TaskForms.TaskFormManager.get_Control()
   at Microsoft.SqlServer.Management.TaskForms.TaskFormDialogHost.Microsoft.SqlServer.Management.TaskForms.ITaskFormManagerHost.Initialize(ITaskFormManager control)
   at Microsoft.SqlServer.Management.ActionHandlers.ShowTaskUIDialogActionHandler.RunTaskForm(IContext context)
   at Microsoft.SqlServer.Management.ActionHandlers.DialogBasedActionHandler.RunTaskFormThread(Object contextObject)

I have since attempted to repair the installation, and tried to reinstall/repair just the Management Studio MSI.

Someone on another forum suggested applying SP1; we already have SP1, so I applied the latest hotfix instead.

Horizontal partitioning that automatically creates partitions for new data


Hi Experts,

I need to set up table partitioning that automatically generates a new partition for new data.

For example, I have already created:

CREATE PARTITION FUNCTION FullOrderDateKeyRangePFN(DATETIME) AS RANGE LEFT FOR VALUES ( '20011231 23:59:59.997', '20021231 23:59:59.997', '20031231 23:59:59.997', '20041231 23:59:59.997' );

What will happen when I add a new row with the value 20151104 23:00:00? Where will it reside?
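To address the example: with RANGE LEFT, any value above the last boundary ('20041231 23:59:59.997') lands in the open-ended rightmost partition. SQL Server does not create partitions automatically; new boundaries are added with SPLIT, typically from a scheduled job. A sketch (the partition scheme name is a placeholder, and a NEXT USED filegroup must be designated before each split):

```sql
-- Designate the filegroup that will receive the new partition, then split
-- the rightmost partition at the new yearly boundary.
ALTER PARTITION SCHEME FullOrderDateKeyRangePScheme NEXT USED [PRIMARY];

ALTER PARTITION FUNCTION FullOrderDateKeyRangePFN()
    SPLIT RANGE ('20151231 23:59:59.997');
```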

regards

ashwan


Database Engine Tuning Advisor fails to connect to IPC port

In Management Studio I highlight my query and click 'Analyze Query in Database Engine Tuning Advisor', but I get the following error message: "Failed to connect to an IPC Port: The system cannot find the file specified".

If I reboot my computer it works once, then I get the same error the second time.

I'm running Developer edition with Service Pack 2.

Any idea?



Thanks.

SQL Server 2012 - 11.0.5569 - Backup Process Failed with Msgs 845, Time-out while waiting for Buffer Latch type 3


Hi All,

I need suggestions regarding one database that cannot be backed up. Every time a backup is run against this particular database, it fails with error 845. The buffer latch error always occurs at the same location (1:24956), so I suspect something is wrong with that particular page.

Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
Msg 845, Level 17, State 1, Line 1
Time-out occurred while waiting for buffer latch type 3 for page (1:24956), database ID 8.

What I have done:

- Ran:

dbcc checkdb('db_name') with no_infomsgs, all_errormsgs

Result: failed with the same error message while creating the database snapshot.

Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
Msg 5030, Level 16, State 12, Line 1
The database could not be exclusively locked to perform the operation.
Msg 7926, Level 16, State 1, Line 1
Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.
Msg 845, Level 17, State 1, Line 1
Time-out occurred while waiting for buffer latch type 3 for page (1:24956), database ID 8.

- Run:

DBCC TRACEON (3604);
dbcc page (8, 1, 24956, 0);
DBCC TRACEOFF (3604);
GO

Result:

Msg 845, Level 17, State 1, Line 2
Time-out occurred while waiting for buffer latch type 2 for page (1:24956), database ID 8.

- Ran a disk scan; no errors found.

The last backup of this database is from 6/19/2015; apparently nobody was monitoring it until I arrived last week. I'm thinking of putting the database into single-user mode and running CHECKDB again. I'd appreciate any input on this matter. Thank you.

ALTER DATABASE WITH ROLLBACK IMMEDIATE statement doesn't work


Primary platform: SQL Server 2012 on Windows 7 Ultimate Pro.

I'm running the aforementioned statement from a session connected to master and I get the message below. I am stuck; I thought ROLLBACK IMMEDIATE would roll back and disconnect any already-open sessions.

Msg 5064, Level 16, State 1, Line 1
Changes to the state or options of database 'GFSYSTEM' cannot be made at this time. The database is in single-user mode, and a user is currently connected to it.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
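A note on the error: ROLLBACK IMMEDIATE only helps when the ALTER can take the lock in the first place; here the database is already in single-user mode and another session occupies the single slot, so the ALTER cannot start. A sketch of finding (and, if appropriate, killing) that session — the session id 53 below is a placeholder:

```sql
-- Find the session holding the single-user connection to GFSYSTEM.
SELECT session_id, login_name, program_name
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('GFSYSTEM');

-- KILL 53;  -- placeholder session id; rerun the ALTER DATABASE immediately after
```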

Negative run_duration in msdb.dbo.sysjobhistory


What would cause a negative run_duration?

select run_status, run_date, run_time, run_duration
from msdb.dbo.sysjobhistory h
where run_duration < 0
run_status    run_date    run_time    run_duration
1    20140521    31121    -954439187

All columns if interested:

select *
from msdb.dbo.sysjobhistory h
where run_duration < 0

instance_id: 2093981
job_id: 25A7AE74-1832-420A-86D5-60F70299789A
step_id: 5
step_name: NOKRWW_PS_HSDPAW_MNC1_RAW
sql_message_id: 0
sql_severity: 0
message: Executed as user: NT SERVICE\SQLAgent$NQR. The step succeeded.
run_status: 1
run_date: 20140521
run_time: 31121
run_duration: -954439187
operator_id_emailed: 0
operator_id_netsent: 0
operator_id_paged: 0
retries_attempted: 0
server: NQRDATABASE\NQR
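For reference, run_duration is an int packed as HHMMSS, so a value like -954439187 cannot be decoded at all and points at corrupt or overflowed history data. A sketch of decoding the sane rows:

```sql
-- Decode run_duration (HHMMSS packed into an int) into total seconds.
SELECT run_status, run_date, run_time, run_duration,
       (run_duration / 10000) * 3600
     + ((run_duration / 100) % 100) * 60
     + (run_duration % 100)          AS duration_seconds
FROM msdb.dbo.sysjobhistory
WHERE run_duration >= 0;  -- negative values are not decodable
```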

How to move the log of a simple-recovery-model DB without actually copying the .ldf?


Hi guys,

I have a bunch of data marts running in the simple recovery model. I need to point the databases at a new drive for their log files.

Since these are simple recovery mode the log files are logically empty.

How can I avoid copying a 10gb .ldf that doesn't have any info in it?

I've tried detach + attach (with the log path pointing to the new location). I was hoping this would recreate an empty log at the new location, but it throws OS error 2 (file not found).

I also tried taking the database offline and running ALTER DATABASE ... MODIFY FILE with the path pointing to the new location, but again it doesn't work when I bring the database back online (a similar file-not-available error, and the log is recreated on the old drive).

Do I really need to waste time copying tens of gigabytes of empty log for this to work?
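For what it's worth, the usual sketch (logical and physical names below are placeholders): since the log is logically empty, shrink it to near-nothing first so the copy is trivial, repoint with MODIFY FILE, copy the now-tiny file, then bring the database online — SQL Server expects the file to already exist at the new path:

```sql
USE MyMart;
DBCC SHRINKFILE (MyMart_log, 1);  -- shrink the logically empty log to ~1 MB
GO
USE master;
ALTER DATABASE MyMart MODIFY FILE
    (NAME = MyMart_log, FILENAME = 'E:\SQLLogs\MyMart_log.ldf');
ALTER DATABASE MyMart SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- copy the small .ldf from the old path to E:\SQLLogs\ , then:
ALTER DATABASE MyMart SET ONLINE;
```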

Cheers

(this is sql 2014 enterprise)


Jakub @ Melbourne, Australia Blog

In-Memory OLTP use with existing tables, indexes, and procedures


Hi,

1. I need to make use of the In-Memory OLTP engine for my pre-existing stored procedures, tables, and indexes. Do I need any code changes in the application, and how do I store the tables and indexes in memory? Assume the tables may have a primary key index as well.

2. If a table has one primary key index, two foreign key constraints, and three nonclustered indexes, which of these can be loaded into the in-memory engine, and how?

3. In-Memory OLTP is lock-free, yet locks normally arise in an RDBMS context. How does this work without locks?
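On point 2, a sketch of what a migrated table could look like on SQL Server 2014 (names are placeholders). Existing disk-based tables cannot be converted in place; a new memory-optimized table must be created and the data copied across. Note that SQL Server 2014 does not support foreign key constraints on memory-optimized tables, so those would have to be dropped or enforced in application logic:

```sql
-- A memory-optimized replacement table; the database needs a
-- MEMORY_OPTIMIZED_DATA filegroup before this will run.
CREATE TABLE dbo.Orders_InMem
(
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId INT       NOT NULL INDEX ix_CustomerId NONCLUSTERED,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```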

regards

ashwan


