Do I need to have SQL 2008 on both ends in order for this feature to work?
On a Windows 2008 R2 server running SQL Server 2012, I started getting these recently:
SQL Server Scheduled Job 'Intraday Backup 1500.Subplan_1' (0x65C01B2C6FA5124493CA18E3665B09AA) - Status: Failed - Invoked on: 2012-09-10 16:18:58 - Message: The job failed. Unable to determine if the owner (<DOMAIN>\administrator) of job Intraday
Backup 1500.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user '<DOMAIN>\administrator', error code 0x6e. [SQLSTATE 42000] (Error 15404)).
After some research it seemed that I should change the owner of the job to something else, so I made the owner SQLServerAgent; that seemed to be the only thing that worked.
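For reference, this is roughly how I changed the owner (a sketch; sp_update_job is the documented way to change an Agent job's owner, and the owner login shown is the Agent service account that appears in the error below):
USE msdb;
GO
-- Change the owner of the Agent job created by the maintenance plan.
EXEC dbo.sp_update_job
    @job_name = N'Intraday Backup 1500.Subplan_1',
    @owner_login_name = N'NT SERVICE\SQLSERVERAGENT';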
Now I'm getting these:
The application-specific permission settings do not grant Local Activation permission for the COM Server application with CLSID
{FDC3723D-1588-4BA3-92D4-42C430735D7D}
and APPID
{83B33982-693D-4824-B42E-7196AE61BB05}
to the user NT SERVICE\SQLSERVERAGENT SID (S-1-5-80-355555556-2666666661-2777777773-2888888883-1999999990) from address LocalHost (Using LRPC). This security permission can be modified using the Component Services administrative tool.
I did my usual digging around and couldn't find much that was useful, apart from notes about locating the AppID mentioned in the registry and going into Component Services to fix the permissions. The AppID translates to MS DTS Server, but nothing like that is listed in DCOM Config.
I've done a fair bit of searching and reading and got nowhere with this; maybe I'm missing the point. The job does run now; I'm just averse to errors in the Event Log. They wouldn't be there if it didn't matter, would they?
Hi,
How do I change the default SET options of the .NET SqlClient Data Provider?
.NET sets its own SET options on each connection, which effectively means server-side settings such as 'user options' are useless, because the provider overrides them.
(A recommendation to run SQL that sets the SET options explicitly isn't the answer I'm looking for.)
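To illustrate what I mean, this is what I've been using to check which options are actually in effect on a SqlClient connection (purely a diagnostic, not the fix I'm after):
-- Run on the application's connection (e.g. via a test query issued by the app)
-- to list the SET options the provider has actually put in effect.
DBCC USEROPTIONS;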
Thank you
Hi
We have a problem with a database that has some hidden data somewhere. We cannot shrink the files or the database any more, but when we query table and index sizes we do not come anywhere near the database size.
I found that when I query sysindexes
select *, dpages * 8 AS [size in KB]
from sysindexes
where indid <= 1
order by dpages * 8 desc, 1 desc
I have one row that belongs to sys.sysrscols with 50 million rows and about 23 GB of data.
When I query the sys.sysrscols table
SELECT
*
FROM sys.sysrscols rs
I can only see exactly 4,000 rows. Since it's exactly 4,000 rows, it makes me think that sys.sysrscols is a view.
However, I have grouped the results from those 4,000 rows and found the tables that appear most frequently. I have tried rebuilding all of their indexes and statistics.
I have also tried rebuilding all stats and indexes on sys.sysrscols.
I have also run DBCC CHECKDB and DBCC CHECKTABLE('sys.sysrscols'), and I have tried them with REPAIR_REBUILD as a parameter.
None of the CHECKDB/CHECKTABLE runs seems to indicate any problems.
Nothing seems to help. I have tried restarting the SQL services and also the server. I have also tried taking a couple of backups to see if that might help.
Any other ideas on how to be able to reclaim the space?
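In case it helps, this is roughly how I've been trying to account for the space (a sketch that sums reserved pages per object from sys.dm_db_partition_stats, LOB allocations included, to compare against the file sizes):
-- Reserved space per object from the allocation metadata, including
-- LOB allocations and system/internal tables.
SELECT  OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
        OBJECT_NAME(ps.object_id)        AS object_name,
        SUM(ps.reserved_page_count) * 8 / 1024.0     AS reserved_mb,
        SUM(ps.lob_reserved_page_count) * 8 / 1024.0 AS lob_reserved_mb
FROM    sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;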
Shelley
Our scenario is that we have the above SQL version running on Windows Server 2012 R2 in a Hyper-V failover cluster.
We've been getting issues with slow-running queries and timeouts. I had 24 GB of RAM assigned across 6 instances. Two of the instances had grabbed the majority, with about 9 GB each. Cluster Manager and Hyper-V Manager reported most of the total memory (24 GB) as assigned; Task Manager and Resource Monitor said the same.
There are no max memory settings applied apart from the defaults on the instances, so how is this memory being allocated? Certainly when we had issues on one instance, there didn't seem to be a lot going on on the others; I was expecting memory to be released for use elsewhere.
I'm not really sure of the best way of configuring this. For now I've thrown another 8 GB at the server. It just seems strange that certain instances grab a lot of memory and are then reluctant to release it.
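I'm guessing the usual way to stop one instance starving the others is to cap each one explicitly, something like this run on each instance (the 4096 MB figure is only an example, not a recommendation):
-- Cap this instance's memory target explicitly (value is an example).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 4096;
RECONFIGURE;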
Ian
Hi,
I have a SQL 2008 R2 server running SP2. I changed the start time of a maintenance job under a maintenance plan to hour A; a few weeks later I changed it back to its original time (hour B). The maintenance job still runs at the changed start time (hour A). I even completely deleted the job, and I no longer see it under the maintenance plans or the Agent jobs. But somehow SQL still tries to start the job at the changed time (hour A) and sends out error messages saying the job failed.
I have also restarted the entire server. Same issue.
The job is nothing special, just a DB index optimization script that ran fine within the maintenance plan before I changed the time. Actually, I have now set it up as a plain SQL Agent job rather than a maintenance plan job, and it runs fine.
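In case it's relevant, this is how I've been checking msdb for leftover jobs and schedules that might still be firing (a sketch; the LIKE filter is just a guess at the old job's name):
-- List Agent jobs with their attached schedules to spot anything
-- left behind by the deleted maintenance plan.
SELECT  j.name AS job_name,
        s.name AS schedule_name,
        s.enabled,
        s.active_start_time
FROM    msdb.dbo.sysjobs         AS j
JOIN    msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN    msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE   j.name LIKE N'%index%';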
Any advice?
Thanks in advance.
Hi Solution Providers,
My SQL Server 2012 (prod server) has restarted 3 times in a week. How should I find the reason behind the SQL Server restarts?
I didn't find much information in the SQL error logs or the Windows Application log.
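So far I've only browsed the logs by hand; I assume something like this is the way to dig a bit deeper (a sketch; sp_readerrorlog is undocumented but widely used):
-- When did the instance last start?
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- Search the current and previous SQL error logs for shutdown messages
-- (parameters: log number, log type 1 = SQL error log, search string).
EXEC sp_readerrorlog 0, 1, N'shutdown';
EXEC sp_readerrorlog 1, 1, N'shutdown';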
How can I avoid this in future?
We have a SQL Server 2014 DB installed on a VM running 2012 R2 Standard, with Datacenter 2012 R2 as the host. We have a Scale-Out File Server (SOFS) running 2012 R2 on two clustered nodes (CSV). The entire network is 10 GbE. SQL Server is configured to use the SOFS cluster for its database files (via an SMB3 network share).
When restoring a backup to the server as a new database, the process takes about an hour (it's a big DB, 210 GB). When restoring the backup over the top of the existing database (nearly identical file/DB sizes), the restore takes 10 minutes. I have read about "Instant File Initialization", but that appears to apply only to directly attached disks, where the OS can block-allocate the data using the "Perform Volume Maintenance Tasks" permission.
The network traffic during the initialization phase floats around 700 Mb/s, yet when the actual restore counter starts (reading the backup file and writing the data) the network activity steps up to 6.5 Gb/s, sometimes higher; the countdown takes about 10 minutes either way. So I'm quite sure that the extra 50 minutes is attributable to the initialization phase, which seems horribly excessive.
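For what it's worth, this is how I've been trying to confirm that the time really goes on zero-initialization (a sketch; my understanding is that trace flags 3004 and 3605 write the file-zeroing activity to the error log):
-- Make file zero-initialization visible in the SQL Server error log,
-- run the restore, then check the log for the zeroing messages.
DBCC TRACEON (3004, 3605, -1);

-- ... run the RESTORE here ...

DBCC TRACEOFF (3004, 3605, -1);
EXEC sp_readerrorlog 0, 1, N'Zeroing';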
Has anyone had similar experience using this kind of configuration?
Hi,
I am trying to attach the AdventureWorks DB in SSMS and have not been able to. I have searched the forum; the closest answer I saw was to put the path in double quotes, but that does not work, because as soon as I press Enter it automatically appends some path to the beginning of the path I type, thereby shifting the double quote into the middle of the path, and I still get this error:
C:\Users\Public\Downloads\AdventureWorksDataFiles.zip" Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or
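For context, I believe the T-SQL equivalent of what I'm attempting looks something like this (the file path is a placeholder, and I gather the downloaded .zip has to be extracted to the actual .mdf first):
-- A sketch only: attach the extracted data file (path and file name are placeholders).
CREATE DATABASE AdventureWorks
ON (FILENAME = N'C:\Users\Public\Downloads\AdventureWorks_Data.mdf')
FOR ATTACH;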
Hi,
What are indirect checkpoints, introduced in SQL 2012? What are they used for, and how are they different from the old traditional checkpoints that were used to flush dirty pages to disk?
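From what I've read so far, indirect checkpoints are enabled per database by setting a target recovery time, something like this (60 seconds is just an example value, and MyDatabase is a placeholder):
-- A non-zero TARGET_RECOVERY_TIME switches the database to indirect checkpoints;
-- 0 keeps the traditional automatic checkpoint behaviour.
ALTER DATABASE [MyDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;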
Thank you, and I appreciate your help.
Regards,
Sam
Hi
Is it safe to run DBCC DBINFO('DBNAME') WITH TABLERESULTS on a production database? I read somewhere that it is not advised to run this command on a prod DB.
Regards
Good morning,
This is probably me missing something trivial, or having a bit of a maths fail, but I'm hoping someone can help with a problem I'm having.
I'm trying to figure out how much data is stored within each partition within a database (SQL 2005).
The table structure is this:
[DocumentID] [uniqueidentifier] NOT NULL,
[ImageFileType] [uniqueidentifier] NULL,
[ImageFile] [image] NULL,
[expired_date_at_archival] [datetime] NULL,
[archival_date] [datetime] NULL,
[CreateDate] [datetime] NULL
With a primary key on the DocumentID column.
I've tried working through the link here: http://technet.microsoft.com/en-us/library/aa933068(v=sql.80).aspx (which I grant is for SQL 2000, not 2005) and tried to work out the size of the rows; however, my problem (I think) comes with the IMAGE data type, which I believe is handled differently.
What I've tried doing is to add up the size of each column in bytes, add the average DATALENGTH of the image column, and then multiply by the number of rows in each individual partition. This puts me around 60 GB out (in total).
Number of rows: 2,814,036

Column                       | Size (bytes)
[DocumentID]                 | 16
[ImageFileType]              | 16
[ImageFile] (avg DATALENGTH) | 184,716
[expired_date_at_archival]   | 8
[archival_date]              | 8
[CreateDate]                 | 8
Avg row size                 | 184,772
Null bitmap                  | 4
Row size                     | 184,780

Total size (row size * rows): 519,977.57 MB, i.e. roughly 520 GB
The actual size of my table is 560 GB or so (with the index and unused space being tiny, about 1 GB in total).
I expect my assumption about the image size is where I'm COMPLETELY wrong, but I don't understand WHY I'm wrong. Can anybody help or point me in a better direction?
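In case it's useful, this is how I've since been trying to measure the actual allocation per partition, split into in-row, LOB and row-overflow pages (a sketch; dbo.MyImageTable is a placeholder for the real table name):
-- Actual allocated pages per partition, split by storage type.
-- image data lives on LOB pages, which are allocated in whole 8 KB pages,
-- so the LOB figure can be noticeably larger than SUM(DATALENGTH(...)).
SELECT  ps.partition_number,
        ps.row_count,
        ps.in_row_used_page_count       * 8 / 1024.0 AS in_row_mb,
        ps.lob_used_page_count          * 8 / 1024.0 AS lob_mb,
        ps.row_overflow_used_page_count * 8 / 1024.0 AS row_overflow_mb,
        ps.reserved_page_count          * 8 / 1024.0 AS reserved_mb
FROM    sys.dm_db_partition_stats AS ps
WHERE   ps.object_id = OBJECT_ID(N'dbo.MyImageTable')
ORDER BY ps.partition_number;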
Regards,
Andy
Hi,
I have a query I'm running against a SQL 2008 R2 database which is supposed to tell me whether I need to reorganize or rebuild non-clustered indexes. For testing purposes I'm specifically naming an index; however, my result set returns two rows, each with a different fragmentation percentage. I'm pretty sure the query is picking up the fragmentation of the PK even though it has a different name.
Any ideas as to what is wrong with my SQL code and why it's returning two rows when it should return one?
SELECT  I.[name],
        SC.[name],
        T.[name],
        CASE
            WHEN ST.avg_fragmentation_in_percent BETWEEN 5 AND 30 THEN 'REORGANIZE'
            WHEN ST.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
        END AS [FragPerc],
        ST.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ST
        INNER JOIN sys.[indexes] AS I ON I.object_id = ST.object_id
        INNER JOIN sys.tables T ON T.object_id = I.object_id
        INNER JOIN sys.[schemas] AS SC ON SC.schema_id = T.schema_id
WHERE   I.name = 'IX_InternalAdjEndorsement_PolicyID_TransactionID' AND
        --ST.avg_fragmentation_in_percent > 5 AND
        I.type_desc = 'NONCLUSTERED';
I'm attempting to track down a blocking problem, and the input buffer changes rapidly, so a SQL blocked process Profiler trace isn't working. I've devised a query that gets fired off via SQL Agent when the blocked process alert value is greater than zero, and it captures all the SPIDs at the time the block occurs, along with their execution plans and SQL statements. I'm including data from sys.dm_tran_session_transactions, which is described here:
http://msdn.microsoft.com/en-us/library/ms188739.aspx
What I'm interested in knowing is what the value of enlist_count means. The documentation describes it as follows:
"Number of active requests in the session working on the transaction."
So, if the enlist_count value is 0 and the open transaction count is 1, does that mean the query is in a sleeping state?
In contrast, if the enlist_count is 1 and the open transaction count is 1, does that mean the query is in a running state?
If the enlist_count is > 1 and the open transaction count is >= 1, does that mean I have a transaction whose SPID is parallelizing?
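For context, the part of the capture query that uses this DMV joins roughly like this (a simplified sketch, not the full query, which also pulls the plan and SQL text):
-- Snapshot each session's transaction enlistment alongside its current state.
SELECT  st.session_id,
        st.transaction_id,
        st.enlist_count,
        es.status AS session_status,
        er.status AS request_status,
        er.blocking_session_id
FROM    sys.dm_tran_session_transactions AS st
        JOIN sys.dm_exec_sessions      AS es ON es.session_id = st.session_id
        LEFT JOIN sys.dm_exec_requests AS er ON er.session_id = st.session_id
ORDER BY st.session_id;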
Thanks
- Bob
Hi,
I want to create a role and grant execute permission on all stored procedures within a database in SQL 2000. For this I had been using the following script on 2005 and above:
/* CREATE A NEW ROLE */
CREATE ROLE db_executor
/* GRANT EXECUTE TO THE ROLE */
GRANT EXECUTE TO db_executor
Unfortunately this doesn't work on SQL 2000. I'd really appreciate it if somebody could help me out.
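I'm guessing that on SQL 2000 I'd need something along these lines instead, but I'm not sure it's right (a sketch using sp_addrole and a cursor over sysobjects):
/* CREATE THE ROLE (SQL 2000 SYNTAX) */
EXEC sp_addrole 'db_executor';

/* GRANT EXECUTE ON EACH USER STORED PROCEDURE TO THE ROLE */
DECLARE @proc sysname, @sql nvarchar(4000);
DECLARE proc_cursor CURSOR FOR
    SELECT name
    FROM sysobjects
    WHERE type = 'P'
      AND OBJECTPROPERTY(id, 'IsMSShipped') = 0;
OPEN proc_cursor;
FETCH NEXT FROM proc_cursor INTO @proc;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'GRANT EXECUTE ON [' + @proc + N'] TO db_executor';
    EXEC (@sql);
    FETCH NEXT FROM proc_cursor INTO @proc;
END
CLOSE proc_cursor;
DEALLOCATE proc_cursor;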
Thanks in advance
Hi,
I am planning to apply Service Pack 3 for SQL 2008 R2 and Service Pack 4 for SQL Server 2008. This is my first time, and I am applying it to the QA and DEV environments first. I have one point of confusion. In a cluster, once you fail the SQL resources over to the active node, all of the SQL services, including SQL Server and Agent, are automatically stopped on the passive node where we apply the service pack. But on a standalone server, the services are not automatically stopped. Do I need to manually stop services such as SQL Server, Agent, Browser and any others before I start applying the service pack?
An early response would be highly appreciated.
Thanks In Advance