We have an enterprise application that uses Microsoft SQL Server as the database back end. We have hit quite a few cases where customers have grown the application into a huge database, and some of the queries being run cause locking and performance problems for themselves and for other users.
We have applied as many indexes as we can and tuned all the queries, but we have one application that has to fit many different customer types, so it is hard to create one solution that suits all. We do not have the resources to apply customer-specific indexing/performance work for each customer.
We know the main queries causing the problems are the ones generated to drive reports and KPIs.
My question is: is there a way to spread the application's load so that day-to-day use is not affected by report/KPI generation? That is, could we somehow mirror/replicate the database so that everyday operations are sent to SQL entity A and the data-intensive queries are sent to SQL entity B? The data-intensive queries would then have no impact on day-to-day transactions, and we could queue the queries going to SQL entity B.
In this scenario SQL entities A and B would need to stay in sync at all times, but SQL entity B would always be read-only.
Can anyone suggest any avenues we could explore to achieve this? Or is there another approach I should consider to get us a performance win?
Thanks
Posted on 2013-10-09 17:20:33
It looks like you could pick any of the replication options and be fine. At a previous job we used log shipping (http://technet.microsoft.com/en-us/library/ms187103.aspx) for exactly this purpose.
You can also look at the types of replication (http://technet.microsoft.com/en-us/library/bb677158.aspx) and see which type suits you best, since you can do more than just reporting on the secondary database.
If I remember my first experience correctly, log shipping is very easy to set up, so you may want to start there.
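At its core, log shipping is just scheduled transaction log backups restored on a secondary server. A minimal sketch of the mechanism (database name and paths are invented); the WITH STANDBY option is what keeps the secondary readable for reports between restores:

```sql
-- On the primary: back up the transaction log to a shared folder.
BACKUP LOG SalesDb
TO DISK = N'\\fileserver\logship\SalesDb_1020.trn';

-- On the secondary: restore the log, keeping the database readable
-- between restores (WITH STANDBY) so reports can run against it.
RESTORE LOG SalesDb
FROM DISK = N'\\fileserver\logship\SalesDb_1020.trn'
WITH STANDBY = N'D:\logship\SalesDb_undo.dat';
```

The log shipping wizard (or the sp_add_log_shipping_* procedures) simply schedules these two steps plus the file copy for you.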
Posted on 2013-10-10 06:32:05
Ah... performance tuning SQL Server et al. My favorite thing!
"Can anyone suggest any avenues we could explore to achieve this?"
Based on the information you have provided, I would vertically partition the data. That means keeping one database (server A) for the actual OLTP (CRUD transactions) and one (server B) for the KPIs.
For replication I would use transactional replication: when done properly, latency will be under 1 second. I cannot think of a realistic situation where that would not be good enough. In fact, most reporting is done as of the end of the previous day, and "real time" usually means the last 5 minutes.
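You can verify that latency claim yourself with tracer tokens; the RMO helpers below wrap the same mechanism. A T-SQL sketch, run on the publication database (the publication name is invented):

```sql
-- Write a tracer token into the publication's log stream.
EXEC sys.sp_posttracertoken @publication = N'SalesPub';

-- List the tokens posted so far (gives you the tracer id)...
EXEC sys.sp_helptracertokens @publication = N'SalesPub';

-- ...then read the publisher->distributor->subscriber latency for one.
EXEC sys.sp_helptracertokenhistory
    @publication = N'SalesPub',
    @tracer_id   = 1;  -- id returned by the previous call
```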
To manage the replication process I would start with a simple console application, expecting to extend it in due course as needs grow. The console application should use the following namespaces (on second thought, there may be later ones available for SQL 2012):
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Replication;
With the console application you can manage publications, subscriptions and any tracer tokens from a single interface. It will be a pain to configure (all those permissions, passwords and paths), but once it is up and running you will be able to optimize the transactional database for data and the reporting server for... reporting.
I would have a replication topology with, for the large tables, effectively one subscription per table, and a single subscription for the rest (lookup tables, views, stored procedures). I would replicate primary keys but not constraints, table references or triggers (relying on the integrity of the source database). You should not replicate indexes either: indexes can be configured/tuned for the reporting server by hand.
You can also select only the articles the KPIs need (no need to replicate text, varchar(max) etc. columns).
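The same vertical filtering can be done in plain T-SQL: add the article, then drop the wide columns the KPIs never read. A sketch with invented names:

```sql
-- Publish dbo.Orders as an article of the publication.
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_object = N'Orders',
    @source_owner  = N'dbo';

-- Exclude a wide column from replication (vertical filter).
EXEC sp_articlecolumn
    @publication = N'SalesPub',
    @article     = N'Orders',
    @column      = N'Notes',   -- hypothetical varchar(max) column
    @operation   = N'drop';
```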
Some helper functions are posted below to get you going.
"Or is there another approach I should consider to get us a performance win?"
In my humble experience there is always something that can be done to improve performance. It comes down to time -> cost -> benefit. Sometimes a small compromise on functionality will buy you a lot of performance.
The devil is in the detail, but with that caveat...
Further random thoughts
You have identified one of your infrastructure problems: mixing OLTP and BI/reporting. I am not clear on your experience or how bad your performance problems are; replication is definitely the right way to go, but if you are in "firefighting" mode there are things you can try first.
Are the KPIs expensive because of their ORDER BY clauses? Make sure the clustered indexes are correct (REM: they do not need to be on the primary key). Some more information about your configuration would also be useful: when you say huge, how big? How big/what type of disks, what is the RAM spec, and so on. The reason I ask is this: you could spend the next 40 man-days (at $500+ a day?) tuning, and that money would buy you a lot of hardware! More RAM, more disks, faster disks, SSDs for tempdb or index partitions. In other words... you may be asking too much of the hardware (and your boss may be asking too much of you).
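On the clustered-index point: if the KPI queries scan and sort by date, clustering on the date column (and keeping the identity key as a nonclustered primary key) can eliminate the expensive sorts. A sketch with invented names:

```sql
-- Hypothetical fact table: the PK stays on the identity column but
-- is NONCLUSTERED, freeing the clustered index for the KPI sort key.
CREATE TABLE dbo.OrderFact (
    OrderId   int IDENTITY NOT NULL,
    OrderDate datetime     NOT NULL,
    Amount    decimal(18,2) NOT NULL,
    CONSTRAINT PK_OrderFact PRIMARY KEY NONCLUSTERED (OrderId)
);

-- Cluster on the column the reports range-scan and ORDER BY.
CREATE CLUSTERED INDEX CIX_OrderFact_OrderDate
    ON dbo.OrderFact (OrderDate);
```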
Next, you describe an enterprise application: does that mean an Enterprise SQL Server licence? If so, you are in luck: you can create schema-bound partitioned views and delegate queries to the "correct" server. There are some problems with this model, namely joins, but it does give you a valid alternative option.
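A sketch of the partitioned-view idea (all server, database and table names invented): each member table carries a CHECK constraint on the partitioning column, so the optimizer only touches the member table, and with linked servers only the server, that a query actually needs.

```sql
-- Member tables live on different servers; non-overlapping CHECK
-- constraints on OrderDate let the optimizer prune whole members.
CREATE VIEW dbo.Orders_All
AS
SELECT * FROM ServerA.Sales.dbo.Orders_Current   -- OrderDate >= '2013-01-01'
UNION ALL
SELECT * FROM ServerB.Sales.dbo.Orders_Archive;  -- OrderDate <  '2013-01-01'
```

Queries against dbo.Orders_All that filter on OrderDate are routed to a single member; cross-range joins are where this model starts to hurt.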
The replication code
I knew I had put it somewhere. Find below some RMO helper functions that you may find useful when getting started with replication. At some point in the past it was live code, but probably longer ago than I care to admit - please treat it as pseudocode.
(PS you are welcome to contact me directly if you wish)
public static class RMOHelper
{
public static void PreparePublicationDb(MyServer Src, MyServer Dist)
{
ReplicationDatabase publicationDb = new ReplicationDatabase(Src.Database, Src.ServerConnection);
if (publicationDb.LoadProperties())
{
if (!publicationDb.EnabledTransPublishing)
{
publicationDb.EnabledTransPublishing = true;
}
// If the Log Reader Agent does not exist, create it.
if (!publicationDb.LogReaderAgentExists)
{
// Specify the Windows account under which the agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publicationDb.LogReaderAgentProcessSecurity.Login = Dist.WinUId;
publicationDb.LogReaderAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set authentication mode for the Publisher connection
// to the default value of Windows Authentication.
publicationDb.LogReaderAgentPublisherSecurity.WindowsAuthentication = true;
// Create the Log Reader Agent job.
publicationDb.CreateLogReaderAgent();
DeleteJobAgentSchedule(publicationDb.LogReaderAgentName);
}
}
else
{
throw new ApplicationException(String.Format(
"The {0} database does not exist at {1}.",
publicationDb,
Src.ServerName));
}
}
public static TransPublication PrepareTransPublication(MyServer Src, MyServer Dist, string publicationName)
{
// Set the required properties for the transactional publication.
TransPublication publication = new TransPublication();
publication.ConnectionContext = Src.ServerConnection;
publication.Name = publicationName;
publication.DatabaseName = Src.Database;
// Specify a transactional publication (the default).
publication.Type = PublicationType.Transactional;
publication.ConflictRetention = 4;
publication.RetentionPeriod = 72;
// Activate the publication so that we can add subscriptions.
publication.Status = State.Active;
// Enable push and pull subscriptions and independent Distribution Agents.
publication.Attributes = PublicationAttributes.AllowPull|PublicationAttributes.AllowPush|PublicationAttributes.IndependentAgent;
//publication.Attributes &= PublicationAttributes.AllowSyncToAlternate;
// Specify the Windows account under which the Snapshot Agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publication.SnapshotGenerationAgentProcessSecurity.Login = Dist.WinUId;
publication.SnapshotGenerationAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set the security mode for the Publisher connection
// Windows Authentication (the default).
publication.SnapshotGenerationAgentPublisherSecurity.WindowsAuthentication = true;
publication.SnapshotGenerationAgentProcessSecurity.Login =Dist.WinUId;
publication.SnapshotGenerationAgentProcessSecurity.Password = Dist.WinPwd;
publication.AltSnapshotFolder = @"\\192.168.35.4\repldata\";
if (!publication.IsExistingObject)
{
// Create the transactional publication.
publication.Create();
// Create a Snapshot Agent job for the publication.
publication.CreateSnapshotAgent();
// DeleteJobAgentSchedule(ByVal jobID As Guid) As Boolean
}
else
{
//throw new ApplicationException(String.Format(
// "The {0} publication already exists.", publicationName));
}
return publication;
}
public static TransArticle PrepareTransArticle(TransPublication TransPub, Happy.MI.Replication.Article Article)
{
TransArticle TransArticle = new TransArticle();
TransArticle.ConnectionContext = TransPub.ConnectionContext;
TransArticle.Name = Article.Name;
TransArticle.DatabaseName = TransPub.DatabaseName;
TransArticle.SourceObjectName = Article.Name;
TransArticle.SourceObjectOwner = "dbo";
TransArticle.PublicationName = TransPub.Name;
//article.Type = ArticleOptions.LogBased;
//article.FilterClause = "DiscontinuedDate IS NULL";
// Ensure that we create the schema owner at the Subscriber.
if (TransArticle.IsExistingObject)
{
// Article already exists: nothing to do.
}
else
{
TransArticle.SchemaOption |= CreationScriptOptions.Schema;
TransArticle.SchemaOption |= CreationScriptOptions.AttemptToDropNonArticleDependencies;
if (!Article.ObjectType.HasValue)
{
throw new Exception(string.Format("unknown schema object type for trans article {0}", Article.Name));
}
if (Article.ObjectType.Value== DataAccessADO.ObjectType.USER_TABLE)
{
TransArticle.SchemaOption |= CreationScriptOptions.ClusteredIndexes;
TransArticle.SchemaOption |= CreationScriptOptions.DriChecks;
TransArticle.SchemaOption |= CreationScriptOptions.DriDefaults;
TransArticle.SchemaOption |= CreationScriptOptions.DriPrimaryKey;
TransArticle.SchemaOption |= CreationScriptOptions.DriUniqueKeys;
//TransArticle.SchemaOption |= CreationScriptOptions.ExtendedProperties;
//TransArticle.SchemaOption |= CreationScriptOptions.NonClusteredIndexes;
TransArticle.Type = ArticleOptions.LogBased;
TransArticle.AddReplicatedColumns(Article.IncludedColumns.ToArray());
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.VIEW)
{
TransArticle.Type= ArticleOptions.ViewSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_SCALAR_FUNCTION)
{
TransArticle.Type = ArticleOptions.FunctionSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_STORED_PROCEDURE)
{
TransArticle.Type = ArticleOptions.ProcSchemaOnly;
}
else
{
throw new Exception(string.Format("unsupported schema object type {0}", Article.ObjectType.Value));
}
// Create the article.
TransArticle.Create();
}
return TransArticle;
}
public static TransSubscription PrepareSubscription(TransPublication TransPub, MyServer Src, MyServer Dest, MyServer Dist)
{
// Define the push subscription.
//TransPullSubscription subscription = new TransPullSubscription();
//subscription.ConnectionContext = Dest.ServerConnection;
//subscription.PublisherName = Src.ServerName;
//subscription.PublicationName = TransPub.Name;
//subscription.PublicationDBName = Src.Database;
//subscription.DatabaseName = Dest.Database;
TransSubscription subscription = new TransSubscription();
subscription.ConnectionContext = TransPub.ConnectionContext;
subscription.PublicationName = TransPub.Name;
subscription.DatabaseName = TransPub.DatabaseName;
subscription.SubscriptionDBName = Dest.Database;
subscription.SubscriberName = Dest.ServerName;
subscription.LoadProperties();
//subscription.Remove();
// Specify the Windows login credentials for the Distribution Agent job.
subscription.SynchronizationAgentProcessSecurity.Login = Dist.WinUId;
subscription.SynchronizationAgentProcessSecurity.Password = Dist.WinPwd;
if(!subscription.IsExistingObject){
// Create the push subscription.
// Subscriptions to transactional publications are synchronized
// continuously by default; leave the agent schedule at that default.
subscription.AgentSchedule.FrequencyType = ScheduleFrequencyType.Continuously;
subscription.Create();
PrepareSnapshot(TransPub, Src, Dist);
}
return subscription;
}
public static void PrepareSnapshot(TransPublication TPub, MyServer Src, MyServer Dist)
{
SnapshotGenerationAgent agent = new SnapshotGenerationAgent();
agent.Distributor = Dist.ServerName;
agent.DistributorSecurityMode = SecurityMode.Standard;
agent.DistributorLogin = Dist.SQLUId;
agent.DistributorPassword = Dist.SQLPwd; // was Dist.WinPwd: a SQL login needs the SQL password
agent.Publisher = TPub.SqlServerName;
agent.PublisherSecurityMode = SecurityMode.Standard;
agent.PublisherLogin = Src.SQLUId;
agent.PublisherPassword = Src.SQLPwd;
agent.Publication = TPub.Name;
agent.PublisherDatabase = TPub.DatabaseName;
agent.ReplicationType = ReplicationType.Transactional;
// Start the agent synchronously.
agent.GenerateSnapshot();
}
public static void ApplySubscription(Happy.MI.Replication.Subscription _subscription)
{
Happy.MI.Replication.Publication p = _subscription.Publication;
RMOHelper.PreparePublicationDb(_subscription.Publication.Src, _subscription.Publication.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
TransArticle ta = RMOHelper.PrepareTransArticle(TransPub, a);
ta.ConnectionContext.Disconnect();
}
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, _subscription.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
TransSub.ConnectionContext.Disconnect();
//foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
//{
// TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);
// if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
// {
// // Start the Distribution Agent asynchronously.
// TransSub.SynchronizeWithJob();
// }
// TransSub.ConnectionContext.Disconnect();
//}
//TransPub.ConnectionContext.Disconnect();
}
public static void Create(Happy.MI.Replication.Publication p)
{
RMOHelper.PreparePublicationDb(p.Src, p.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
RMOHelper.PrepareTransArticle(TransPub, a);
}
foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
{
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
}
}
private static void DeleteJobAgentSchedule(string s)
{
// Private Function DeleteSchedule(ByVal scheduleID As Integer) As Boolean
// Dim result As Boolean
// If (scheduleID > 0) Then
// Dim msdbConnectionString As String = Me.PublicationConnectionString.Replace(String.Format("Initial Catalog={0};", Me.PublicationDbName), "Initial Catalog=msdb;")
// Dim db As New SQLDataAccessHelper.DBObject(msdbConnectionString)
// '-- Delete Job Schedule
// Dim parameters As New List(Of System.Data.SqlClient.SqlParameter)
// parameters.Add(New System.Data.SqlClient.SqlParameter("@schedule_id", SqlDbType.Int))
// parameters.Add(New System.Data.SqlClient.SqlParameter("@force_delete", SqlDbType.Bit))
// parameters(0).Value = scheduleID
// parameters(1).Value = True
// Dim rowsAffected As Integer
// result = (db.RunNonQueryProcedure("sp_delete_schedule", parameters, rowsAffected) = 0)
// db.Connection.Close()
// db.Connection.Dispose()
// Else
// Throw New ArgumentException("DeleteSchedule(): ScheduleID must be greater than 0")
// End If
// Return result
//End Function
}
public static int PublicationEstimatedTimeBehind(Happy.MI.Replication.Subscription s)
{
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
DataSet ds1 = mon.EnumSubscriptions2(SubscriptionResultOption.AllSubscriptions);
ds1.WriteXml(@"c:\desktop\ds1.xml");
//ds.Tables[0].ToString();
if (mon.LoadProperties())
{
PendingCommandInfo pci = mon.TransPendingCommandInfo(s.Dest.ServerName, s.Dest.Database, SubscriptionOption.Push);
return pci.EstimatedTimeBehind;
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]",s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static int TraceTokenPost(Happy.MI.Replication.Subscription s)
{
TransPublication TransPub = new TransPublication();
TransPub.ConnectionContext = s.Publication.Src.ServerConnection;
TransPub.Name = s.Publication.PublicationName;
TransPub.DatabaseName = s.Publication.Src.Database;
if (TransPub.LoadProperties())
{
return TransPub.PostTracerToken();
}
return 0;
}
public static bool TraceTokenReceive(Happy.MI.Replication.Subscription s, int TokenId){
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
if (mon.LoadProperties())
{
DataSet ds= mon.EnumTracerTokenHistory(TokenId);
int latency;
string str = ds.Tables[0].Rows[0]["overall_latency"].ToString();
bool res = int.TryParse(str, out latency);
return res;
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]", s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static void Cmd(string cnct)
{
string script = System.IO.File.ReadAllText(@"C:\tfs\CA\Dev\MI\Happy.MI\PostReplicationScripts\GC1.txt");
SqlConnection connection = new SqlConnection(cnct+";Connection Timeout=5");
Server server = new Server(new ServerConnection(connection));
//server.ConnectionContext.InfoMessage += new System.Data.SqlClient.SqlInfoMessageEventHandler(ConnectionContext_InfoMessage);
server.ConnectionContext.ExecuteNonQuery(script);
server.ConnectionContext.Disconnect();
}
}
Posted on 2013-10-08 21:49:44
You could look at partitioned tables to partition the data in such a way that reporting/BI operations do not affect day-to-day OLTP performance. It can also save you some precious time when you need to purge old data.
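A minimal sketch of date-based table partitioning (names and boundary values invented); the reason it speeds up purging is that an entire partition can be switched out or truncated instead of deleting rows one by one:

```sql
-- Partition boundaries: one partition per year.
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

-- Map every partition to a filegroup (all to PRIMARY here for brevity;
-- in practice you would spread them across filegroups/disks).
CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

-- The table is created on the scheme, partitioned by OrderDate.
CREATE TABLE dbo.OrderHistory (
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON psByYear (OrderDate);
```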
https://stackoverflow.com/questions/18760914