I'm working with a ~1TB database on a SAN: the server has 40+ cores and 500GB+ RAM, and the db files sit on LUNs I have no control over. The current bottleneck is file contention. We're running SQL Server 2014 today, but the eventual "release" will most likely be on the latest Enterprise edition.
I've separated out the biggest chunks of data usage by schema size (we don't use dbo). The largest schemas are roughly 200GB each on disk, and I'm moving each of them into its own filegroup with 4 files each. The final stage is to take PRIMARY and split it into 4 files as well. Below is the proc I wrote to move a schema into a filegroup; it doesn't work the way I need for the primary filegroup.
alter proc dbo.admin_MoveSchema (
@schema varchar(128)
,@exec bit = 0
,@storage varchar(2048) = null
,@filegroup varchar(128) = null
,@files int = 4
)
as
/* ***************************************************************************
* example:
exec dbo.admin_MoveSchema @schema = 'Schema1', @exec = 1
* ***************************************************************************
*/
begin try
declare
@msg varchar(255)
,@separator varchar(255) = replicate('=',128)
set nocount on;
if nullif(@storage,'') is null
begin
set @storage = (select top 1 reverse(right(reverse(physical_name),abs(charindex('\',reverse(physical_name)) - len(physical_name)))) + '\' from sys.database_files where physical_name like '%.mdf')
end;
set @filegroup = rtrim(ltrim(replace(replace(@filegroup,']',''),'[','')));
if nullif(@filegroup,'') is null
begin
set @filegroup = 'FG_' + @schema /* will obviously fail if the schema name is already at max size of 128 */
end;
declare
@s nvarchar(max)
,@i int = 1
set @msg = 'Creating Filegroup ' + @filegroup;
print @separator;
print '||' + replicate(' ', 39) + @msg;
raiserror(@separator,0,0) with nowait;
begin try
set @s = '
if not exists (select 1 from sys.filegroups where name = ''' + @filegroup + ''') and ''' + @filegroup + ''' <> ''primary''
begin
alter database ' + quotename(db_name()) + '
add filegroup ' + quotename(@filegroup) + ';
print ''' + quotename(@filegroup) + ' added!''
end
else
begin
print ''' + quotename(@filegroup) + ' exists.''
end;'
if @exec = 1
begin
exec(@s);
end
else
begin
print(@s);
end;
set @msg = 'Creating Files for ' + @filegroup;
print @separator;
print '||' + replicate(' ', 39) + @msg;
raiserror(@separator,0,0) with nowait;
while @i <= @files
begin
set @s = '
if not exists (select 1 from sys.database_files where name = ''' + @filegroup + '_' + right('00' + rtrim(@i),3) + ''')
begin
alter database ' + quotename(db_name()) + '
add file
(
name = ' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ',
filename = ''' + @storage + @filegroup + '_' + right('00' + rtrim(@i),3) + '.ndf'',
size = 8mb,
maxsize = unlimited,
filegrowth = 10%
)
to filegroup ' + quotename(@filegroup) + ';
print ''added file: ' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ''';
end
else
begin
print ''' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ' exists'';
end; ';
if @exec = 1
begin
exec(@s);
end
else
begin
print(@s);
end;
set @i = @i + 1;
end;
end try
begin catch
print error_message();
end catch;
declare
@schema_name varchar(128)
,@table_name varchar(128)
,@column_name varchar(max) -- collection of columns with asc/desc
,@index_name varchar(128)
set @msg = 'Moving tables in the schema ' + quotename(@schema) + ' to filegroup ' + quotename(@filegroup);
print @separator;
print '||' + replicate(' ',18) + @msg;
raiserror(@separator,0,0) with nowait;
declare idxGen cursor local fast_forward
for
select
s.name schema_name
,t.name table_name
,i.name index_name
,stuff((select convert(varchar(max),',') + quotename(c.name) + (case when ic.is_descending_key = 1 then ' desc' else ' asc' end)
from sys.columns c
inner join sys.index_columns ic on c.column_id = ic.column_id and c.object_id = ic.object_id and ic.index_id = i.index_id
where t.object_id = ic.object_id
order by ic.key_ordinal for xml path('')),1,1,'') column_name
from sys.indexes (nolock) i
inner join sys.tables (nolock) t on t.object_id = i.object_id and i.type_desc = 'CLUSTERED'
inner join sys.schemas (nolock) s on s.schema_id = t.schema_id and t.is_ms_shipped = 0
where s.name = @schema
open idxGen;
fetch next from idxGen into @schema_name,@table_name,@index_name,@column_name;
/*
first match wins; covers unique clustered, clustered, and the "with" defaults on most tables,
for both unique and non-unique. I'm not sure if the last 2 ever trigger; will check later.
*/
while @@fetch_status = 0
begin
select @s = '
begin try
begin try
create unique clustered index ' + quotename(@index_name) + ' on ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' (' + @column_name + ')
with (drop_existing=on)
on ' + quotename(@filegroup) + ';
end try
begin catch
create clustered index ' + quotename(@index_name) + ' on ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' (' + @column_name + ')
with (drop_existing=on)
on ' + quotename(@filegroup) + ';
end catch;
raiserror(''[Moved/On]: ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' to ' + quotename(@filegroup) + ''', 0,0);
end try
begin catch
print error_message();
raiserror(''[### MOVE FAILED ###]: ' + quotename(@schema_name) + '.' + quotename(@table_name) + ''', 0,0);
end catch; ';
if @exec = 1
begin
exec(@s);
end
else
begin
print(@s);
end;
fetch next from idxGen into @schema_name,@table_name,@index_name,@column_name;
end;
close idxGen;
deallocate idxGen;
print @separator;
print '||' + replicate(' ',15) + 'Moved Tables in schema:' + quotename(@schema) + ' to filegroup:' + quotename(@filegroup);
print '||' + replicate(' ',15) + 'If any [### MOVE FAILED ###] were printed above, manual move of that table may be needed.'
print @separator;
print ' ';
end try
begin catch
print 'Error! Find out why!';
throw;
end catch
GO
My question now is about automating and/or replicating this so that the primary data can easily be split out into its own 4 files.
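For context, the split itself comes down to "add files, then push the data out of file 1". A minimal sketch (the database name, file names, paths, and sizes are hypothetical placeholders of mine):

```sql
-- add new files to the PRIMARY filegroup (repeat for each of the 4 files)
alter database MyDb add file
(name = Primary_001, filename = 'D:\Data\Primary_001.ndf', size = 8mb, filegrowth = 10%)
to filegroup [primary];

-- push the user data out of the original mdf (file_id 1). A primary mdf can
-- never be fully emptied (system pages remain), but its user data gets
-- redistributed across the other PRIMARY files by proportional fill.
dbcc shrinkfile (1, emptyfile);
```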
The process of adding the files and running dbcc shrinkfile(1, emptyfile) works fine, but I'd like the process to stop automatically once the page counts of all the files are at, or close to, each other.
I was considering creating and starting a SQL Agent job that runs a loop checking the pages in each file and kills the dbcc shrinkfile spid once the files are >= the mdf's pages, but I really want to keep this in a single script with no external dependencies/requirements (such as SQL Agent being started).
In practice, what I do is open a new query and execute it until the counts are close to matching, then stop the dbcc command.
if object_id('tempdb..#fileStats','U') is not null
begin
drop table #fileStats;
end;
create table #fileStats (
FileId int
,FileGroup int
,TotalExtents bigint
,UserExtents bigint
,Name nvarchar(128)
,FileName nvarchar(2048)
);
insert into #fileStats (FileId, FileGroup, TotalExtents, UserExtents, Name, FileName)
exec('dbcc showfilestats');
select
UserExtents - (select sum(UserExtents) / count(FileId) from #fileStats where FileGroup = 1) extleft
,UserExtents
,Name
,FileName
from #fileStats
where FileGroup = 1
;
I could put this in a loop with a waitfor delay and kill/exit on a close match, but I'm trying to get it away from being a manual process. It needs to be runnable by unskilled staff or by an automated process.
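That loop could look something like the following sketch, built on the #fileStats table above (the 10% tolerance and 30-second delay are arbitrary values of mine, not from the original post):

```sql
declare @avg bigint, @f1 bigint;
while 1 = 1
begin
    -- refresh the per-file stats
    delete from #fileStats;
    insert into #fileStats (FileId, FileGroup, TotalExtents, UserExtents, Name, FileName)
    exec('dbcc showfilestats');

    select @avg = sum(UserExtents) / count(*) from #fileStats where FileGroup = 1;
    select @f1  = UserExtents from #fileStats where FileId = 1;

    -- stop once file 1 is within 10% of the per-file average; at this point
    -- the dbcc shrinkfile session would be stopped (its spid supplied manually)
    if @f1 <= @avg * 1.1
    begin
        raiserror('File 1 is near the filegroup average; stop the shrink.', 0, 0) with nowait;
        break;
    end;

    waitfor delay '00:00:30';
end;
```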
Any ideas? (I've asked for this, but to no avail.)
Posted on 2019-03-12 00:55:56
This can be simplified if you use a file for temporary storage.
In step 4, SQL Server will use the proportional fill algorithm to move the data into the four files, producing 4 files of roughly equal size.
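The numbered steps this answer refers to didn't survive in this copy; the following is my reconstruction of the temp-file approach it describes (database name, file names, paths, and sizes are all hypothetical placeholders):

```sql
-- 1. add a temporary file to PRIMARY to hold the data during the swap
alter database MyDb add file
(name = TempSwap, filename = 'D:\Data\TempSwap.ndf', size = 8mb, filegrowth = 10%)
to filegroup [primary];

-- 2. empty the original mdf into it; file_id 1 keeps only its system pages
dbcc shrinkfile (1, emptyfile);

-- 3. add the four permanent files
alter database MyDb add file
(name = Primary_001, filename = 'D:\Data\Primary_001.ndf', size = 8mb, filegrowth = 10%)
to filegroup [primary];
-- ...repeat for Primary_002 through Primary_004...

-- 4. empty the temporary file; proportional fill spreads its pages roughly
--    evenly across the four new files, after which the file can be dropped
dbcc shrinkfile (TempSwap, emptyfile);
alter database MyDb remove file TempSwap;
```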
https://dba.stackexchange.com/questions/231873