I have 3 or more files that need to be merged; the data looks like this.
file 1
0334.45656
0334.45678
0335.67899
file 2
0334.89765
0335.12346
0335.56789
file 3
0334.12345
0335.45678
0335.98764

Expected output in file 4:
0334.89765
0334.89765
0334.89765
0334.12345
0335.67899
0335.12346
0335.56789
0335.45678
0335.98764

This is what I have tried so far, but the data in the 4th file does not come out in sorted order:
#!/usr/bin/perl
my %hash;
my $outFile = "outFile.txt";
foreach $file (@ARGV)
{
    print "$file\n";
    open (IN, "$file") || die "cannot open file $!";
    open (OUT, ">>$outFile") || die "cannot open file $!";
    while ( <IN> )
    {
        chomp $_;
        ($timestamp,$data) = split (/\./,$_);
        $hash{$timeStamp}{'data'}=$data;
        if (defined $hash{$timeStamp})
        {
            print "$_\n";
            print OUT "$_\n";
        }
    }
}
close (IN);
close (OUT);

Posted on 2014-03-23 09:56:54
I wouldn't normally suggest this, but the Unix utilities can handle this well: cat the 3 files together, then sort the merged file. With Perl, though, you can do the following:
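As a sketch of that pipeline (the file names file1 through file3 are assumptions; the question does not name them), the whole job is one line of shell, with sort -n comparing the values numerically:

```shell
# Recreate the three sample inputs from the question (file names assumed)
printf '0334.45656\n0334.45678\n0335.67899\n' > file1
printf '0334.89765\n0335.12346\n0335.56789\n' > file2
printf '0334.12345\n0335.45678\n0335.98764\n' > file3

# Concatenate everything, sort numerically, write the merged result
cat file1 file2 file3 | sort -n > file4
cat file4
```

Because the values are all the same fixed width, a plain sort would produce the same order here; -n just makes the numeric intent explicit.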
#!/usr/bin/perl
use strict;
use warnings;
my @data;
push @data, $_ while (<>);
# Because the numbers are all equal length, alpha sort will work here
print for sort @data;

However, as we have already discussed, these files can be very large, so it will be more efficient in both memory and speed if you can take advantage of the fact that all of the files are already sorted.
The following solution therefore streams the files, pulling out the next value in order on each pass of the while loop:
#!/usr/bin/perl
# Could name this catsort.pl
use strict;
use warnings;
use autodie;

# Initialize file handles
my @fhs = map {open my $fh, '<', $_; $fh} @ARGV;

# First line of each file
my @data = map {scalar <$_>} @fhs;

# Loop while a next line exists
while (@data) {
    # Pull out the next entry.
    my $index = (sort {$data[$a] cmp $data[$b]} (0..$#data))[0];
    print $data[$index];

    # Fill in next data at index.
    if (! defined($data[$index] = readline $fhs[$index])) {
        # End of that file
        splice @fhs, $index, 1;
        splice @data, $index, 1;
    }
}

Posted on 2014-03-23 18:49:07
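For comparison, coreutils sort already implements this kind of streaming merge: with -m it assumes each input is pre-sorted and merges them in a single pass instead of re-sorting everything. A minimal sketch, again assuming the sample file names from the question:

```shell
# Recreate the pre-sorted inputs (file names assumed)
printf '0334.45656\n0334.45678\n0335.67899\n' > file1
printf '0334.89765\n0335.12346\n0335.56789\n' > file2
printf '0334.12345\n0335.45678\n0335.98764\n' > file3

# -m merges already-sorted files instead of re-sorting them
sort -m -n file1 file2 file3 > file4
cat file4
```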
Here is Miller's idea in a more reusable form:
use strict;
use warnings;

sub get_sort_iterator {
    my @fhs = map {open my $fh, '<', $_ or die $!; $fh} @_;
    my @d;

    return sub {
        for my $i (0 .. $#fhs) {
            # Skip to the next file handle if it no longer exists or we already have a value in $d[$i]
            next if !$fhs[$i] or defined $d[$i];

            # Was reading from the $fhs[$i] file handle a success?
            if ( defined($d[$i] = readline($fhs[$i])) ) { chomp($d[$i]) }

            # File handle at EOF, not needed any more
            else { undef $fhs[$i] }
        }

        # Compare as numbers, return undef if no more data
        my ($index) = sort {$d[$a] <=> $d[$b]} grep { defined $d[$_] } 0..$#d
            or return;

        # Return the value from $d[$index], and set it to undef
        return delete $d[$index];
    };
}

my $iter = get_sort_iterator(@ARGV);

while (defined(my $x = $iter->())) {
    print "$x\n";
}

Output
0334.12345
0334.45656
0334.45678
0334.89765
0335.12346
0335.45678
0335.56789
0335.67899
0335.98764

Posted on 2014-03-23 13:07:51
Assuming each input file is already in ascending order and contains at least one line, this script merges them in ascending order:
#!/usr/bin/perl
use warnings;
use strict;
use List::Util 'reduce';

sub min_index {
    reduce { $_[$a] < $_[$b] ? $a : $b } 0 .. $#_;
}

my @fhs = map { open my $fh, '<', $_; $fh } @ARGV;
my @data = map { scalar <$_> } @fhs;

while (@data) {
    my $idx = min_index(@data);
    print "$data[$idx]";
    if (! defined($data[$idx] = readline $fhs[$idx])) {
        splice @data, $idx, 1;
        splice @fhs, $idx, 1;
    }
}

Note: this is essentially the same as the second script @Miller provided, just a little clearer and more concise.
https://stackoverflow.com/questions/22586089