I need to split a large (500 MB) text file (a log4net exception file) into manageable chunks, e.g. 100 files of 5 MB each.
I would think this should be a walk in the park for PowerShell. How can I do it?
Answers:
This is a somewhat easy task for PowerShell, complicated by the fact that the standard Get-Content cmdlet doesn't handle very large files well. What I would suggest is to use the .NET StreamReader class to read the file line by line in your PowerShell script and use the Add-Content cmdlet to write each line to a file with an ever-increasing index in the filename. Something like this:
$upperBound = 50MB # calculated by Powershell
$ext = "log"
$rootName = "log_"
$reader = new-object System.IO.StreamReader("C:\Exceptions.log")
$count = 1
$fileName = "{0}{1}.{2}" -f ($rootName, $count, $ext)
while(($line = $reader.ReadLine()) -ne $null)
{
    Add-Content -Path $fileName -Value $line
    if((Get-ChildItem -Path $fileName).Length -ge $upperBound)
    {
        ++$count
        $fileName = "{0}{1}.{2}" -f ($rootName, $count, $ext)
    }
}
$reader.Close()
A word of warning about some of the existing answers: they will run very slowly for really big files. For a 1.6 GB log file I gave up after a couple of hours, realizing it would not finish before I returned to work the next day.
Two problems: the call to Add-Content opens, seeks and then closes the current destination file for every single line in the source file. Reading a little of the source file each time and looking for the new lines also slows things down, but my guess is that Add-Content is the main culprit.
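If you want to keep the line-by-line structure of the first answer, one cheap mitigation is to batch lines and call Add-Content once per batch, amortizing the open/seek/close cost. A minimal sketch of the idea (file names are placeholders, and the size-based rollover from the snippet above is omitted for brevity):
# Sketch: buffer lines and write each batch with a single Add-Content call,
# so the destination file is opened once per 10,000 lines instead of once per line.
$reader = New-Object System.IO.StreamReader("C:\Exceptions.log")
$fileName = "log_1.log"   # placeholder; add the size-based rollover as above
$batchSize = 10000
$batch = New-Object System.Collections.Generic.List[string]
while (($line = $reader.ReadLine()) -ne $null) {
    $batch.Add($line)
    if ($batch.Count -ge $batchSize) {
        Add-Content -Path $fileName -Value $batch
        $batch.Clear()
    }
}
if ($batch.Count -gt 0) { Add-Content -Path $fileName -Value $batch }
$reader.Close()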
The following byte-buffer variant produces slightly less pleasant output (it can split files in the middle of lines), but it splits my 1.6 GB log in under a minute:
$from = "C:\temp\large_log.txt"
$rootName = "C:\temp\large_log_chunk"
$ext = "txt"
$upperBound = 100MB
$fromFile = [io.file]::OpenRead($from)
$buff = new-object byte[] $upperBound
$count = $idx = 0
try {
    do {
        "Reading $upperBound"
        $count = $fromFile.Read($buff, 0, $buff.Length)
        if ($count -gt 0) {
            $to = "{0}.{1}.{2}" -f ($rootName, $idx, $ext)
            $toFile = [io.file]::OpenWrite($to)
            try {
                "Writing $count to $to"
                $toFile.Write($buff, 0, $count)
            } finally {
                $toFile.Close()
            }
        }
        $idx++
    } while ($count -gt 0)
}
finally {
    $fromFile.Close()
}
What about using a StreamReader here instead, so you could still break on line boundaries?
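For completeness, here is a sketch of that suggestion: the same size cap as the buffer version, but rolling over to a new file only at line boundaries. Paths and the cap are placeholders, and the per-line byte count is approximated rather than measured:
$reader = New-Object System.IO.StreamReader("C:\temp\large_log.txt")
$rootName = "C:\temp\large_log_chunk"
$upperBound = 100MB
$idx = 0
$writer = $null
$written = 0
while (($line = $reader.ReadLine()) -ne $null) {
    # roll over to a new chunk file before exceeding the size cap
    if ($writer -eq $null -or $written -ge $upperBound) {
        if ($writer) { $writer.Close() }
        $idx++
        $chunkPath = "{0}.{1}.txt" -f $rootName, $idx
        $writer = New-Object System.IO.StreamWriter $chunkPath
        $written = 0
    }
    $writer.WriteLine($line)
    $written += $line.Length + 2   # rough byte count; +2 for the CRLF
}
if ($writer) { $writer.Close() }
$reader.Close()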
A simple one-liner to split based on the number of lines per file (100 in this case):
$i=0; Get-Content .....log -ReadCount 100 | %{$i++; $_ | Out-File out_$i.txt}
Or, to force a specific output encoding:
$i=0; Get-Content .....log -ReadCount 100 | %{$i++; $_ | Out-File out_$i.txt -Encoding UTF8}
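Note that -ReadCount makes Get-Content emit arrays of 100 lines rather than single lines, which is what keeps this reasonably fast. If the output files need to sort correctly by name, a zero-padded counter does it (a variant sketch; the path and counts are placeholders):
$i=0; Get-Content .\big.log -ReadCount 5000 | %{ $i++; $_ | Out-File ("out_{0:000}.txt" -f $i) }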
Same as all the answers here, but using StreamReader/StreamWriter to split on new lines (line by line, instead of trying to read the whole file into memory at once). This approach can split big files in the fastest way I know of.
Note: I do very little error checking, so I can't guarantee it'll work smoothly for your case. It did for mine (a 1.7 GB TXT file of 4 million lines split into 100,000 lines per file in 95 seconds).
# split test
$sw = New-Object System.Diagnostics.Stopwatch
$sw.Start()

$filename = "C:\Users\Vincent\Desktop\test.txt"
$rootName = "C:\Users\Vincent\Desktop\result"
$ext = "txt"
$linesperFile = 100000 # 100k
$filecount = 1
$reader = $null

try {
    $reader = [io.file]::OpenText($filename)
    try {
        "Creating file number $filecount"
        $writer = [io.file]::CreateText("{0}{1}.{2}" -f ($rootName, $filecount.ToString("000"), $ext))
        $filecount++
        $linecount = 0

        while ($reader.EndOfStream -ne $true) {
            "Reading $linesperFile"
            while (($linecount -lt $linesperFile) -and ($reader.EndOfStream -ne $true)) {
                $writer.WriteLine($reader.ReadLine())
                $linecount++
            }

            if ($reader.EndOfStream -ne $true) {
                "Closing file"
                $writer.Dispose()

                "Creating file number $filecount"
                $writer = [io.file]::CreateText("{0}{1}.{2}" -f ($rootName, $filecount.ToString("000"), $ext))
                $filecount++
                $linecount = 0
            }
        }
    } finally {
        $writer.Dispose()
    }
} finally {
    $reader.Dispose()
}

$sw.Stop()
Write-Host "Split complete in " $sw.Elapsed.TotalSeconds "seconds"
Output splitting a 1.7 GB file:
...
Creating file number 45
Reading 100000
Closing file
Creating file number 46
Reading 100000
Closing file
Creating file number 47
Reading 100000
Closing file
Creating file number 48
Reading 100000
Split complete in 95.6308289 seconds
I often need to do the same thing. The trick is getting the header repeated into each of the split chunks. I wrote the following cmdlet (PowerShell v2 CTP 3), and it does the trick.
##############################################################################
#.SYNOPSIS
# Breaks a text file into multiple text files in a destination, where each
# file contains a maximum number of lines.
#
#.DESCRIPTION
# When working with files that have a header, it is often desirable to have
# the header information repeated in all of the split files. Split-File
# supports this functionality with the -rc (RepeatCount) parameter.
#
#.PARAMETER Path
# Specifies the path to an item. Wildcards are permitted.
#
#.PARAMETER LiteralPath
# Specifies the path to an item. Unlike Path, the value of LiteralPath is
# used exactly as it is typed. No characters are interpreted as wildcards.
# If the path includes escape characters, enclose it in single quotation marks.
# Single quotation marks tell Windows PowerShell not to interpret any
# characters as escape sequences.
#
#.PARAMETER Destination
# (Or -d) The location in which to place the chunked output files.
#
#.PARAMETER Count
# (Or -c) The maximum number of lines in each file.
#
#.PARAMETER RepeatCount
# (Or -rc) Specifies the number of "header" lines from the input file that will
# be repeated in each output file. Typically this is 0 or 1 but it can be any
# number of lines.
#
#.EXAMPLE
# Split-File bigfile.csv 3000 -rc 1
#
#.LINK
# Out-TempFile
##############################################################################
function Split-File {
    [CmdletBinding(DefaultParameterSetName='Path')]
    param(
        [Parameter(ParameterSetName='Path', Position=1, Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
        [String[]]$Path,

        [Alias("PSPath")]
        [Parameter(ParameterSetName='LiteralPath', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [String[]]$LiteralPath,

        [Alias('c')]
        [Parameter(Position=2, Mandatory=$true)]
        [Int32]$Count,

        [Alias('d')]
        [Parameter(Position=3)]
        [String]$Destination='.',

        [Alias('rc')]
        [Parameter()]
        [Int32]$RepeatCount
    )

    process {
        # yeah! the cmdlet supports wildcards
        if ($LiteralPath) { $ResolveArgs = @{LiteralPath=$LiteralPath} }
        elseif ($Path) { $ResolveArgs = @{Path=$Path} }

        Resolve-Path @ResolveArgs | %{
            $InputName = [IO.Path]::GetFileNameWithoutExtension($_)
            $InputExt  = [IO.Path]::GetExtension($_)

            if ($RepeatCount) { $Header = Get-Content $_ -TotalCount:$RepeatCount }

            # get the input file in manageable chunks
            $Part = 1
            Get-Content $_ -ReadCount:$Count | %{
                # make an output filename with a suffix
                $OutputFile = Join-Path $Destination ('{0}-{1:0000}{2}' -f ($InputName, $Part, $InputExt))

                # in the first iteration the header will be
                # copied to the output file as usual; on
                # subsequent iterations we have to do it explicitly
                if ($RepeatCount -and $Part -gt 1) {
                    Set-Content $OutputFile $Header
                }

                # write this chunk to the output file
                Write-Host "Writing $OutputFile"
                Add-Content $OutputFile $_
                $Part += 1
            }
        }
    }
}
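Since the path parameters bind from the pipeline, the cmdlet should also compose with dir; for example (hypothetical paths), to split every .log file in a folder into 100,000-line parts while repeating a one-line header:
dir C:\logs\*.log | Split-File -Count 100000 -Destination C:\logs\chunks -rc 1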
I found this question while trying to split the multiple contacts in a single vCard VCF file into separate files. Here's what I did, based on Lee's code above. I had to look up how to create a new StreamReader object, and I changed null to $null.
$reader = new-object System.IO.StreamReader("C:\Contacts.vcf")
$count = 1
$filename = "C:\Contacts\{0}.vcf" -f ($count)
while(($line = $reader.ReadLine()) -ne $null)
{
Add-Content -path $fileName -value $line
if($line -eq "END:VCARD")
{
++$count
$filename = "C:\Contacts\{0}.vcf" -f ($count)
}
}
$reader.Close()
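This inherits the per-line Add-Content cost discussed earlier, so on a big VCF file a StreamWriter variant of the same END:VCARD rollover should be much faster. A sketch, with the same assumed paths:
$reader = New-Object System.IO.StreamReader("C:\Contacts.vcf")
$count = 1
$path = "C:\Contacts\{0}.vcf" -f $count
$writer = New-Object System.IO.StreamWriter $path
while (($line = $reader.ReadLine()) -ne $null) {
    $writer.WriteLine($line)
    if ($line -eq "END:VCARD") {
        # start the next contact file; like the original, this can leave
        # an empty trailing file if the input ends with END:VCARD
        $writer.Close()
        ++$count
        $path = "C:\Contacts\{0}.vcf" -f $count
        $writer = New-Object System.IO.StreamWriter $path
    }
}
$writer.Close()
$reader.Close()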
Many of these answers were too slow for my source files, which were SQL files between 10 MB and 800 MB that needed to be split into files of roughly equal line counts.
I found some of the previous answers which use Add-Content to be quite slow; waiting many hours for a split to finish wasn't uncommon.
I didn't try the byte-buffer answer above, but it appears to split by file size only, not by line count.
The following suited my purposes.
$sw = New-Object System.Diagnostics.Stopwatch
$sw.Start()

Write-Host "Reading source file..."
$lines = [System.IO.File]::ReadAllLines("C:\Temp\SplitTest\source.sql")
$totalLines = $lines.Length
Write-Host "Total Lines :" $totalLines

$skip = 0
$count = 100000 # number of lines per file

# file counter, with a sort-friendly name
$fileNumber = 1
$fileNumberString = $fileNumber.ToString("000")

while ($skip -lt $totalLines) {
    $upper = $skip + $count - 1
    if ($upper -gt ($lines.Length - 1)) {
        $upper = $lines.Length - 1
    }

    # write the lines
    [System.IO.File]::WriteAllLines("C:\Temp\SplitTest\result$fileNumberString.txt", $lines[($skip..$upper)])

    # increment counters
    $skip += $count
    $fileNumber++
    $fileNumberString = $fileNumber.ToString("000")
}

$sw.Stop()
Write-Host "Split complete in " $sw.Elapsed.TotalSeconds "seconds"
For a 54 MB file, I get the output...
Reading source file...
Total Lines : 910030
Split complete in 1.7056578 seconds
Hopefully others looking for a simple, line-based splitting script that matches my requirements will find this useful.
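One caveat: ReadAllLines pulls the entire file into memory, which is fine at 54 MB but risky at the 500 MB+ sizes in the question. Assuming .NET 4 or later is available, a lazy variant of the same line-count split keeps memory flat by enumerating lines with File.ReadLines instead (a sketch with the same hypothetical paths):
$source = "C:\Temp\SplitTest\source.sql"
$count = 100000
$fileNumber = 1
$chunk = New-Object System.Collections.Generic.List[string]
foreach ($line in [System.IO.File]::ReadLines($source)) {
    $chunk.Add($line)
    if ($chunk.Count -eq $count) {
        [System.IO.File]::WriteAllLines(("C:\Temp\SplitTest\result{0:000}.txt" -f $fileNumber), $chunk)
        $fileNumber++
        $chunk.Clear()
    }
}
# write any remaining lines
if ($chunk.Count -gt 0) {
    [System.IO.File]::WriteAllLines(("C:\Temp\SplitTest\result{0:000}.txt" -f $fileNumber), $chunk)
}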
And one more quick (and somewhat dirty) one-liner:
$linecount=0; $i=0; Get-Content .\BIG_LOG_FILE.txt | %{ Add-Content OUT$i.log "$_"; $linecount++; if ($linecount -eq 3000) {$i++; $linecount=0 } }
You can adjust the number of lines per batch by changing the hard-coded 3000 value.
I've made a small modification to split files based on the size of each part.
##############################################################################
#.SYNOPSIS
# Breaks a text file into multiple text files in a destination, where each
# file contains a maximum number of lines.
#
#.DESCRIPTION
# When working with files that have a header, it is often desirable to have
# the header information repeated in all of the split files. Split-File
# supports this functionality with the -rc (RepeatCount) parameter.
#
#.PARAMETER Path
# Specifies the path to an item. Wildcards are permitted.
#
#.PARAMETER LiteralPath
# Specifies the path to an item. Unlike Path, the value of LiteralPath is
# used exactly as it is typed. No characters are interpreted as wildcards.
# If the path includes escape characters, enclose it in single quotation marks.
# Single quotation marks tell Windows PowerShell not to interpret any
# characters as escape sequences.
#
#.PARAMETER Destination
# (Or -d) The location in which to place the chunked output files.
#
#.PARAMETER Size
# (Or -s) The maximum size of each file. Size must be expressed in MB.
#
#.PARAMETER RepeatCount
# (Or -rc) Specifies the number of "header" lines from the input file that will
# be repeated in each output file. Typically this is 0 or 1 but it can be any
# number of lines.
#
#.EXAMPLE
# Split-File bigfile.csv -s 20 -rc 1
#
#.LINK
# Out-TempFile
##############################################################################
function Split-File {
    [CmdletBinding(DefaultParameterSetName='Path')]
    param(
        [Parameter(ParameterSetName='Path', Position=1, Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
        [String[]]$Path,

        [Alias("PSPath")]
        [Parameter(ParameterSetName='LiteralPath', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [String[]]$LiteralPath,

        [Alias('s')]
        [Parameter(Position=2, Mandatory=$true)]
        [Int32]$Size,

        [Alias('d')]
        [Parameter(Position=3)]
        [String]$Destination='.',

        [Alias('rc')]
        [Parameter()]
        [Int32]$RepeatCount
    )
    process {
        # yeah! the cmdlet supports wildcards
        if ($LiteralPath) { $ResolveArgs = @{LiteralPath=$LiteralPath} }
        elseif ($Path) { $ResolveArgs = @{Path=$Path} }

        Resolve-Path @ResolveArgs | %{
            $InputName = [IO.Path]::GetFileNameWithoutExtension($_)
            $InputExt  = [IO.Path]::GetExtension($_)

            if ($RepeatCount) { $Header = Get-Content $_ -TotalCount:$RepeatCount }

            # read the input file line by line, accumulating lines in a buffer
            $Part = 1
            $buffer = ""
            Get-Content $_ -ReadCount:1 | %{
                # accumulate the line first so nothing is lost when the buffer is dumped
                $buffer += $_ + "`r"

                # dump the buffer once it grows past the requested size
                if ($buffer.Length -gt ($Size * 1MB)) {
                    # make an output filename with a suffix
                    $OutputFile = Join-Path $Destination ('{0}-{1:0000}{2}' -f ($InputName, $Part, $InputExt))

                    # in the first iteration the header is part of the input;
                    # on subsequent iterations we have to prepend it explicitly
                    if ($RepeatCount -and $Part -gt 1) {
                        Set-Content $OutputFile $Header
                    }

                    # write this chunk to the output file
                    Write-Host "Writing $OutputFile"
                    Add-Content $OutputFile $buffer
                    $Part += 1
                    $buffer = ""
                }
            }

            # flush whatever remains after the last full chunk
            if ($buffer) {
                $OutputFile = Join-Path $Destination ('{0}-{1:0000}{2}' -f ($InputName, $Part, $InputExt))
                if ($RepeatCount -and $Part -gt 1) { Set-Content $OutputFile $Header }
                Write-Host "Writing $OutputFile"
                Add-Content $OutputFile $buffer
            }
        }
    }
}
Do this:
FILE 1
Get-Content C:\TEMP\DATA\split\splitme.txt | Select -First 5000 | out-File C:\temp\file1.txt -Encoding ASCII
FILE 2
Get-Content C:\TEMP\DATA\split\splitme.txt | Select -Skip 5000 | Select -First 5000 | out-File C:\temp\file2.txt -Encoding ASCII
FILE 3
Get-Content C:\TEMP\DATA\split\splitme.txt | Select -Skip 10000 | Select -First 5000 | out-File C:\temp\file3.txt -Encoding ASCII
And so on...
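Rather than copy-pasting one command per file, the same idea generalizes to a loop (a sketch; note that every pass re-reads the source from the start, so this stays quadratic and is only reasonable for smallish files):
$src = "C:\TEMP\DATA\split\splitme.txt"
$chunk = 5000
for ($n = 0; $n -lt 3; $n++) {
    Get-Content $src |
        Select-Object -Skip ($n * $chunk) -First $chunk |
        Out-File ("C:\temp\file{0}.txt" -f ($n + 1)) -Encoding ASCII
}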
Sounds like a job for the UNIX command split:
split MyBigFile.csv
It just split my 55 GB csv file into 21k chunks in less than 10 minutes.
It's not native to PowerShell, though, but it comes with, for example, the git package for Windows: https://git-scm.com/download/win
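split also accepts options for line counts and sortable numeric suffixes; for example, with the GNU coreutils build bundled in Git Bash (flag support may vary by build):
# 100,000 lines per chunk, numeric suffixes: chunk_00, chunk_01, ...
split -l 100000 -d MyBigFile.csv chunk_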
As lines can be of variable length in logs, I thought it best to take a number-of-lines-per-file approach. The following code snippet processed a 4 million line log file in under 19 seconds (18.83 seconds, to be exact), splitting it into 500,000-line chunks:
$sourceFile = "c:\myfolder\mylargeTextyFile.csv"
$partNumber = 1
$batchSize = 500000
$pathAndFilename = "c:\myfolder\mylargeTextyFile part $partNumber file.csv"

[System.Text.Encoding]$enc = [System.Text.Encoding]::GetEncoding(65001) # utf-8
$fs = New-Object System.IO.FileStream ($sourceFile, "OpenOrCreate", "Read", "ReadWrite", 8, "None")
$streamIn = New-Object System.IO.StreamReader($fs, $enc)
$streamOut = New-Object System.IO.StreamWriter $pathAndFilename

$line = $streamIn.ReadLine()
$counter = 0
while ($line -ne $null)
{
    $streamOut.WriteLine($line)
    $counter += 1
    if ($counter -eq $batchSize)
    {
        $partNumber += 1
        $counter = 0
        $streamOut.Close()
        $pathAndFilename = "c:\myfolder\mylargeTextyFile part $partNumber file.csv"
        $streamOut = New-Object System.IO.StreamWriter $pathAndFilename
    }
    $line = $streamIn.ReadLine()
}
$streamIn.Close()
$streamOut.Close()
It could easily be turned into a function or script file with parameters to make it more versatile. It uses a StreamReader and a StreamWriter to achieve its speed and tiny memory footprint.
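For instance, a minimal parameterized wrapper along those lines might look like this (a sketch; the function and parameter names and the output naming scheme are mine, the logic is unchanged):
function Split-TextFile {
    param(
        [Parameter(Mandatory=$true)][string]$SourceFile,  # file to split
        [Parameter(Mandatory=$true)][string]$RootName,    # output prefix, e.g. "c:\myfolder\part "
        [string]$Extension = ".csv",
        [int]$BatchSize = 500000
    )
    $enc = [System.Text.Encoding]::GetEncoding(65001) # utf-8, as above
    $streamIn = New-Object System.IO.StreamReader($SourceFile, $enc)
    $partNumber = 1
    $counter = 0
    $outPath = "{0}{1}{2}" -f $RootName, $partNumber, $Extension
    $streamOut = New-Object System.IO.StreamWriter $outPath

    while (($line = $streamIn.ReadLine()) -ne $null) {
        $streamOut.WriteLine($line)
        $counter++
        if ($counter -eq $BatchSize) {
            # roll over to the next part file
            $counter = 0
            $streamOut.Close()
            $partNumber++
            $outPath = "{0}{1}{2}" -f $RootName, $partNumber, $Extension
            $streamOut = New-Object System.IO.StreamWriter $outPath
        }
    }
    $streamIn.Close()
    $streamOut.Close()
}

# e.g. Split-TextFile "c:\myfolder\mylargeTextyFile.csv" "c:\myfolder\part " -BatchSize 500000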
My requirement was a bit different. I often work with comma-delimited and tab-delimited ASCII files where a single line is a single record of data, and they are really big, so I need to split them into manageable parts (while preserving the header row).
So I reverted to my classic VBScript methods and bashed together a small .vbs script that can be run on any Windows computer (it gets automatically executed by the WScript.exe script host engine on Windows).
The benefit of this method is that it uses Text Streams, so the underlying data isn't loaded into memory (or at least not all at once). The result is that it's exceptionally fast and doesn't need much memory to run. The test file I just split using this script on my i7 was about 1 GB in size, had about 12 million lines of text, and was split into 25 part files (each with about 500k lines); the processing took about 2 minutes, and memory use never went over 3 MB at any point.
The caveat here is that it relies on the text file having "lines" (meaning each record is delimited with a CRLF), since the Text Stream object uses the "ReadLine" function to process a single line at a time. But hey, if you're working with TSV or CSV files, it's perfect.
Option Explicit

Private Const INPUT_TEXT_FILE = "c:\bigtextfile.txt"
Private Const REPEAT_HEADER_ROW = True
Private Const LINES_PER_PART = 500000

Dim oFileSystem, oInputFile, oOutputFile, iOutputFile, iLineCounter, sHeaderLine, sLine, sFileExt, sStart

sStart = Now()
sFileExt = Right(INPUT_TEXT_FILE, Len(INPUT_TEXT_FILE) - InstrRev(INPUT_TEXT_FILE, ".") + 1)
iLineCounter = 0
iOutputFile = 1

Set oFileSystem = CreateObject("Scripting.FileSystemObject")
Set oInputFile = oFileSystem.OpenTextFile(INPUT_TEXT_FILE, 1, False)
Set oOutputFile = oFileSystem.OpenTextFile(Replace(INPUT_TEXT_FILE, sFileExt, "_" & iOutputFile & sFileExt), 2, True)

If REPEAT_HEADER_ROW Then
    iLineCounter = 1
    sHeaderLine = oInputFile.ReadLine()
    Call oOutputFile.WriteLine(sHeaderLine)
End If

Do While Not oInputFile.AtEndOfStream
    sLine = oInputFile.ReadLine()
    Call oOutputFile.WriteLine(sLine)
    iLineCounter = iLineCounter + 1
    If iLineCounter Mod LINES_PER_PART = 0 Then
        iOutputFile = iOutputFile + 1
        Call oOutputFile.Close()
        Set oOutputFile = oFileSystem.OpenTextFile(Replace(INPUT_TEXT_FILE, sFileExt, "_" & iOutputFile & sFileExt), 2, True)
        If REPEAT_HEADER_ROW Then
            Call oOutputFile.WriteLine(sHeaderLine)
        End If
    End If
Loop

Call oInputFile.Close()
Call oOutputFile.Close()
Set oFileSystem = Nothing

Call MsgBox("Done" & vbCrLf & "Lines Processed:" & iLineCounter & vbCrLf & "Part Files: " & iOutputFile & vbCrLf & "Start Time: " & sStart & vbCrLf & "Finish Time: " & Now())
Here is my solution to split a file called patch6.txt (about 32,000 lines) into separate files of 1,000 lines each. It's not quick, but it does the job.
$infile = "D:\Malcolm\Test\patch6.txt"
$path = "D:\Malcolm\Test\"
$lineCount = 0
$fileCount = 1

foreach ($computername in Get-Content $infile)
{
    Write-Output $computername | Out-File -Append "$path$fileCount.txt"
    $lineCount++
    if ($lineCount -eq 1000)
    {
        $fileCount++
        $lineCount = 0
    }
}