I've spent quite a bit of time trying to optimize my file-hashing algorithm to squeeze out every last bit of performance.
See my previous SO threads:
Get File Hash Performance/Optimization
FileChannel ByteBuffer and Hashing Files
Determining Appropriate Buffer Size
It has been suggested several times that I use Java NIO to get a native performance boost (by keeping the buffers in the system and out of the JVM). However, my NIO code runs considerably slower, and that is while benchmarking (hashing the same file over and over with each algorithm, to negate any OS/drive "magic" that could skew the results).
I now have two methods that do the same thing:
This one runs faster almost every time:
/**
* Gets Hash of file.
*
* @param file String path + filename of file to get hash.
* @param hashAlgo Hash algorithm to use. <br/>
* Supported algorithms are: <br/>
* MD2, MD5 <br/>
* SHA-1 <br/>
* SHA-256, SHA-384, SHA-512
* @param BUFFER Buffer size in bytes. Recommended to stay with<br/>
* powers of 2 such as 1024, 2048, <br/>
* 4096, 8192, 16384, 32768, 65536, etc.
* @return String value of hash. (Variable length dependent on hash algorithm used)
* @throws IOException If file is invalid.
* @throws HasherException If no supported or valid hash algorithm was found.
*/
public String getHash(String file, String hashAlgo, int BUFFER) throws IOException, HasherException {
    StringBuffer hexString = null;
    try {
        MessageDigest md = MessageDigest.getInstance(validateHashType(hashAlgo));
        FileInputStream fis = new FileInputStream(file);
        byte[] dataBytes = new byte[BUFFER];
        int nread = 0;
        while ((nread = fis.read(dataBytes)) != -1) {
            md.update(dataBytes, 0, nread);
        }
        fis.close();
        byte[] mdbytes = md.digest();
        hexString = new StringBuffer();
        for (int i = 0; i < mdbytes.length; i++) {
            // %02x keeps the leading zero that Integer.toHexString would drop
            hexString.append(String.format("%02x", mdbytes[i]));
        }
        return hexString.toString();
    } catch (NoSuchAlgorithmException | HasherException e) {
        throw new HasherException("Unsupported Hash Algorithm.", e);
    }
}

My Java NIO method that runs considerably slower most of the time:
/**
* Gets Hash of file using java.nio File Channels and ByteBuffer
* <br/>for native system calls where possible. This may improve <br/>
* performance in some circumstances.
*
* @param fileStr String path + filename of file to get hash.
* @param hashAlgo Hash algorithm to use. <br/>
* Supported algorithms are: <br/>
* MD2, MD5 <br/>
* SHA-1 <br/>
* SHA-256, SHA-384, SHA-512
* @param BUFFER Buffer size in bytes. Recommended to stay with<br/>
* powers of 2 such as 1024, 2048, <br/>
* 4096, 8192, 16384, 32768, 65536, etc.
* @return String value of hash. (Variable length dependent on hash algorithm used)
* @throws IOException If file is invalid.
* @throws HasherException If no supported or valid hash algorithm was found.
*/
public String getHashNIO(String fileStr, String hashAlgo, int BUFFER) throws IOException, HasherException {
    File file = new File(fileStr);
    MessageDigest md = null;
    FileInputStream fis = null;
    FileChannel fc = null;
    ByteBuffer bbf = null;
    StringBuilder hexString = null;
    try {
        md = MessageDigest.getInstance(hashAlgo);
        fis = new FileInputStream(file);
        fc = fis.getChannel();
        bbf = ByteBuffer.allocateDirect(BUFFER); // allocation in bytes - 1024, 2048, 4096, 8192
        int b;
        b = fc.read(bbf);
        while ((b != -1) && (b != 0)) {
            bbf.flip();
            byte[] bytes = new byte[b];
            bbf.get(bytes);
            md.update(bytes, 0, b);
            bbf.clear();
            b = fc.read(bbf);
        }
        fis.close();
        byte[] mdbytes = md.digest();
        hexString = new StringBuilder();
        for (int i = 0; i < mdbytes.length; i++) {
            // %02x keeps the leading zero that Integer.toHexString would drop
            hexString.append(String.format("%02x", mdbytes[i]));
        }
        return hexString.toString();
    } catch (NoSuchAlgorithmException e) {
        throw new HasherException("Unsupported Hash Algorithm.", e);
    }
}

My thinking is that Java NIO tries to use native system calls and the like to keep the processing and storage (the buffers) in the system and out of the JVM - which (in theory) spares the program from having to shuttle data back and forth between the JVM and the system. In theory this should be faster... but perhaps my MessageDigest forces the JVM to pull the buffers in anyway, negating any performance improvement the native buffers/system calls could provide? Am I correct in this logic, or am I way off?
Please help me understand why Java NIO is not better in this scenario.
Posted on 2013-05-02 02:41:24
Two things might make your NIO approach better: a memory-mapped file, and passing the ByteBuffer to the digest instead of a byte[] array. The former should avoid copying data between the file cache and the application heap, and the latter should avoid copying between the buffer and a byte array. Without these optimizations, you likely end up with more copying than a plain non-NIO approach.
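Putting both suggestions together, a sketch of the memory-mapped variant might look like the following (SHA-256 and the class name are assumptions for illustration; note that FileChannel.map is limited to Integer.MAX_VALUE bytes per region, so large files need several mappings):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MappedDigest {

    // Map the file read-only and hand the mapped buffer straight to the
    // digest: no explicit read loop and no byte[] copy. Mapped in chunks
    // because a single mapping cannot exceed Integer.MAX_VALUE bytes.
    static String hash(Path path, String algo)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algo);
        try (FileChannel fc = FileChannel.open(path, StandardOpenOption.READ)) {
            long size = fc.size();
            long pos = 0;
            while (pos < size) {
                long chunk = Math.min(size - pos, Integer.MAX_VALUE);
                MappedByteBuffer mbb = fc.map(FileChannel.MapMode.READ_ONLY, pos, chunk);
                md.update(mbb);   // digest reads the mapped region directly
                pos += chunk;
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("mapped-demo", ".bin");
        Files.write(tmp, "hello world".getBytes("UTF-8"));
        System.out.println(hash(tmp, "SHA-256"));
        Files.delete(tmp);
    }
}
```

Whether this actually wins depends on file size and OS page-cache behavior; mapping has setup cost, so for small files the plain buffered read can still come out ahead.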
https://stackoverflow.com/questions/16321299