I am using nosetests to unit-test my TensorFlow code, but it produces so much verbose output that it becomes useless.
The following test

    import unittest
    import tensorflow as tf

    class MyTest(unittest.TestCase):

        def test_creation(self):
            self.assertEquals(True, False)

creates an enormous amount of useless logging when run with nosetests:
FAIL: test_creation (tests.test_tf.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/cebrian/GIT/thesis-nilm/code/deepmodels/tests/test_tf.py", line 10, in test_creation
self.assertEquals(True, False)
AssertionError: True != False
-------------------- >> begin captured logging << --------------------
tensorflow: Level 1: Registering Const (<function _ConstantShape at 0x7f4379131c80>) in shape functions.
tensorflow: Level 1: Registering Assert (<function no_outputs at 0x7f43791319b0>) in shape functions.
tensorflow: Level 1: Registering Print (<function _PrintGrad at 0x7f4378effd70>) in gradient.
tensorflow: Level 1: Registering Print (<function unchanged_shape at 0x7f4379131320>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (None) in gradient.
tensorflow: Level 1: Registering HistogramSummary (None) in gradient.
tensorflow: Level 1: Registering ImageSummary (None) in gradient.
tensorflow: Level 1: Registering AudioSummary (None) in gradient.
tensorflow: Level 1: Registering MergeSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering MergeSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering AudioSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering ImageSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering Pack (<function _PackShape at 0x7f4378f047d0>) in shape functions.
tensorflow: Level 1: Registering Unpack (<function _UnpackShape at 0x7f4378f048c0>) in shape functions.
tensorflow: Level 1: Registering Concat (<function _ConcatShape at 0x7f4378f04938>) in shape functions.
tensorflow: Level 1: Registering ConcatOffset (<function _ConcatOffsetShape at 0x7f4378f049b0>) in shape functions.
......

whereas using TensorFlow from an ipython console does not seem nearly as verbose:
$ ipython
Python 2.7.11+ (default, Apr 17 2016, 14:00:29)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
In [2]:

How can I suppress the former logging when running nosetests?
Answered on 2016-08-10 20:42:37
2.0 update (10/8/19): Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open issue (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try setting the log level this way instead:

    import tensorflow as tf
    tf.get_logger().setLevel('INFO')

Additionally, see the documentation on tf.autograph.set_verbosity, which sets the verbosity of autograph log messages. For example:

    # Can also be set using the AUTOGRAPH_VERBOSITY environment variable
    tf.autograph.set_verbosity(1)

v0.12+ update (5/20/17), also works with TF 2.0+:

In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable named TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.
Level | Level for Humans | Level Description
-------|------------------|------------------------------------
0 | DEBUG | [Default] Print all messages
1 | INFO | Filter out INFO messages
2 | WARNING | Filter out INFO & WARNING messages
3      | ERROR            | Filter out INFO, WARNING, and ERROR messages

See the following generic OS example using Python:
    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}
    import tensorflow as tf

To be thorough, you can also set the level for the Python tf_logging module, which is used in summary ops, TensorBoard, various estimators, and so on.
    # append to lines above
    tf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}

As of version 1.14 you will receive warnings if you do not switch to the v1 API as follows:
    # append to lines above
    tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}
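Because nose's logcapture plugin records everything emitted through Python's standard logging module, and the captured output above shows the messages come from a logger named "tensorflow", you can also silence them with the stdlib alone, without calling any TF APIs. A minimal sketch:

```python
import logging

# TensorFlow's Python-side messages (the "tensorflow: Level 1: ..." lines
# nose captures) go through the stdlib logger named 'tensorflow'.
# Raising that logger's level filters them out for any test runner.
logging.getLogger('tensorflow').setLevel(logging.ERROR)
```

This works from a test module's setUp or from a conftest-style setup file, since logger levels are process-global.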
For prior versions of TensorFlow or TF-Learn logging (v0.11.x or lower):

View the page below for information on TensorFlow logging; with the new update you can set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:

    tf.logging.set_verbosity(tf.logging.ERROR)

The page also goes over monitors that can be used with TF-Learn models. Here is the page.

This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a "technically correct" solution (Linux) and the other involves rebuilding TensorFlow.

    script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'

For the other, see this answer, which involves modifying the source and rebuilding TensorFlow.
Answered on 2017-04-04 06:18:01
Running the tests with nosetests --nologcapture will disable the display of these logs. More on nosetests logging: https://nose.readthedocs.io/en/latest/plugins/logcapture.html
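If you want this behaviour permanently rather than on every invocation, nose also reads its options from a config file (setup.cfg, nose.cfg, or ~/.noserc). A sketch, assuming a setup.cfg at the project root (the option name is the command-line flag without the leading dashes):

```ini
[nosetests]
nologcapture=1
```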
Answered on 2016-06-29 18:43:43
Here is an example of how to do this. Unfortunately, it requires modifying the source and rebuilding. There is a tracking bug to make this easier.
https://stackoverflow.com/questions/38073432