I'm trying to locate an image on the screen using the pyautogui module and its function
pyautogui.locateOnScreen(), but it takes roughly 5-10 seconds to process. Is there another way to find an image on the screen faster? Basically, I want a faster version of locateOnScreen().
Posted on 2017-03-23 12:27:56
The official documentation says it should take 1-2 seconds on a 1920x1080 screen, so your timing looks somewhat slow. I would try to optimize:
Use grayscale=True (it should provide about a 30% speedup). This is all described in the documentation linked above.
If that still isn't fast enough, you can check the pyautogui source and see that the on-screen search uses a specific string-search algorithm (Knuth-Morris-Pratt) implemented in pure Python. Reimplementing that part in C could therefore give a fairly noticeable speedup.
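To see why a pure-Python search is slow, here is a minimal, illustrative sketch of naive exhaustive template matching (not pyautogui's actual code): it tries every offset and compares every pixel, so the cost grows with both the screenshot size and the template size.

```python
def find_subimage(screen, template):
    """Naive exhaustive search: return (row, col) of the first exact
    match of `template` inside `screen`, or None. Cost is roughly
    O(W*H*w*h) pixel comparisons, which is why a pure-Python
    implementation is slow on a full-resolution screenshot."""
    H, W = len(screen), len(screen[0])
    h, w = len(template), len(template[0])
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            if all(screen[top + r][left + c] == template[r][c]
                   for r in range(h) for c in range(w)):
                return (top, left)
    return None

screen = [
    [0, 0, 0, 0],
    [0, 1, 2, 0],
    [0, 3, 4, 0],
]
template = [[1, 2],
            [3, 4]]
print(find_subimage(screen, template))  # (1, 1)
```

Grayscale helps because each comparison touches one channel instead of three; a C implementation helps because the inner comparison loop dominates the runtime.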
Posted on 2020-10-07 16:38:39
Create a function and use threading (requires OpenCV for the confidence parameter):
import pyautogui
import threading

def locate_cat():
    cat = None
    while cat is None:
        cat = pyautogui.locateOnScreen('Pictures/cat.png', confidence=.65, region=(1722, 748, 200, 450))
    return cat

If you know roughly where the image appears on screen, you can use the region parameter like this.
In some cases, you can locate the image on screen once, assign the result to a variable, and pass region=somevar on later calls, so the search starts from where the image was last found, which helps speed up detection.
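One way to sketch that caching idea is a small helper (hypothetical, not part of pyautogui) that pads the last-found box by a margin and clamps it to the screen, producing a (left, top, width, height) tuple to pass as region= on the next call:

```python
def padded_region(box, margin, screen_w, screen_h):
    """Given the last-found box as (left, top, width, height), return a
    slightly larger (left, top, width, height) region clamped to the
    screen bounds, suitable for pyautogui's region= parameter."""
    left, top, width, height = box
    new_left = max(0, left - margin)
    new_top = max(0, top - margin)
    new_right = min(screen_w, left + width + margin)
    new_bottom = min(screen_h, top + height + margin)
    return (new_left, new_top, new_right - new_left, new_bottom - new_top)

print(padded_region((1722, 748, 200, 450), 50, 1920, 1080))
# (1672, 698, 248, 382)
```

Searching only this padded region instead of the whole screen is what gives the speedup, at the cost of missing the target if it moved far since the last detection.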
Example:
import pyautogui

def first_find():
    front_door = None
    while front_door is None:
        # Search the known approximate area of the screen first.
        front_door = pyautogui.locateOnScreen('frontdoor.png', confidence=.95, region=(1722, 748, 200, 450))
    return front_door

def second_find(last_found):
    front_door = None
    while front_door is None:
        # Reuse the last-found box as the search region.
        front_door = pyautogui.locateOnScreen('frontdoor.png', confidence=.95, region=last_found)
    return front_door

def find_person(door_region):
    person = None
    while person is None:
        person = pyautogui.locateOnScreen('person.png', confidence=.95, region=door_region)
    return person

while True:
    front_door = first_find()
    front_door = second_find(front_door)
    if front_door is not None:
        find_person(front_door)

Posted on 2021-06-15 08:14:42
I faced the same problem. It is a very handy library, but it is slow. I got about a 10x speedup by using cv2 and PIL instead:
def benchmark_opencv_pil(method):
    img = ImageGrab.grab(bbox=REGION)
    img_cv = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
    res = cv.matchTemplate(img_cv, GAME_OVER_PICTURE_CV, method)
    return (res >= 0.8).any()

This works well with TM_CCOEFF_NORMED. (Obviously, you can also tune the 0.8 threshold.)
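To make the 0.8 threshold concrete: TM_CCOEFF_NORMED produces a score per offset that approaches 1.0 for a near-perfect match. Here is an illustrative pure-Python sketch (not OpenCV's code) of that mean-subtracted normalized correlation for a single candidate offset:

```python
import math

def ncc_score(patch, template):
    """Normalized cross-correlation of two equally-sized grayscale
    patches after mean subtraction -- the quantity TM_CCOEFF_NORMED
    computes at each offset. 1.0 means a perfect match, -1.0 an
    inverted one."""
    a = [p for row in patch for p in row]
    b = [p for row in template for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da)) * math.sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

tpl = [[10, 20], [30, 40]]
print(round(ncc_score(tpl, tpl), 6))                  # 1.0 (identical patches)
print(round(ncc_score([[40, 30], [20, 10]], tpl), 6)) # -1.0 (inverted patch)
```

So `(res >= 0.8).any()` asks: does any offset in the search region correlate with the template at 0.8 or better?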
For completeness, here is the full benchmark:
import pyautogui as pg
import numpy as np
import cv2 as cv
from PIL import ImageGrab, Image
import time

REGION = (0, 0, 400, 400)
GAME_OVER_PICTURE_PIL = Image.open("./balloon_fight_game_over.png")
GAME_OVER_PICTURE_CV = cv.imread('./balloon_fight_game_over.png')

def timing(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(
            f.__name__, (time2 - time1) * 1000.0))
        return ret
    return wrap

@timing
def benchmark_pyautogui():
    res = pg.locateOnScreen(GAME_OVER_PICTURE_PIL,
                            grayscale=True,  # should provide a speed-up
                            confidence=0.8,
                            region=REGION)
    return res is not None

@timing
def benchmark_opencv_pil(method):
    img = ImageGrab.grab(bbox=REGION)
    img_cv = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
    res = cv.matchTemplate(img_cv, GAME_OVER_PICTURE_CV, method)
    return (res >= 0.8).any()

if __name__ == "__main__":
    im_pyautogui = benchmark_pyautogui()
    print(im_pyautogui)

    methods = ['cv.TM_CCOEFF', 'cv.TM_CCOEFF_NORMED', 'cv.TM_CCORR',
               'cv.TM_CCORR_NORMED', 'cv.TM_SQDIFF', 'cv.TM_SQDIFF_NORMED']
    # cv.TM_CCOEFF_NORMED actually seems to be the most relevant method
    for method in methods:
        print(method)
        im_opencv = benchmark_opencv_pil(eval(method))
        print(im_opencv)

The results show roughly a 10x improvement:
benchmark_pyautogui function took 175.712 ms
False
cv.TM_CCOEFF
benchmark_opencv_pil function took 21.283 ms
True
cv.TM_CCOEFF_NORMED
benchmark_opencv_pil function took 23.377 ms
False
cv.TM_CCORR
benchmark_opencv_pil function took 20.465 ms
True
cv.TM_CCORR_NORMED
benchmark_opencv_pil function took 25.347 ms
False
cv.TM_SQDIFF
benchmark_opencv_pil function took 23.799 ms
True
cv.TM_SQDIFF_NORMED
benchmark_opencv_pil function took 22.882 ms
True

https://stackoverflow.com/questions/42973863