I have Daniel Shiffman's code (below). I'm trying to read out the Z coordinate. I'm not at all sure how to do this, so any help is much appreciated.
AveragePointTracking.pde
// Daniel Shiffman
// Tracking the average location beyond a given depth threshold
// Thanks to Dan O'Sullivan
// http://www.shiffman.net
// https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing
import org.openkinect.*;
import org.openkinect.processing.*;
// Showing how we can farm all the kinect stuff out to a separate class
KinectTracker tracker;
// Kinect Library object
Kinect kinect;
void setup() {
  size(640,600);
  kinect = new Kinect(this);
  tracker = new KinectTracker();
}
void draw() {
  background(255);
  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();
  // Let's draw the raw location
  PVector v1 = tracker.getPos();
  fill(50,100,250,200);
  noStroke();
  ellipse(v1.x,v1.y,10,10);
  // Let's draw the "lerped" location
  //PVector v2 = tracker.getLerpedPos();
  //fill(100,250,50,200);
  //noStroke();
  //ellipse(v2.x,v2.y,20,20);
  // Display some info
  int t = tracker.getThreshold();
  fill(0);
  text("Location-X: " + v1.x,10,500);
  text("Location-Y: " + v1.y,10,530);
  text("Location-Z: ",10,560);
  text("threshold: " + t,10,590);
}
void stop() {
  tracker.quit();
  super.stop();
}

KinectTracker.pde
class KinectTracker {
  // Size of kinect image
  int kw = 640;
  int kh = 480;
  int threshold = 500;
  // Raw location
  PVector loc;
  // Interpolated location
  PVector lerpedLoc;
  // Depth data
  int[] depth;
  PImage display;

  KinectTracker() {
    kinect.start();
    kinect.enableDepth(true);
    // We could skip processing the grayscale image for efficiency
    // but this example is just demonstrating everything
    kinect.processDepthImage(true);
    display = createImage(kw,kh,PConstants.RGB);
    loc = new PVector(0,0);
    lerpedLoc = new PVector(0,0);
  }
  void track() {
    // Get the raw depth as array of integers
    depth = kinect.getRawDepth();
    // Being overly cautious here
    if (depth == null) return;
    float sumX = 0;
    float sumY = 0;
    float count = 0;
    for (int x = 0; x < kw; x++) {
      for (int y = 0; y < kh; y++) {
        // Mirroring the image
        int offset = kw-x-1+y*kw;
        // Grabbing the raw depth
        int rawDepth = depth[offset];
        // Testing against threshold
        if (rawDepth < threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }
    // As long as we found something
    if (count != 0) {
      loc = new PVector(sumX/count,sumY/count);
    }
    // Interpolating the location, doing it arbitrarily for now
    lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
    lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
  }
  PVector getLerpedPos() {
    return lerpedLoc;
  }

  PVector getPos() {
    return loc;
  }

  void display() {
    PImage img = kinect.getDepthImage();
    // Being overly cautious here
    if (depth == null || img == null) return;
    // Going to rewrite the depth image to show which pixels are in threshold
    // A lot of this is redundant, but this is just for demonstration purposes
    display.loadPixels();
    for (int x = 0; x < kw; x++) {
      for (int y = 0; y < kh; y++) {
        // mirroring image
        int offset = kw-x-1+y*kw;
        // Raw depth
        int rawDepth = depth[offset];
        int pix = x+y*display.width;
        if (rawDepth < threshold) {
          // A red color instead
          display.pixels[pix] = color(245,100,100);
        }
        else {
          display.pixels[pix] = img.pixels[offset];
        }
      }
    }
    display.updatePixels();
    // Draw the image
    image(display,0,0);
  }
  void quit() {
    kinect.quit();
  }

  int getThreshold() {
    return threshold;
  }

  void setThreshold(int t) {
    threshold = t;
  }
}

Posted on 2013-04-27 13:38:18
There are two main steps:

1. Getting the index into the depth array for a given x,y position. Note that the coordinates are mirrored (as in int offset = kw-x-1+y*kw;); normally the index is computed as index = y*width+x, as explained in the get() reference notes.
2. Storing that depth in the z component of the tracked location.

So, in theory, all you need to do is add this at the end of the track() method:

lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];

Like so:
void track() {
  // Get the raw depth as array of integers
  depth = kinect.getRawDepth();
  // Being overly cautious here
  if (depth == null) return;
  float sumX = 0;
  float sumY = 0;
  float count = 0;
  for (int x = 0; x < kw; x++) {
    for (int y = 0; y < kh; y++) {
      // Mirroring the image
      int offset = kw-x-1+y*kw;
      // Grabbing the raw depth
      int rawDepth = depth[offset];
      // Testing against threshold
      if (rawDepth < threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  // As long as we found something
  if (count != 0) {
    loc = new PVector(sumX/count,sumY/count);
  }
  // Interpolating the location, doing it arbitrarily for now
  lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
  lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
  lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];
}

I can't test with a Kinect right now, but this should work. I'm not sure whether you'll get the depth of the correct pixel or of the mirrored pixel. The only other option would be:
lerpedLoc.z = depth[((int)lerpedLoc.x)+((int)lerpedLoc.y)*kw];

Posted on 2013-04-26 15:14:45
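To sanity-check the two indexing schemes above (mirrored vs. plain), here is a small standalone Java sketch, independent of the Kinect library; the width and test coordinates are arbitrary values chosen for illustration:

```java
class DepthIndexDemo {
    // Conventional row-major index: index = y*width + x
    static int plainIndex(int x, int y, int width) {
        return y * width + x;
    }

    // Mirrored index used in track()/display(): the x axis is flipped
    static int mirroredIndex(int x, int y, int width) {
        return width - x - 1 + y * width;
    }

    public static void main(String[] args) {
        int kw = 640; // Kinect depth image width
        // x=0 in screen space maps to the last column (kw-1) of the raw row
        System.out.println(mirroredIndex(0, 0, kw));  // 639
        // Both indices address the same row, mirrored about its center
        System.out.println(plainIndex(10, 2, kw));    // 2*640+10  = 1290
        System.out.println(mirroredIndex(10, 2, kw)); // 2*640+629 = 1909
    }
}
```

Both formulas land in the same row (y*width to y*width+width-1); they differ only in which end of the row x counts from, which is why one of the two lines above will read the mirrored pixel.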
There are two ways...

The way Daniel's code currently accesses coordinates is with a 2D vector (i.e. with X and Y). You could turn that into a 3D vector (so it also stores a Z coordinate), and the OpenKinect library should return the Z coordinate the same way it handles X and Y... I think ;-) (I'd have to check his source). But that would return a Z coordinate for every pixel, which you would then have to loop through, which is tedious and computationally expensive.

What Daniel actually does in this example is look up the depth at a specific XY position and return it to you when it is beyond a certain threshold. That is the rawDepth integer you see in KinectTracker. It tests whether that value is less than the threshold (which you can change), and if it is, it colors those pixels and writes them into an image buffer. You can then ask that image for the XY coordinates, or pass it to a blob-detection routine, etc.
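The threshold test described above, together with the averaging that track() performs on the pixels that pass it, can be sketched without the hardware. The toy 4x3 depth array and the values below are made up purely for illustration:

```java
class CentroidDemo {
    // Average position of all pixels whose depth is below the threshold,
    // using the plain index = y*width + x convention (no mirroring here)
    static float[] centroidBelowThreshold(int[] depth, int width, int height, int threshold) {
        float sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (depth[y * width + x] < threshold) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null; // nothing closer than the threshold
        return new float[] { sumX / count, sumY / count };
    }

    public static void main(String[] args) {
        // Toy 4x3 "depth image": two near pixels, at (1,0) and (3,2)
        int[] depth = {
            900, 400, 900, 900,
            900, 900, 900, 900,
            900, 900, 900, 400
        };
        float[] c = centroidBelowThreshold(depth, 4, 3, 500);
        System.out.println(c[0] + ", " + c[1]); // 2.0, 1.0
    }
}
```

The centroid lands halfway between the two near pixels, which is exactly why the tracked dot sits in the middle of whatever is closest to the camera.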
Posted on 2013-04-27 14:38:46
Adding this at the end of void track() worked:

lerpedLoc.z = depth[kw-((int)lerpedLoc.x)-1+((int)lerpedLoc.y)*kw];

Then I changed the last block in void draw() to the code below to read the Z value (note that this uses v2, so the getLerpedPos() block in draw() has to be uncommented):
// Display some info
int t = tracker.getThreshold();
fill(0);
text("Location-X: " + v1.x,10,500);
text("Location-Y: " + v1.y,10,530);
text("Location-Z: " + v2.z,10,560); // << Adding this worked!
text("threshold: " + t,10,590);

https://stackoverflow.com/questions/16237577