
GeoTrellis: get the points that fall in a polygon grid RDD

Stack Overflow user
Asked 2017-11-17 21:04:17
1 answer · 733 views · 0 followers · 5 votes

I need to compute the average of the point values that fall inside each polygon of a grid.

Essentially a one-to-many join on the condition Polygon.contains(point).

Code (Scala):
// In:
val pointValueRdd : RDD[Feature[Point, Double]]
val squareGridRdd : RDD[Polygon]

// Needed out:
val gridSums : RDD[Feature[Polygon, (Double, Int)]]  // (accum, count)
// or
val gridAverages : RDD[Feature[Polygon, Double]]     // average
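To make the desired one-to-many join concrete, here is a minimal sketch in plain Scala (no Spark, no GeoTrellis). `Pt`, `Cell`, and `averagePerCell` are hypothetical stand-ins for the real geometry types, using axis-aligned cells so `contains` is a simple bounds check:

```scala
// Hypothetical minimal types standing in for Point/Polygon, just to
// illustrate the one-to-many "contains" join and the (sum, count) -> mean step.
case class Pt(x: Double, y: Double)

case class Cell(xmin: Double, ymin: Double, xmax: Double, ymax: Double) {
  def contains(p: Pt): Boolean =
    p.x >= xmin && p.x < xmax && p.y >= ymin && p.y < ymax
}

// For each cell, collect the values of the points it contains and average them.
// Cells with no points yield Double.NaN.
def averagePerCell(cells: Seq[Cell], points: Seq[(Pt, Double)]): Seq[(Cell, Double)] =
  cells.map { cell =>
    val vals = points.collect { case (p, v) if cell.contains(p) => v }
    val avg  = if (vals.isEmpty) Double.NaN else vals.sum / vals.size
    (cell, avg)
  }
```

This brute-force scan is O(cells × points); the spatial-indexing question below is about avoiding exactly that cost at RDD scale.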

Is it possible to use a quadtree index?

I read about ClipToGrid, but I'm not sure whether it is the right tool:

http://geotrellis.readthedocs.io/en/latest/guide/vectors.html#cliptogrid

(Image: the grid is shown in blue, along with the points.)

Any suggestions are welcome.


1 Answer

Stack Overflow user

Answered 2017-11-28 19:24:29

If your polygon grid can be represented as a GeoTrellis LayoutDefinition, there is a straightforward way to accomplish this. A LayoutDefinition describes the tile grid that GeoTrellis layers use to represent large collections of raster tiles. It can also perform transformations between keys in grid space (SpatialKeys) and bounding boxes in map space (Extents).
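The grid-space/map-space conversion described above can be pictured in a few lines of plain Scala. `GridSpec` and `pointToKey` below are hypothetical stand-ins, not GeoTrellis API; they only illustrate the floor-division that maps a map-space coordinate onto a (col, row) key:

```scala
// A back-of-the-envelope version of the map-space -> grid-space conversion:
// columns increase eastward from xmin; rows increase southward from ymax,
// matching the usual raster convention.
case class GridSpec(xmin: Double, ymax: Double, cellWidth: Double, cellHeight: Double) {
  def pointToKey(x: Double, y: Double): (Int, Int) = {
    val col = math.floor((x - xmin) / cellWidth).toInt
    val row = math.floor((ymax - y) / cellHeight).toInt
    (col, row)
  }
}
```

In GeoTrellis itself this role is played by the layout's `mapTransform`, as used later in the answer's snippet.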

I won't assume that you can represent your grid with a LayoutDefinition, and will instead show an example that solves the more general case. If you can represent your polygon grid with a LayoutDefinition, the approach becomes even simpler. Here is a code snippet for the more general approach. It compiles but is untested, so if you find issues with it, please let me know. I've included this example in the doc-examples project in this PR: https://github.com/locationtech/geotrellis/pull/2489

Code (Scala):
import geotrellis.raster._
import geotrellis.spark._
import geotrellis.spark.tiling._
import geotrellis.vector._

import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

import java.util.UUID

// see https://stackoverflow.com/questions/47359243/geotrellis-get-the-points-that-fall-in-a-polygon-grid-rdd

val pointValueRdd : RDD[Feature[Point,Double]] = ???
val squareGridRdd : RDD[Polygon] = ???

// Since we'll be grouping by key and then joining, it's worth defining a partitioner
// up front.
val partitioner = new HashPartitioner(100)

// First, we'll determine the bounds of the Polygon grid
// and the average height and width, to create a GridExtent
val (extent, totalHeight, totalWidth, totalCount) =
  squareGridRdd
    .map { poly =>
      val e = poly.envelope
      (e, e.height, e.width, 1)
    }
    .reduce { case ((extent1, height1, width1, count1), (extent2, height2, width2, count2)) =>
      (extent1.combine(extent2), height1 + height2, width1 + width2, count1 + count2)
    }

val gridExtent = GridExtent(extent, totalHeight / totalCount, totalWidth / totalCount)

// We then use this to construct a LayoutDefinition that represents "tiles"
// that are 1x1.
val layoutDefinition = LayoutDefinition(gridExtent, tileCols = 1, tileRows = 1)

// Now that we have a layout, we can cut the polygons up per SpatialKey, as well as
// assign points to a SpatialKey, using ClipToGrid

// In order to keep track of which polygons go where, we'll assign a UUID to each
// polygon, so that they can be reconstructed. If we were dealing with PolygonFeatures,
// we could store the feature data as well. If those features already had IDs, we could
// also just use those IDs instead of UUIDs.
// We'll also store the original polygon, as they are not too big and it makes
// the reconstruction process (which might be prone to floating point errors) a little
// easier. For more complex polygons this might not be the most performant strategy.
// We then group by key to produce a set of polygons that intersect each key.
val cutPolygons: RDD[(SpatialKey, Iterable[Feature[Geometry, (Polygon, UUID)]])] =
  squareGridRdd
    .map { poly => Feature(poly, (poly, UUID.randomUUID)) }
    .clipToGrid(layoutDefinition)
    .groupByKey(partitioner)

// While we could also use clipToGrid for the points, we can simply use
// mapTransform on the layout to determine which SpatialKey each
// point should be assigned to.
// We also group this by key to produce the set of points that intersect the key.
val pointsToKeys: RDD[(SpatialKey, Iterable[PointFeature[Double]])] =
  pointValueRdd
    .map { pointFeature =>
      (layoutDefinition.mapTransform.pointToKey(pointFeature.geom), pointFeature)
    }
    .groupByKey(partitioner)

// Now we can join those two RDDs and perform our point in polygon tests.
// Use a left outer join so that polygons with no points can be recorded.
// Once we have the point information, we key the RDD by the UUID and
// reduce the results.
val result: RDD[Feature[Polygon, Double]] =
  cutPolygons
    .leftOuterJoin(pointsToKeys)
    .flatMap { case (_, (polyFeatures, pointsOption)) =>
      pointsOption match {
        case Some(points) =>
          for(
            Feature(geom, (poly, uuid)) <- polyFeatures;
            Feature(point, value) <- points if geom.intersects(point)
          ) yield {
            (uuid, Feature(poly, (value, 1)))
          }
        case None =>
          for(Feature(geom, (poly, uuid)) <- polyFeatures) yield {
            (uuid, Feature(poly, (0.0, 0)))
          }
      }
    }
    .reduceByKey { case (Feature(poly1, (accum1, count1)), Feature(poly2, (accum2, count2))) =>
      Feature(poly1, (accum1 + accum2, count1 + count2))
    }
    .map { case (_, feature) =>
      // We no longer need the UUID; also compute the mean.
      // (Polygons that matched no points have count 0, so this yields Double.NaN.)
      feature.mapData { case (acc, c) => acc / c }
    }
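The flatMap/reduceByKey/map pipeline above boils down to merging (sum, count) pairs per polygon id and then dividing. A plain-Scala sketch of that fold, with `mergeAndAverage` as a hypothetical helper name (string ids stand in for the UUIDs):

```scala
// Merge per-key (sum, count) contributions, then divide to get the mean,
// mirroring the reduceByKey and mapData steps of the Spark pipeline.
def mergeAndAverage(contributions: Seq[(String, (Double, Int))]): Map[String, Double] =
  contributions
    .groupBy(_._1)
    .map { case (id, entries) =>
      val (sum, count) = entries.map(_._2).reduce {
        case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2)
      }
      // A polygon that matched no points contributes (0.0, 0) and averages to NaN.
      id -> (if (count == 0) Double.NaN else sum / count)
    }
```

Carrying (sum, count) instead of a running average is what makes the reduction associative, which is why the Spark version can use reduceByKey safely across partitions.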
Votes: 4
Page content originally from Stack Overflow.
Original link: https://stackoverflow.com/questions/47359243
