I have a Cloud Function that calls SCC's list_assets and converts the paginated output into a list (to fetch all the results). However, since I have quite a lot of assets in the organization tree, the fetch takes a long time and the Cloud Function times out (maximum timeout: 540 seconds).

asset_iterator = security_client.list_assets(org_name)
asset_fetch_all = list(asset_iterator)

I tried the export through the Web UI and it works fine (it took about 5 minutes). Is there a way to export the assets directly to a Cloud Storage bucket using the API?
Posted on 2020-06-12 17:11:26
I developed the same thing in Python, but for exporting to BQ. Searching in BigQuery is easier than searching in a file; the code for GCS is very similar. Here is my working code with BQ:
import os
from google.cloud import asset_v1
from google.cloud.asset_v1.proto import asset_service_pb2
from google.cloud.asset_v1 import enums

def GCF_ASSET_TO_BQ(request):
    client = asset_v1.AssetServiceClient()
    parent = 'organizations/{}'.format(os.getenv('ORGANIZATION_ID'))
    output_config = asset_service_pb2.OutputConfig()
    output_config.bigquery_destination.dataset = 'projects/{}/datasets/{}'.format(os.getenv('PROJECT_ID'), os.getenv('DATASET'))
    content_type = enums.ContentType.RESOURCE
    output_config.bigquery_destination.table = 'asset_export'
    output_config.bigquery_destination.force = True
    response = client.export_assets(parent, output_config, content_type=content_type)
    # To wait for the export to finish:
    # response.result()
    # Do stuff after export
    return "done", 200

if __name__ == "__main__":
    GCF_ASSET_TO_BQ('')

As you can see, some values come from environment variables (ORGANIZATION_ID, PROJECT_ID and DATASET). To export to Cloud Storage, you have to change the definition of output_config like this:
output_config = asset_service_pb2.OutputConfig()
output_config.gcs_destination.uri = 'gs://path/to/file'

Posted on 2020-07-17 01:07:13
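Putting those two pieces together, a minimal sketch of the full GCS variant might look like the following (the BUCKET env var, the object name asset_export.json, and the gcs_asset_uri helper are placeholders of mine, not from the answer above; the asset_v1 calls mirror the BQ code and are untested):

```python
import os

def gcs_asset_uri(bucket, object_name):
    # Build the gs:// destination URI for the export.
    return "gs://{}/{}".format(bucket, object_name)

def GCF_ASSET_TO_GCS(request):
    # GCP imports are kept inside the function so the helper above
    # can be used without the google-cloud-asset package installed.
    from google.cloud import asset_v1
    from google.cloud.asset_v1.proto import asset_service_pb2
    from google.cloud.asset_v1 import enums

    client = asset_v1.AssetServiceClient()
    parent = 'organizations/{}'.format(os.getenv('ORGANIZATION_ID'))
    output_config = asset_service_pb2.OutputConfig()
    output_config.gcs_destination.uri = gcs_asset_uri(os.getenv('BUCKET'), 'asset_export.json')
    content_type = enums.ContentType.RESOURCE
    response = client.export_assets(parent, output_config, content_type=content_type)
    # export_assets returns a long-running operation; the export keeps
    # running server-side even if the function returns before it finishes,
    # which is what avoids the 540-second Cloud Function timeout.
    return "done", 200
```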
Try this: we use it to upload findings results into a bucket. Make sure to give the SP running the function the proper permissions on the bucket.
def test_list_medium_findings(source_name):
    # [START list_findings_at_a_time]
    from google.cloud import securitycenter
    from google.cloud import storage

    # Create a new client.
    client = securitycenter.SecurityCenterClient()

    # Set query parameters.
    organization_id = "11112222333344444"
    org_name = "organizations/{org_id}".format(org_id=organization_id)
    all_sources = "{org_name}/sources/-".format(org_name=org_name)

    # Query Security Command Center (YourFilter is a placeholder for your filter string).
    finding_result_iterator = client.list_findings(all_sources, filter_=YourFilter)

    # Set output file settings.
    bucket = "YourBucketName"
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket)
    output_file_name = "YourFileName"
    my_file = bucket.blob(output_file_name)

    with open('/tmp/data.txt', 'w') as file:
        for i, finding_result in enumerate(finding_result_iterator):
            file.write(
                "{}: name: {} resource: {}\n".format(
                    i, finding_result.finding.name, finding_result.finding.resource_name
                )
            )

    # Upload to the bucket.
    my_file.upload_from_filename("/tmp/data.txt")

https://stackoverflow.com/questions/62331287
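As a small variation on the approach above, the /tmp file can be skipped by building the content in memory and calling upload_from_string on the blob. A sketch under the same assumptions (format_finding and upload_findings are hypothetical helpers of mine, not part of the answer):

```python
def format_finding(index, name, resource_name):
    # One line per finding, mirroring the format string used above.
    return "{}: name: {} resource: {}\n".format(index, name, resource_name)

def upload_findings(bucket, output_file_name, finding_result_iterator):
    # bucket: a google.cloud.storage Bucket; finding_result_iterator:
    # the iterator returned by list_findings. Accumulates all lines in
    # memory, so this suits result sets that comfortably fit in RAM.
    lines = []
    for i, result in enumerate(finding_result_iterator):
        lines.append(format_finding(i, result.finding.name, result.finding.resource_name))
    bucket.blob(output_file_name).upload_from_string("".join(lines))
```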