
使用amazon AWS和S3实例进行语义分割

Stack Overflow user
Asked on 2019-07-14 21:22:22
1 answer · 176 views · 0 votes

This may be a simple question, but I have been stuck on it for a while. I want to train an FCN on Amazon AWS. To do so, I want to follow the process used in this example (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/semantic_segmentation_pascalvoc/semantic_segmentation_pascalvoc.ipynb) with my own dataset.

Unlike that procedure, I saved the training and annotation images (as .png) in a single S3 bucket containing four folders (Training, TrainingAnnotation, Validation, ValidationAnnotation). The files in the training and annotation folders have the same names.
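Since the algorithm pairs each image with its annotation mask by file name, a quick sanity check of that pairing can help rule out mismatches. Below is a minimal sketch: the key lists are hard-coded for illustration (in practice they would come from listing the S3 prefixes, e.g. with boto3), and the folder names are the ones from the question.

```python
from pathlib import PurePosixPath

def unmatched(image_keys, annotation_keys):
    """Return image file stems that have no same-named annotation."""
    image_stems = {PurePosixPath(k).stem for k in image_keys}
    annotation_stems = {PurePosixPath(k).stem for k in annotation_keys}
    return sorted(image_stems - annotation_stems)

# Illustrative key lists; in practice these would come from listing
# the S3 prefixes (e.g. with boto3's list_objects_v2).
train_keys = ['Training/0001.png', 'Training/0002.png']
annotation_keys = ['TrainingAnnotation/0001.png']

print(unmatched(train_keys, annotation_keys))  # ['0002']
```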

I train my model with the following code:

%%time
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
print(role)

# SageMaker session, used for default_bucket() and boto_region_name below
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = 'semantic-segmentation'
print(bucket)

from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'semantic-segmentation', repo_version="latest")
print (training_image)

s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
print(s3_output_location)

# Create the sagemaker estimator object.
ss_model = sagemaker.estimator.Estimator(training_image,
                                         role, 
                                         train_instance_count = 1, 
                                         train_instance_type = 'ml.p2.xlarge',
                                         train_volume_size = 50,
                                         train_max_run = 360000,
                                         output_path = s3_output_location,
                                         base_job_name = 'ss-notebook-demo',
                                         sagemaker_session = sess)
num_training_samples=5400
# Setup hyperparameters 
ss_model.set_hyperparameters(backbone='resnet-50', 
                             algorithm='fcn',                   
                             use_pretrained_model='True', 
                             crop_size=248,
                             num_classes=4, 
                             epochs=10, 
                             learning_rate=0.0001,                             
                             optimizer='rmsprop',
                             lr_scheduler='poly',
                             mini_batch_size=16, 
                             validation_mini_batch_size=16,
                             early_stopping=True, 
                             early_stopping_patience=2, 
                             early_stopping_min_epochs=10,    
                             num_training_samples=num_training_samples) 
# Create full bucket names

bucket1 = 'imagelabel1' 
train_channel = 'Training'
validation_channel = 'Validation'
train_annotation_channel = 'TrainingAnnotation'
validation_annotation_channel =  'ValidataionAnnotation'


s3_train_data = 's3://{}/{}'.format(bucket1, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket1, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket1, train_annotation_channel)
s3_validation_annotation  = 's3://{}/{}'.format(bucket1, validation_annotation_channel)



distribution = 'FullyReplicated'
# Create sagemaker s3_input objects
train_data = sagemaker.session.s3_input(s3_train_data, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')

data_channels = {'train': train_data, 
                 'validation': validation_data,
                 'train_annotation': train_annotation, 
                 'validation_annotation':validation_annotation}
ss_model.fit(inputs=data_channels, logs=True)

The error message is:

ValueError: Error training ss-notebook-demo-2019-07-15-06-42-25-784: Failed. Reason: ClientError: train channel is empty.

Does anyone know what is wrong with this code?

Thanks

西蒙


1 Answer

Stack Overflow user

Answered on 2019-07-18 18:13:16

It looks like your folder hierarchy is not using the correct names. According to the documentation (https://docs.aws.amazon.com/sagemaker/latest/dg/semantic-segmentation.html#semantic-segmentation-inputoutput), it should look like this:

s3://bucket_name
    |
    |- train
                 |
                 | - 0000.jpg
                 | - coffee.jpg
    |- validation
                 |
                 | - 00a0.jpg
                 | - bananna.jpg              
    |- train_annotation
                 |
                 | - 0000.png
                 | - coffee.png
    |- validation_annotation
                 |
                 | - 00a0.png   
                 | - bananna.png 
    |- label_map
                 | - train_label_map.json  
                 | - validation_label_map.json 

Fixing these prefixes should solve your issue:

train_channel = 'Training'
validation_channel = 'Validation'
train_annotation_channel = 'TrainingAnnotation'
validation_annotation_channel =  'ValidataionAnnotation'
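For reference, here is a sketch of those prefix variables rewritten to match the documented channel names (the bucket name `imagelabel1` is taken from the question; the S3 folders themselves would need to be renamed to match):

```python
# Channel prefixes renamed to match the documented layout; the
# corresponding folders in the bucket must be renamed as well.
bucket1 = 'imagelabel1'
train_channel = 'train'
validation_channel = 'validation'
train_annotation_channel = 'train_annotation'
validation_annotation_channel = 'validation_annotation'

s3_train_data = 's3://{}/{}'.format(bucket1, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket1, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket1, train_annotation_channel)
s3_validation_annotation = 's3://{}/{}'.format(bucket1, validation_annotation_channel)

print(s3_train_data)  # s3://imagelabel1/train
```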
0 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/57027808
