
Tensorflow: label_map_item = keypoint_map_dict[kp_config.keypoint_class_name] KeyError: 'item'

Stack Overflow user
Asked on 2021-05-18 20:58:55
1 answer · viewed 119 times · 0 followers · 0 votes

I am following this tutorial to do object detection with Tensorflow: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html I am near the end now and need to run training with my pipeline.config. When I run the command, I get the following error:

model_builder.py", line 844, in keypoint_proto_to_params label_map_item = keypoint_map_dict[kp_config.keypoint_class_name] KeyError: 'item'
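Roughly speaking (a simplified sketch, not the actual TF Object Detection API code), `model_builder.py` builds a dict of keypoint-bearing label-map entries keyed by class name, then looks up `keypoint_class_name` in it. If no entry in the keypoint label map is named `"item"`, the lookup raises exactly this `KeyError`:

```python
# A minimal sketch (assumed, simplified from what model_builder.py actually
# does): the builder indexes only those label-map entries that define
# keypoints, keyed by class name, then looks up keypoint_class_name.

# Entries mirroring the asker's label map: five classes, no keypoints,
# and no class named "item".
label_map_items = [
    {"id": 1, "name": "ColaCan", "keypoints": []},
    {"id": 2, "name": "FantaCan", "keypoints": []},
    {"id": 3, "name": "SpriteLemonCan", "keypoints": []},
    {"id": 4, "name": "Upperside", "keypoints": []},
    {"id": 5, "name": "VanishingLine", "keypoints": []},
]

# Only entries that define keypoints end up in the dict -- here, none do.
keypoint_map_dict = {
    item["name"]: item for item in label_map_items if item["keypoints"]
}

keypoint_class_name = "item"  # the value set in the pipeline.config below
try:
    label_map_item = keypoint_map_dict[keypoint_class_name]
except KeyError as err:
    print(f"KeyError: {err}")  # reproduces the reported failure
```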

My pipeline.config file looks like this:

# hourglass[1] backbone. This config achieves an mAP of 41.92 +/- 0.16 on
# COCO 17 (averaged over 5 runs). This config is TPU compatible.
# [1]: https://arxiv.org/abs/1603.06937
# [2]: https://arxiv.org/abs/1904.07850

model {
  center_net {
    num_classes: 5
    feature_extractor {
      type: "hourglass_104"
      bgr_ordering: true
      channel_means: [104.01362025, 114.03422265, 119.9165958 ]
      channel_stds: [73.6027665 , 69.89082075, 70.9150767 ]
    }
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 512
        max_dimension: 512
        pad_to_max_dimension: true
      }
    }
    object_detection_task {
      task_loss_weight: 1.0
      offset_loss_weight: 1.0
      scale_loss_weight: 0.1
      localization_loss {
        l1_localization_loss {
        }
      }
    }
    object_center_params {
      object_center_loss_weight: 1.0
      min_box_overlap_iou: 0.7
      max_box_predictions: 100
      classification_loss {
        penalty_reduced_logistic_focal_loss {
          alpha: 2.0
          beta: 4.0
        }
      }
    }

    keypoint_label_map_path: "annotations/label_map_cans_in_fridge.pbtxt"
    keypoint_estimation_task {
      task_name: "human_pose"
      task_loss_weight: 1.0
      loss {
        localization_loss {
          l1_localization_loss {
          }
        }
        classification_loss {
          penalty_reduced_logistic_focal_loss {
            alpha: 2.0
            beta: 4.0
          }
        }
      }
      keypoint_class_name: "item"
      keypoint_label_to_std {
        key: "left_ankle"
        value: 0.89
      }
      keypoint_label_to_std {
        key: "left_ear"
        value: 0.35
      }
      keypoint_label_to_std {
        key: "left_elbow"
        value: 0.72
      }
      keypoint_label_to_std {
        key: "left_eye"
        value: 0.25
      }
      keypoint_label_to_std {
        key: "left_hip"
        value: 1.07
      }
      keypoint_label_to_std {
        key: "left_knee"
        value: 0.89
      }
      keypoint_label_to_std {
        key: "left_shoulder"
        value: 0.79
      }
      keypoint_label_to_std {
        key: "left_wrist"
        value: 0.62
      }
      keypoint_label_to_std {
        key: "nose"
        value: 0.26
      }
      keypoint_label_to_std {
        key: "right_ankle"
        value: 0.89
      }
      keypoint_label_to_std {
        key: "right_ear"
        value: 0.35
      }
      keypoint_label_to_std {
        key: "right_elbow"
        value: 0.72
      }
      keypoint_label_to_std {
        key: "right_eye"
        value: 0.25
      }
      keypoint_label_to_std {
        key: "right_hip"
        value: 1.07
      }
      keypoint_label_to_std {
        key: "right_knee"
        value: 0.89
      }
      keypoint_label_to_std {
        key: "right_shoulder"
        value: 0.79
      }
      keypoint_label_to_std {
        key: "right_wrist"
        value: 0.62
      }
      keypoint_regression_loss_weight: 0.1
      keypoint_heatmap_loss_weight: 1.0
      keypoint_offset_loss_weight: 1.0
      offset_peak_radius: 3
      per_keypoint_offset: true
    }
  }
}

train_config: {

  batch_size: 8 # higher volume requires more memory, can change this
  num_steps: 250000

  data_augmentation_options {
    random_horizontal_flip {
      keypoint_flip_permutation: 0
      keypoint_flip_permutation: 2
      keypoint_flip_permutation: 1
      keypoint_flip_permutation: 4
      keypoint_flip_permutation: 3
      keypoint_flip_permutation: 6
      keypoint_flip_permutation: 5
      keypoint_flip_permutation: 8
      keypoint_flip_permutation: 7
      keypoint_flip_permutation: 10
      keypoint_flip_permutation: 9
      keypoint_flip_permutation: 12
      keypoint_flip_permutation: 11
      keypoint_flip_permutation: 14
      keypoint_flip_permutation: 13
      keypoint_flip_permutation: 16
      keypoint_flip_permutation: 15
    }
  }

  data_augmentation_options {
    random_crop_image {
      min_aspect_ratio: 0.5
      max_aspect_ratio: 1.7
      random_coef: 0.25
    }
  }


  data_augmentation_options {
    random_adjust_hue {
    }
  }

  data_augmentation_options {
    random_adjust_contrast {
    }
  }

  data_augmentation_options {
    random_adjust_saturation {
    }
  }

  data_augmentation_options {
    random_adjust_brightness {
    }
  }

  data_augmentation_options {
    random_absolute_pad_image {
       max_height_padding: 200
       max_width_padding: 200
       pad_color: [0, 0, 0]
    }
  }

  optimizer {
    adam_optimizer: {
      epsilon: 1e-7  # Match tf.keras.optimizers.Adam's default.
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 1e-3
          total_steps: 250000
          warmup_learning_rate: 2.5e-4
          warmup_steps: 5000
        }
      }
    }
    use_moving_average: false
  }
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false

  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "pre-trained-models/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}

train_input_reader: {
  label_map_path: "annotations/label_map_cans_in_fridge.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
  num_keypoints: 17
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
  num_visualizations: 10
  max_num_boxes_to_visualize: 20
  min_score_threshold: 0.2
  batch_size: 1;
  parameterized_metric {
    coco_keypoint_metrics {
      class_label: "person"
      keypoint_label_to_sigmas {
        key: "nose"
        value: 0.026
      }
      keypoint_label_to_sigmas {
        key: "left_eye"
        value: 0.025
      }
      keypoint_label_to_sigmas {
        key: "right_eye"
        value: 0.025
      }
      keypoint_label_to_sigmas {
        key: "left_ear"
        value: 0.035
      }
      keypoint_label_to_sigmas {
        key: "right_ear"
        value: 0.035
      }
      keypoint_label_to_sigmas {
        key: "left_shoulder"
        value: 0.079
      }
      keypoint_label_to_sigmas {
        key: "right_shoulder"
        value: 0.079
      }
      keypoint_label_to_sigmas {
        key: "left_elbow"
        value: 0.072
      }
      keypoint_label_to_sigmas {
        key: "right_elbow"
        value: 0.072
      }
      keypoint_label_to_sigmas {
        key: "left_wrist"
        value: 0.062
      }
      keypoint_label_to_sigmas {
        key: "right_wrist"
        value: 0.062
      }
      keypoint_label_to_sigmas {
        key: "left_hip"
        value: 0.107
      }
      keypoint_label_to_sigmas {
        key: "right_hip"
        value: 0.107
      }
      keypoint_label_to_sigmas {
        key: "left_knee"
        value: 0.087
      }
      keypoint_label_to_sigmas {
        key: "right_knee"
        value: 0.087
      }
      keypoint_label_to_sigmas {
        key: "left_ankle"
        value: 0.089
      }
      keypoint_label_to_sigmas {
        key: "right_ankle"
        value: 0.089
      }
    }
  }
  # Provide the edges to connect the keypoints. The setting is suitable for
  # COCO's 17 human pose keypoints.
  keypoint_edge {  # nose-left eye
    start: 0
    end: 1
  }
  keypoint_edge {  # nose-right eye
    start: 0
    end: 2
  }
  keypoint_edge {  # left eye-left ear
    start: 1
    end: 3
  }
  keypoint_edge {  # right eye-right ear
    start: 2
    end: 4
  }
  keypoint_edge {  # nose-left shoulder
    start: 0
    end: 5
  }
  keypoint_edge {  # nose-right shoulder
    start: 0
    end: 6
  }
  keypoint_edge {  # left shoulder-left elbow
    start: 5
    end: 7
  }
  keypoint_edge {  # left elbow-left wrist
    start: 7
    end: 9
  }
  keypoint_edge {  # right shoulder-right elbow
    start: 6
    end: 8
  }
  keypoint_edge {  # right elbow-right wrist
    start: 8
    end: 10
  }
  keypoint_edge {  # left shoulder-right shoulder
    start: 5
    end: 6
  }
  keypoint_edge {  # left shoulder-left hip
    start: 5
    end: 11
  }
  keypoint_edge {  # right shoulder-right hip
    start: 6
    end: 12
  }
  keypoint_edge {  # left hip-right hip
    start: 11
    end: 12
  }
  keypoint_edge {  # left hip-left knee
    start: 11
    end: 13
  }
  keypoint_edge {  # left knee-left ankle
    start: 13
    end: 15
  }
  keypoint_edge {  # right hip-right knee
    start: 12
    end: 14
  }
  keypoint_edge {  # right knee-right ankle
    start: 14
    end: 16
  }
}

eval_input_reader: {
  label_map_path: "annotations/label_map_cans_in_fridge.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
  num_keypoints: 17
}

I have checked all the directory paths and they are correct. The problem is this line:

keypoint_class_name: "item"

I took the value "item" from my labelimg .pbtxt file. It looks like this:

item {
    id: 1
    name: 'ColaCan'
}

item {
    id: 2
    name: 'FantaCan'
}

item {
    id: 3
    name: 'SpriteLemonCan'
}

item {
    id: 4
    name: 'Upperside'
}


item {
    id: 5
    name: 'VanishingLine'
}

Can someone tell me what I am doing wrong?


1 Answer

Stack Overflow user

Answered on 2021-05-25 21:17:59

You have chosen a model with keypoint detection, so the pipeline.config looks for two label maps.

The first is the label map for the objects to be classified via bounding boxes; the second is the keypoint label map.

The models/research/object_detection/data folder of the Object Detection API provides examples of such keypoint label maps. keypoint_class_name should refer to a class label in the keypoint label map, e.g. 'Person'.

This should also map to the keypoint labels in your training set.
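For reference, the keypoint-style label maps shipped with the API (e.g. face_person_with_keypoints_label_map.pbtxt in that data folder) combine a class entry with its keypoints inside one item block, roughly like this (class and keypoint names here are illustrative, not taken from the question):

```protobuf
item {
  id: 1
  name: "person"
  keypoints {
    id: 0
    label: "nose"
  }
  keypoints {
    id: 1
    label: "left_eye"
  }
  keypoints {
    id: 2
    label: "right_eye"
  }
}
```

With a label map like this, keypoint_class_name: "person" would resolve. The asker's label map has no keypoints entries and no class named "item", so the lookup fails.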

Votes: 0
The original content of this page is from Stack Overflow. Original link:

https://stackoverflow.com/questions/67586689
