  • From the column python3

    How HP 3PAR calculates usable capacity

    450  2 0:2:0  FC 10 degraded  418816  353280 ----- 1:1:1*
    450  3 0:3:0  FC 10 degraded
    450  5 0:5:0  FC 10 degraded  418816  354304 ----- 1:1:1*
    450 12 1:0:0  FC 10 degraded
    450 14 1:2:0  FC 10 degraded  418816  354304 ----- 1:0:1*
    450 15 1:3:0  FC 10 degraded
    450 17 1:5:0  FC 10 degraded  418816  355328 ----- 1:0:1*
    450 24 2:0:0  FC 10 degraded
    450 26 2:2:0  FC 10 degraded  418816  354304 ----- 1:2:1*
    450 27 2:3:0  FC 10 degraded

    Published on 2020-01-08
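
    Figures like those rows can be totted up mechanically. A minimal Python sketch, assuming (the excerpt does not say so explicitly) that the two large numbers after the state are total and free space per disk in MiB:

    ```python
    import re

    # Hypothetical showpd-style rows as in the excerpt above.
    rows = """\
    450  2 0:2:0 FC 10 degraded 418816 353280 ----- 1:1:1*
    450  5 0:5:0 FC 10 degraded 418816 354304 ----- 1:1:1*
    """

    total = free = 0
    for row in rows.splitlines():
        m = re.search(r"degraded\s+(\d+)\s+(\d+)", row)
        if m:
            total += int(m.group(1))
            free += int(m.group(2))

    print(f"total {total} MiB, free {free} MiB, used {total - free} MiB")
    ```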
  • From the column 云原生实验室

    Ceph Troubleshooting Notes | A 10,000-word Summary of Experience

    data redundancy: 177597/532791 objects degraded (33.333%), 212 pgs degraded, 212 pgs undersized
    objects degraded (33.333%), 212 pgs degraded, 212 pgs undersized
    application not enabled on 3 pool(s)
    mon master003 is low on available space
    PG_DEGRADED Degraded data redundancy: 177615/532845 objects degraded (33.333%), 212 pgs degraded, 212 pgs undersized
    pg 1.15 is active+undersized+degraded
    data redundancy: 122006/1192166 objects degraded (10.234%), 102 pgs degraded, 116 pgs undersized

    Published on 2021-05-11
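
    The percentages in these health messages are simply degraded object copies divided by total copies. Checking the first figure from the excerpt shows it is exactly one replica in three missing:

    ```python
    # 177597/532791 from the excerpt: degraded copies over total object copies.
    degraded, total = 177597, 532791
    print(f"{degraded / total:.3%}")  # -> 33.333%
    ```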
  • From summerking's column

    Extracting log parameters with Python to generate charts

    log_channel(cluster) log [INF] : pgmap v31413: 3578 pgs: 1795 active+clean, 1518 active+recovery_wait+degraded, 265 active+recovering+degraded; 13709 GB data, 37462 GB used, 216 TB / 253 TB avail; 25963 B/s rd, 101 MB/s wr, 40 op/s; 3635096/10530407 objects degraded (34.520%); 231 MB/s, 57 objects/s recovering
    ..., 265 active+recovering+degraded; 13710 GB data, 37465 GB used, 216 TB / 253 TB avail; 202 MB/s wr, 66 op/s; 3634943/10530710 objects degraded (34.518%); 491 MB/s, 122 objects/s recovering
    2021-04-22 14:

    Edited on 2022-09-16
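
    In the spirit of the article (extracting parameters from logs for charting), a minimal sketch that pulls the degraded percentage out of a pgmap line with a regular expression; the sample line is abridged from the excerpt:

    ```python
    import re

    line = ("pgmap v31413: 3578 pgs: 1795 active+clean; "
            "3635096/10530407 objects degraded (34.520%); "
            "231 MB/s, 57 objects/s recovering")

    m = re.search(r"objects degraded \(([\d.]+)%\)", line)
    if m:
        print(float(m.group(1)))  # 34.52
    ```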
  • From the column 磨磨谈

    Estimating Ceph cluster recovery time

    json 2>/dev/null')
    json_str = json.loads(recover_time)
    if json_str["pgmap"].has_key('degraded_objects') == True:
        degraded_objects = json_str["pgmap"]["degraded_objects"]
    if json_str ...
    / recovering_objects_per_sec
    print "recovery objects: %s" % (degraded_objects)
    recovering_objects_per_sec)
    print conversecs(resec)
    else:
        resec = degraded_objects / 1
        print "recovery objects: %s" % (degraded_objects)
        print "recovery

    Published on 2018-08-06
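
    A Python 3 re-sketch of the excerpt's idea, not the author's original script: read `ceph status` as JSON and divide degraded objects by the current recovery rate. The key names follow the excerpt; `recovering_objects_per_sec` only appears while recovery is actually running:

    ```python
    import json
    import subprocess

    # Requires a reachable Ceph cluster.
    status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
    pgmap = status["pgmap"]

    degraded = pgmap.get("degraded_objects", 0)
    rate = pgmap.get("recovering_objects_per_sec", 0)

    print("recovery objects: %s" % degraded)
    if rate:
        print("estimated recovery time: %.0f s" % (degraded / rate))
    ```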
  • From the column 分布式存储

    Distributed storage Ceph: PG states explained in detail

    degraded (33.333%), 20 pgs unclean, 20 pgs degraded
    pg 1.0 is active+undersized+degraded, acting ...
    +peered, last acting [2]
    PG_DEGRADED Degraded data redundancy: 26/39 objects degraded (66.667%), 20 pgs ...
    +degraded+peered, acting [2]
    c. ...), 17 pgs unclean, 17 pgs degraded
    PG_DEGRADED Degraded data redundancy: 183/57960 objects degraded (...
    ..., 1 pg unclean, 1 pg degraded
    PG_DEGRADED Degraded data redundancy: 6/57927 objects degraded (0.010%)

    Published on 2020-07-20
  • From the column 磨磨谈

    Controlling the speed of Ceph recovery

    awk '{print $1,$2,$4,$10,$15,$16,$17,$18}'
    dumped all in format plain
    0.1d 636 1272 active+recovering+degraded [5,3] 5 [5,3] 5
    0.14 618 1236 active+recovering+degraded [1,0] 1 [1,0] 1
    0.15 682 1364 active+recovering+degraded [0,5] 0 [0,5] 0
    0.35 661 1322 active+recovering+degraded [2,1] 2 [2,1] 2
    Dynamically monitoring PG migration: watch -n ...
    [0,5] 0 [0,5] 0
    0.24 674 active+recovering+degraded [5,2] 5 [5,2] 5
    0.35 661 active+recovering+degraded [2,1] 2 [2,1] 2
    0.37 654 active+recovering+degraded [1,0] 1 [1,0] 1
    You can see that in this environment, each OSD is essentially taking writes for one PG, plus one PG ...

    Published on 2018-08-06
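
    The awk one-liner above selects columns from `ceph pg dump`. A rough Python equivalent (column positions vary between Ceph releases, so treat this as a sketch):

    ```python
    import subprocess

    # Print pgid plus the trailing up/acting columns for recovering PGs.
    out = subprocess.check_output(["ceph", "pg", "dump", "pgs_brief"], text=True)
    for line in out.splitlines():
        if "recovering" in line:
            cols = line.split()
            print(cols[0], *cols[-4:])
    ```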
  • From the column 孤鸿

    EMC Isilon OneFS 8.x command cheat sheet

    The value in question is "Run Jobs When Degraded":
    isi job status -v
    The next command will either put the cluster in or take it out of degraded mode depending on the boolean value:
    isi_gconfig -t job-config core.run_degraded=true
    You will need to resume the SnapshotDelete job, which is likely system-paused in the background, with:
    isi job jobs resume SnapshotDelete
    Remember to revert the degraded mode setting back to its default with:
    isi_gconfig -t job-config -R core.run_degraded
    Checking SyncIQ Job Status ...

    Edited on 2022-10-04
  • From the column 追宇星空

    Backplane Ethernet 48: 200GBASE-KR4 (Part 2)

    ...16 chip-to-module (Annex 120C) -- 400GAUI-8 chip-to-chip (Annex 120D) -- 400GAUI-8 chip-to-module (Annex 120E). FEC degradation: DTE XS tx_am_sf<2:0> = {FEC_degraded_SER + rx_local_degraded, 0, 0}; PHY XS tx_am_sf<2:0> = {PCS:rx_rm_degraded, PCS:FEC_degraded_SER + PCS:rx_local_degraded, 0}, where PCS:rx_rm_degraded, PCS:FEC_degraded_SER and PCS:rx_local_degraded are the rx_rm_degraded, FEC_degraded_SER and rx_local_degraded variables from the adjacent PCS.

    Edited on 2024-12-24
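
    Reading the "+" in those expressions as a logical OR of status flags (my interpretation, not quoted spec text), the DTE XS case packs into the 3-bit field like this:

    ```python
    # Hypothetical helper: tx_am_sf<2:0> = {FEC_degraded_SER + rx_local_degraded, 0, 0},
    # taking "{a, b, c}" to place a in bit 2 and c in bit 0.
    def dte_xs_tx_am_sf(fec_degraded_ser: bool, rx_local_degraded: bool) -> int:
        bit2 = int(fec_degraded_ser or rx_local_degraded)
        return bit2 << 2

    print(bin(dte_xs_tx_am_sf(True, False)))  # 0b100
    ```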
  • From the column Ceph对象存储方案

    Repairing abnormal PG states caused by OSD class configuration

    ... -s
    cluster:
        id: 21cc0dcd-06f3-4d5d-82c2-dbd411ef0ed9
        health: HEALTH_WARN
                Degraded ...
    client: 17.0KiB/s rd, 17op/s rd, 0op/s wr
    [root@demohost cephuser]# ceph health detail
    HEALTH_WARN Degraded data redundancy: 1 pg undersized
    PG_DEGRADED Degraded data redundancy: 1 pg undersized
        pg 6.9c is ...
    data redundancy: 10/3897 objects degraded (0.257%), 8 pgs degraded
    services:
        mon: 3 daemons, ...
    1.30k objects, 1.68GiB
    usage: 103GiB used, 415TiB / 415TiB avail
    pgs: 10/3897 objects degraded

    Published on 2018-10-25
  • From the column 大内老A

    ASP.NET Core 6 Framework Revealed, demo [42]: checking application health

    The demo program below implements the health check in an inline Check method, which randomly returns one of three health states (Healthy, Unhealthy and Degraded). For both Healthy and Degraded the response code is "200 OK", because in either case the application or service is considered available; the difference is merely "fully available" versus "partially available". Ordered by severity, the three states rank Unhealthy > Degraded > Healthy, and the most severe state in the combination becomes the application's overall health state. By this logic, if the overall state is Healthy, all three services must be Healthy; if the overall state is Degraded, at least one service is Degraded and none ... We set corresponding descriptive text (Normal, Degraded and Unavailable) on the HealthCheckResult object that the Check method returns to represent the check result. (A small sketch of this aggregation rule follows below.)

    Edited on 2023-07-10
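
    The "most severe status wins" rule described above can be modeled in a few lines. This is a Python sketch of the logic, not the ASP.NET Core API; the enum values mirror the HealthStatus source quoted at the end of this page:

    ```python
    from enum import IntEnum

    class HealthStatus(IntEnum):
        Unhealthy = 0
        Degraded = 1
        Healthy = 2

    def overall(statuses):
        # Lower value = more severe, so min() yields the overall status.
        return min(statuses)

    print(overall([HealthStatus.Healthy, HealthStatus.Degraded, HealthStatus.Healthy]).name)
    # -> Degraded
    ```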
  • From the column sktj

    Ceph distributed storage learning guide, hands-on

    degraded: once an OSD is in the down state, Ceph marks all PGs assigned to that OSD as degraded. After the OSD comes back up, it peers again so that all degraded PGs become clean. If the OSD stays down for more than 300 s, its state becomes out, and Ceph then recovers all degraded PGs from replicas to maintain the replication count. There is one more situation that can make a PG degraded: when one or more objects within a PG become unavailable. Ceph assumes the object should exist in the PG, but it is in fact not accessible; in that case Ceph marks the PG degraded and tries to recover it from its replicas. recovering: after an OSD has been down, the contents of its PGs fall behind those placed on other OSDs ... (A small monitoring sketch follows below.)

    Edited on 2022-05-14
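
    To watch the degraded-to-clean transition the excerpt describes, one hedged option is to count PG states from `ceph status` JSON (as an aside, the down-to-out timer mentioned as 300 s corresponds to Ceph's mon_osd_down_out_interval option):

    ```python
    import json
    import subprocess

    # Count PGs whose state string contains "degraded"; requires a live cluster.
    status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
    for entry in status["pgmap"].get("pgs_by_state", []):
        if "degraded" in entry["state_name"]:
            print(entry["state_name"], entry["count"])
    ```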
  • From the columns GEE数据专栏, GEE学习专栏, GEE错误集 and others

    Google Earth Engine: NOAA/CDR/PATMOSX/V53 provides high-quality climate data records (CDR) plus multiple cloud properties from Advanced Very High Resolution Radiometer (AVHRR) brightness temperatures and reflectances

    ... Yes
    Bit 2: Valid ec retrieval (0: No, 1: Yes)
    Bit 3: Valid beta retrieval (0: No, 1: Yes)
    Bit 4: Degraded Tc retrieval (0: No, 1: Yes)
    Bit 5: Degraded ec retrieval (0: No, 1: Yes)
    Bit 6: Degraded beta retrieval ...
    ... COD retrieval (0: No, 1: Yes)
    Bit 4: Degraded REF retrieval (0: No, 1: Yes)
    Bit 5: Convergency Tc retrieval (0: No, 1: Yes)
    Bit 5: Degraded ec retrieval (0: No, 1: Yes)
    Bit 6: Degraded COD retrieval (0: No, 1: Yes)
    Bit 4: Degraded REF retrieval (0: No, 1: Yes)
    Bit 5: Convergency ...

    Edited on 2024-02-02
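
    Flags like these are packed into a single QA integer; a small sketch of testing individual bits, with positions taken from the excerpt:

    ```python
    def bit_set(value: int, position: int) -> bool:
        return (value >> position) & 1 == 1

    qa = 0b0110000  # hypothetical QA word with bits 4 and 5 set
    print("Degraded Tc retrieval:", bit_set(qa, 4))  # True
    print("Degraded ec retrieval:", bit_set(qa, 5))  # True
    ```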
  • From the Tencent Cloud TStack column

    《大话 Ceph》: The Little Things About PGs

    6475...kill 6475...done
    [root@ceph-2 ~]# ceph pg dump_stuck | egrep ^0.44
    0.44 active+undersized+degraded [0,7] 0 [0,7] 0
    Here we go to the ceph-2 node and manually stop osd.4, then check PG 0.44: its state is now active+undersized+degraded. When an OSD hosting a PG dies, the PG enters the undersized+degraded state, and the [0,7] that follows means two replicas of 0.44 still survive, on osd.0 and osd.7.
    ...kill 5986...kill 5986...done
    [root@ceph-2 ~]# ceph pg dump_stuck | egrep ^0.44
    0.44 undersized+degraded
    ... pool set rbd min_size 1
    [root@ceph-2 ~]# ceph pg dump_stuck | egrep ^0.44
    0.44 active+undersized+degraded

    Published on 2017-11-06
  • From the column linux commands

    Ceph: health HEALTH_ERR errors

    health HEALTH_ERR
    38 pgs are stuck inactive for more than 300 seconds
    64 pgs degraded
    ... data, 0 objects
    69636 kB used, 20389 MB / 20457 MB avail
    38 undersized+degraded+peered
    26 active+undersized+degraded
    So I checked the owner and group of the disks used for the cluster on the machines in it: [root@node1 ~

    Published on 2021-08-12
  • From the column 全栈程序员必看

    Commands for viewing server hardware configuration (server hardware parameters)

    -------------------------------------------
    Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
    Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|Consist

    Edited on 2022-09-20
  • From the column 分布式存储

    Ceph distributed storage: common PG fault handling

    Common PG fault handling
    3.1 A PG cannot reach the CLEAN state: after creating a new cluster, the PGs stay in the active, active+remapped or active+degraded state. If you want to operate your cluster in the active+degraded state (2 replicas), you can set osd pool default min size to 2, so that you can work against PGs in active+degraded ...
    3.2 Stuck PGs: after a failure occurs, PGs enter the "degraded" or "peering" state. This happens from time to time, and usually these states mean the normal failure-recovery process is under way. For example, ceph health may show:
    ceph health detail
    HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down
    ...
    pg 0.5 is down

    Published on 2020-07-20
  • 《Ceph Cluster Data Synchronization Anomalies: A Root-Cause Breakthrough and Recovery in Practice》

    As a unified storage solution, Ceph provides block and object storage for core systems such as e-government and public services, yet after a routine cluster expansion it hit a serious data synchronization anomaly: PGs (Placement Groups) in some pools stayed "degraded", and the number of degraded PGs grew from 0 to 42, 18% of the total. Worse, when "ceph pg repair" was run to fix them, some PGs recovered briefly but re-entered the degraded state 10 minutes later; and "rados df" showed used capacity that differed from the actual business data volume by roughly ... The new node was then removed from the cluster and all MON and OSD components were restarted, but afterwards some PGs on the original nodes also began to go degraded, showing that the fault had spread and was not caused purely by the new node. To address the unstable repairs, "ceph pg scrub" was first run against all degraded PGs to verify the data and rule out corruption, and then "ceph pg repair" was executed in batches, 10 PGs per batch with a 5-minute interval, ... (scripted in the sketch below).

    Edited on 2025-09-05
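
    The batched repair routine described there (10 PGs per batch, 5 minutes apart) is easy to script; a minimal sketch assuming a pre-collected list of degraded PG ids:

    ```python
    import subprocess
    import time

    degraded_pgs = ["6.1a", "6.2b", "6.3c"]  # hypothetical PG ids

    BATCH, PAUSE = 10, 300  # 10 PGs per batch, 5-minute interval
    for i in range(0, len(degraded_pgs), BATCH):
        for pgid in degraded_pgs[i:i + BATCH]:
            subprocess.run(["ceph", "pg", "repair", pgid], check=False)
        if i + BATCH < len(degraded_pgs):
            time.sleep(PAUSE)
    ```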
  • From the column 任浩强的运维生涯

    A quick shell-script approach to bulk-collecting hardware info from Linux servers

    Device |Normal|Damage|Rebuild|Normal
    Virtual Drive |Optimal|Degraded|Degraded|Optimal
    Physical Drive ... PhysDrv [1:5] -a0
    Disk state changes during the pull-and-reinsert process:
    Device |Normal|Damage|Rebuild|Normal
    Virtual Drive |Optimal|Degraded|Degraded|Optimal
    Physical Drive |Online|Failed -> Unconfigured|Rebuild|Online
    sudo find / -name MegaCli64

    Published on 2019-01-30
  • From the column 木头编程 - moTzxx

    A first dip into Composer

    zlib_decode(): data error. A similar scenario: Failed to decode response: zlib_decode(): data error / Retrying with degraded mode, check https://getcomposer.org/doc/articles/troubleshooting.md#degraded-mode for more info. Solution: ...

    Published on 2018-09-11
  • From the column AI.NET极客圈

    .NET Core 3.0: understanding HealthCheck from the source code (Part 1)

    status: HealthStatus.Healthy, description, exception: null, data); }
    public static HealthCheckResult Degraded(... IReadOnlyDictionary<string, object> data = null)
    { return new HealthCheckResult(status: HealthStatus.Degraded, ...
    The HealthStatus source is as follows:
    public enum HealthStatus
    {
        Unhealthy = 0,
        Degraded = 1,
        Healthy = 2
    }

    Published on 2020-05-18