According to the Spring documentation, the default values for readCapacity and writeCapacity are 1:

readCapacity - The read capacity of the DynamoDB table. See Provisioned Throughput. Default: 1
writeCapacity - The write capacity of the DynamoDB table. See Provisioned Throughput. Default: 1

From the code, however, I can see the default value is 10. Is this overridden somewhere in Spring?
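Whatever the effective default turns out to be, it can be pinned explicitly instead of relying on it. A minimal sketch using the same `spring.cloud.stream.kinesis.binder.locks` property path discussed in this question (the values here are illustrative):

```yaml
spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            readCapacity: 1    # set explicitly rather than relying on the default
            writeCapacity: 1
```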
EDIT
I have a lock table with a read and write capacity of 40.
I configured my binder like this:
spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            table: customLocks
            readCapacity: 5
            writeCapacity: 2
          checkpoint:
            table: customCheckPoints
            readCapacity: 5
            writeCapacity: 2
        bindings:
          inputone:
            consumer:
              listenerMode: batch
              idleBetweenPolls: 500
              recordsLimit: 50
          inputtwo:
            consumer:
              listenerMode: batch
              idleBetweenPolls: 500
              recordsLimit: 50
      bindings:
        inputone:
          group: my-group-1
          destination: stream-1
          content-type: application/json
        inputtwo:
          group: my-group-2
          destination: stream-2
          content-type: application/json

I have three containers running with this configuration.
I am seeing ProvisionedThroughputExceededException against the customLocks table.
I'm not sure whether the binder is trying to overload the DynamoDB lock table.
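As a rough sanity check on whether the lock traffic alone could exceed 5 RCU, here is a back-of-the-envelope sketch. The shard counts and the lock refresh period below are assumptions (the question does not state them), not measurements:

```python
# Hedged estimate (all numbers marked "assumption" are illustrative):
# how many GetItem reads per second might hit the customLocks table,
# compared with its provisioned readCapacity.

containers = 3         # from the question: three containers run this config
streams = 2            # stream-1 and stream-2
shards_per_stream = 2  # assumption: shard count is not given in the question
heartbeat_s = 1.0      # assumption: lock read/retry period in seconds

# Each container repeatedly reads every shard lock item it competes for.
reads_per_second = containers * streams * shards_per_stream / heartbeat_s

read_capacity = 5      # from the binder config above

print(f"estimated reads/s: {reads_per_second}, provisioned RCU: {read_capacity}")
```

With these assumed numbers, the steady-state read rate (12 reads/s) already exceeds the provisioned 5 RCU, which would surface exactly as a ProvisionedThroughputExceededException on the lock table.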
2019-05-05 07:49:52.216 WARN --- [-kinesis-shard-locks-1] ices.dynamodbv2.AmazonDynamoDBLockClient : Could not acquire lock because of a client side failure in talking to DDB
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: 94CURTLH858HM3RRELMSB6J817VV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1632)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:3452)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3428)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:1789)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:1764)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.readFromDynamoDB(AmazonDynamoDBLockClient.java:997)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.getLockFromDynamoDB(AmazonDynamoDBLockClient.java:743)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.acquireLock(AmazonDynamoDBLockClient.java:402)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.tryAcquireLock(AmazonDynamoDBLockClient.java:567)
at org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock.doLock(DynamoDbLockRegistry.java:504)
at org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock.tryLock(DynamoDbLockRegistry.java:478)
at org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock.tryLock(DynamoDbLockRegistry.java:452)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumerManager.lambda$run$0(KinesisMessageDrivenChannelAdapter.java:1198)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumerManager.dt_access$257(KinesisMessageDrivenChannelAdapter.java)
at java.util.Collection.removeIf(Collection.java:414)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumerManager.run(KinesisMessageDrivenChannelAdapter.java:1191)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

Posted on 2019-05-04 18:51:00
The bigger the capacity, the more you pay on your AWS account. Such a configuration can indeed be changed via application.properties:

spring.cloud.stream.kinesis.binder.locks.readCapacity = 10
spring.cloud.stream.kinesis.binder.locks.writeCapacity = 10

And that is exactly what is explained in the Kinesis binder documentation.
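The same override can also be expressed in application.yml form, matching the style of the binder configuration shown in the question (a sketch; the values are the ones from the properties above):

```yaml
spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            readCapacity: 10
            writeCapacity: 10
```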
https://stackoverflow.com/questions/55983945