Inserting a large object in PostgreSQL using jackc/pgx returns "out of memory" (SQLSTATE 54000)

Stack Overflow user
Asked on 2022-08-08 11:56:02
1 answer · 233 views · score 1

I am using the jackc/pgx library to insert large objects into Postgres. It works fine when the large object is small. In one case, however, the large object is almost 1.8 GB, and the write fails with an "out of memory (SQLSTATE 54000)" error.

Here is the code snippet showing how I insert the blobs:

import (
    "fmt"
    "io/ioutil"

    "github.com/jackc/pgx/v4"
    "github.com/jackc/pgx/v4/pgxpool"
)

// Read bytes from the file to be imported as a large object
b, err := ioutil.ReadFile(pathToLargeObjectFile)
txWrite, err := dbPool.Begin(ctx)
loWrite := txWrite.LargeObjects()

fmt.Printf("Creating new blob with ID : %d\n", ID)
id, err := loWrite.Create(ctx, ID)
// Open the blob with the returned ID
obj, err := loWrite.Open(ctx, id, pgx.LargeObjectModeRead|pgx.LargeObjectModeWrite)
n, err := obj.Write(b)
fmt.Printf("Written %d bytes to blob %d\n", n, id)

The error occurs on this line:

n,err := obj.Write(b)

How do I prevent this error and import the large object successfully?

I read the post "Inserting a large object in Postgresql returns 53200 out of memory error", which writes the bytes in chunks.

Is the same approach possible with jackc/pgx?


1 Answer

Stack Overflow user

Accepted answer

Answered on 2022-09-01 07:16:44

The solution is the same: read the file in chunks and write each chunk to the large object.

One thing to note here is that when using obj.Write(b), the obj object keeps an internal pointer at the end of the previous write. So if we perform successive writes, each write appends to the blob.

This is how I solved it:

id := <id of the blob>

// Open the blob with ID
obj, err := loWrite.Open(ctx, id, pgx.LargeObjectModeRead|pgx.LargeObjectModeWrite)
if err != nil {
    fmt.Printf("Error opening blob. Error: %s", err.Error())
    return err
}

importFile := <string path to import file>
blobBackupFile, err := os.Open(importFile)
if err != nil {
    return err
}
defer blobBackupFile.Close()

fileInfo, err := blobBackupFile.Stat()
if err != nil {
    return err
}

// Reader for the blob backup file
reader := bufio.NewReader(blobBackupFile)
chunk := 0
totalBytesRead := 0

// Read the file in chunks and write each chunk to the blob
for {
    buf := make([]byte, bufferSize)    // initialize the buffer
    bytesRead, err := reader.Read(buf) // load a chunk into the buffer
    buf = buf[:bytesRead]              // slice it to the number of bytes actually read
    totalBytesRead += bytesRead

    if bytesRead == 0 {
        if err == io.EOF {
            fmt.Printf("Reached end of file %s", importFile)
            break
        }
        if err != nil {
            fmt.Printf("Error reading chunk %d from file %s. Error: %s", chunk, importFile, err.Error())
            return err
        }
    }

    loc, err := obj.Tell()
    if err != nil {
        fmt.Printf("Error getting the current pointer location %s", err.Error())
    }
    fmt.Printf("BlobID: %d. Pointer at %#v ", id, loc)
    fmt.Printf("Writing chunk %d of %d bytes at address %#v of the blob %d", chunk, bytesRead, loc, id)

    bytesWritten, err := obj.Write(buf)
    if err != nil {
        fmt.Printf("Error writing bytes to blob %s", err.Error())
        return err
    }
    fmt.Printf("Written %d bytes to blob %d.", bytesWritten, id)

    endLoc, err := obj.Tell()
    if err != nil {
        fmt.Printf("Error getting the pointer location after writing %s", err.Error())
    }
    fmt.Printf("Written chunk %d of %d bytes from address %#v to address %#v of the blob %d", chunk, bytesWritten, loc, endLoc, id)

    if int64(totalBytesRead) == fileInfo.Size() {
        fmt.Printf("Chunk %d was the last chunk, written at address %#v to address %#v of the blob %d", chunk, loc, endLoc, id)
        break
    }

    // next chunk
    chunk++
}

Close the blob and obj after the complete blob has been written.

Score 0
The original page content was provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/73277493
