I have set up a connection between PySpark and Redshift using the following code:
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker
import psycopg2
DATABASE = "d"
USER = "user1"
PASSWORD = "1234"
HOST = "sparkvalidation.crv9zfdiseqm.us-west-2.redshift.amazonaws.com"
PORT = "5439"
SCHEMA = "public"
connection_string = "redshift+psycopg2://%s:%s@%s:%s/%s" % (USER, PASSWORD, HOST, PORT, DATABASE)
engine = sa.create_engine(connection_string)
session = sessionmaker()
session.configure(bind=engine)
s = session()
SetPath = "SET search_path TO %s" % SCHEMA
s.execute(SetPath)

Now, how can I write a PySpark DataFrame to Redshift?
Posted on 2019-07-05 21:56:22
If you are using Databricks, you can write it like this:
# The Redshift data source expects a JDBC-style URL, not the SQLAlchemy
# connection string built above
jdbc_url = "jdbc:redshift://%s:%s/%s?user=%s&password=%s" % (HOST, PORT, DATABASE, USER, PASSWORD)

dataframe.write \
    .format("com.databricks.spark.redshift") \
    .option("url", jdbc_url) \
    .option("dbtable", "target") \
    .option("tempdir", "s3a://your_s3_tmp_bucket/tmp_data") \
    .mode("error") \
    .save()

Note that you need an S3 bucket: the connector stages the data there and then loads it into Redshift with a COPY, which is the usual way of bulk-loading Redshift.
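Because the load goes through S3, Spark itself also needs credentials for the staging bucket, and Redshift needs permission to COPY from it. A minimal sketch of one way to wire this up (the access keys and the role ARN below are placeholders, not values from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Give Spark credentials for the s3a:// tempdir (placeholder keys)
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

# Then, on the writer above, either let Redshift assume an IAM role for the COPY:
#     .option("aws_iam_role", "arn:aws:iam::123456789012:role/my-redshift-copy-role")
# or forward the same S3 keys to Redshift:
#     .option("forward_spark_s3_credentials", "true")

You will also need the connector itself on the classpath, e.g. by launching with something like --packages com.databricks:spark-redshift_2.11:2.0.1, plus an Amazon Redshift JDBC driver.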
https://stackoverflow.com/questions/51538961