Title: Unable to create Hive table using Presto from a CSV file
Posted: 2019-10-30 22:34:08
Question:

I want to create a Hive table using Presto, with the data stored in a CSV file on S3.

I have already uploaded the file to S3, and I am sure that Presto is able to connect to the bucket.

Now, when I issue the CREATE TABLE command and then query the table, every value (row) comes back as NULL.

I tried to look into similar questions, but it turns out Presto is not that well covered on Stack Overflow.

A few lines from the file:

PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
1,0,3,"Braund, Mr. Owen Harris",male,22,1,0,A/5 21171,7.25,,S
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina",female,26,0,0,STON/O2. 3101282,7.925,,S
4,1,1,"Futrelle, Mrs. Jacques Heath (Lily May Peel)",female,35,1,0,113803,53.1,C123,S
5,0,3,"Allen, Mr. William Henry",male,35,0,0,373450,8.05,,S
6,0,3,"Moran, Mr. James",male,,0,0,330877,8.4583,,Q
7,0,1,"McCarthy, Mr. Timothy J",male,54,0,0,17463,51.8625,E46,S
8,0,3,"Palsson, Master. Gosta Leonard",male,2,3,1,349909,21.075,,S
9,1,3,"Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg)",female,27,0,2,347742,11.1333,,S
10,1,2,"Nasser, Mrs. Nicholas (Adele Achem)",female,14,1,0,237736,30.0708,,C
11,1,3,"Sandstrom, Miss. Marguerite Rut",female,4,1,1,PP 9549,16.7,G6,S
12,1,1,"Bonnell, Miss. Elizabeth",female,58,0,0,113783,26.55,C103,S
13,0,3,"Saundercock, Mr. William Henry",male,20,0,0,A/5. 2151,8.05,,S
14,0,3,"Andersson, Mr. Anders Johan",male,39,1,5,347082,31.275,,S
15,0,3,"Vestrom, Miss. Hulda Amanda Adolfina",female,14,0,0,350406,7.8542,,S
16,1,2,"Hewlett, Mrs. (Mary D Kingcome) ",female,55,0,0,248706,16,,S
17,0,3,"Rice, Master. Eugene",male,2,4,1,382652,29.125,,Q
18,1,2,"Williams, Mr. Charles Eugene",male,,0,0,244373,13,,S
19,0,3,"Vander Planke, Mrs. Julius (Emelia Maria Vandemoortele)",female,31,1,0,345763,18,,S
20,1,3,"Masselmani, Mrs. Fatima",female,,0,0,2649,7.225,,C

My CSV file is here; take train.csv from there. So my Presto command is:

create table testing_nan_4 (
    PassengerId integer,
    Survived integer,
    Pclass integer,
    Name varchar,
    Sex varchar,
    Age integer,
    SibSp integer,
    Parch integer,
    Ticket integer,
    Fare double,
    Cabin varchar,
    Embarked varchar
)
with (
    external_location = 's3://my_bucket/titanic_train/',
    format = 'textfile'
);

The result is:

 passengerid | survived | pclass | name | sex  | age  | sibsp | parch | ticket | fare | cabin | embarked
-------------+----------+--------+------+------+------+-------+-------+--------+------+-------+----------
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL
 NULL        | NULL     | NULL   | NULL | NULL | NULL | NULL  | NULL  | NULL   | NULL | NULL  | NULL

I was expecting the actual data instead.

Comments:

    Tags: sql csv amazon-s3 hive presto


    Solution 1:

    Starburst Presto currently supports the CSV Hive storage format; see: https://docs.starburstdata.com/latest/release/release-302-e.html?highlight=csv

    There is also work in progress to bring this to PrestoSQL; see: https://github.com/prestosql/presto/pull/920

    You can then use a table like this with the Presto Hive connector:

    CREATE TABLE hive.default.csv_table_with_custom_parameters (
        c_bigint varchar,
        c_varchar varchar)
    WITH (
        csv_escape = '',
        csv_quote = '',  
        csv_separator = U&'\0001', -- to pass unicode character
        external_location = 'hdfs://hadoop/datacsv_table_with_custom_parameters',
        format = 'CSV')
    

    In your case that would be:

    CREATE TABLE hive.default.csv_table_with_custom_parameters (
        -- the CSV storage format only supports varchar columns,
        -- so cast to numeric types when querying
        PassengerId varchar, Survived varchar, Pclass varchar, Name varchar, Sex varchar, Age varchar, SibSp varchar, Parch varchar, Ticket varchar, Fare varchar, Cabin varchar, Embarked varchar)
    WITH (
        csv_escape = '\',
        csv_quote = '"',
        csv_separator = ',',
        external_location = 's3://my_bucket/titanic_train/',
        format = 'CSV')
    

    Note that the csv_escape, csv_quote and csv_separator table properties only support single-character values.

    Also, "skip.header.line.count"="1" has no equivalent syntax for CSV tables in Presto yet, so I suggest removing the header from the data file.
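    If you go the header-removal route, a minimal Python sketch for dropping the header row before re-uploading the file (the function and file names here are placeholders, not part of either tool):

```python
def strip_header(src_path, dst_path):
    """Copy a CSV file, dropping its first (header) line."""
    with open(src_path, encoding="utf-8") as src, \
            open(dst_path, "w", encoding="utf-8") as dst:
        next(src)            # skip the header row
        dst.writelines(src)  # copy the remaining data rows unchanged
```

    The same thing can be done with `tail -n +2 train.csv` before uploading.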

    Comments:

    • The equivalent option for "skip.header.line.count"="1" is skip_header_line_count = 1
    Solution 2:

    Currently, with the textfile format, you have to provide a 0x1-delimited ('\u0001') file for it to be read correctly. The problem is that Presto does not support custom delimiters here.

    https://github.com/prestodb/presto/issues/10905
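    One workaround implied by that limitation is to rewrite the comma-separated file with the 0x1 delimiter before pointing the textfile table at it. A minimal Python sketch (function and file names are placeholders), assuming standard quoted CSV input:

```python
import csv

def csv_to_ctrl_a(src_path, dst_path):
    """Rewrite a comma-separated file as a Ctrl-A (0x01) delimited
    textfile, the delimiter Presto's 'textfile' format expects."""
    with open(src_path, newline="", encoding="utf-8") as src, \
            open(dst_path, "w", encoding="utf-8") as dst:
        # csv.reader unwraps quoted fields such as "Braund, Mr. Owen Harris"
        for row in csv.reader(src):
            dst.write("\x01".join(row) + "\n")
```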

    My suggestion is to create the table with Hive DDL, after which it can be read from Presto without trouble.

    Here is the Hive query:

    CREATE EXTERNAL TABLE mytable (
       PassengerId int, Survived int, Pclass int, Name string, Sex string, Age int, SibSp int, Parch int, Ticket string, Fare double, Cabin string, Embarked string
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    WITH SERDEPROPERTIES (
      'separatorChar' = ',',
      'quoteChar' = '\"',
      'escapeChar' = '\\'
    )
    STORED AS TEXTFILE
    LOCATION 's3://bucket-path/csv_data/'
    TBLPROPERTIES (
      "skip.header.line.count"="1")

    Comments:
