Posted: 2015-09-10 12:25:23
Problem description:
I am trying to load a file from HDFS with a Pig script:
data = LOAD '/user/Z013W7X/typeahead/time_decayed_clickdata.tsv' using PigStorage('\t') as (keyword:chararray, search_count:double, clicks:double, cartadds:double);
The path above is an HDFS path. When I run the same statement from the Pig grunt shell, it executes without any problem, but the same code run from a script reports the following failure:
Input(s):
Failed to read data from "/user/Z013W7X/typeahead/time_decayed_clickdata.tsv"
This is the shell script I use to invoke the Pig script:
jar_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/libs/takeygen-0.0.1-SNAPSHOT-jar-with-dependencies.jar
scripts_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/pig_scripts/daily_running_scripts
dataset_path=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead
data_files=/user/Z013W7X/typeahead/data_files.zip#data
ngrams_gen_script=$scripts_path/generate_ngrams.pig
time_decayed_clickdata_file=$dataset_path/time_decayed_clickdata.tsv
all_suggestions_file=$results_path/all_suggestions.tsv
top_suggestions_file=$results_path/top_suggestions.tsv
pig -f $ngrams_gen_script -param "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file" -param "OUTPUT_ALL_SUGGESTIONS_FILE=$all_suggestions_file" -param "OUTPUT_TOP_SUGGESTIONS_FILE=$top_suggestions_file" -param "REGISTER=$jar_path" -param "INPUT_DATA_ARCHIVE=$data_files"
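One way to narrow this down (a debugging sketch, not part of the original post) is to echo every -param value before invoking pig. Note that $results_path is never assigned in the script above, so it expands to an empty string; the variable values below are copied from that script.

```shell
# Debugging sketch: print each parameter that will be passed to pig, so an
# unset shell variable shows up as an empty/truncated value here instead of
# failing obscurely inside the Pig job.
dataset_path=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead
time_decayed_clickdata_file=$dataset_path/time_decayed_clickdata.tsv
# results_path is unset in the original script, so this collapses to "/all_suggestions.tsv"
all_suggestions_file=$results_path/all_suggestions.tsv
echo "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file"
echo "OUTPUT_ALL_SUGGESTIONS_FILE=$all_suggestions_file"
```

If any echoed value looks wrong, fix the shell variable before debugging the Pig script itself.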
The Pig script is as follows:
SET mapred.create.symlink yes
SET mapred.cache.archives $INPUT_DATA_ARCHIVE
register $REGISTER
click_data = LOAD '$INPUT_TIME_DECAYED_CLICKDATA_FILE' using PigStorage('\t') as (keyword:chararray, search_count:double, clicks:double, cartadds:double);
ordered_click_data = order click_data by search_count desc;
sample_data = LIMIT ordered_click_data 3000000;
mclick_data = foreach sample_data generate keyword, CEIL(search_count) as search_count, CEIL(clicks) as clicks, CEIL(cartadds) as cartadds;
fclick_data = filter mclick_data by (keyword is not null and search_count is not null and keyword != 'NULL' );
ngram_data = foreach fclick_data generate flatten(com.tgt.search.typeahead.takeygen.udf.NGramScore(keyword, search_count, clicks, cartadds))
as (stemmedKeyword:chararray, keyword:chararray, dscore:double, isUserQuery:int, contrib:double, keyscore:chararray);
grouped_data = group ngram_data by stemmedKeyword;
agg_data = foreach grouped_data generate group, flatten(com.tgt.search.typeahead.takeygen.udf.StemmedKeyword(ngram_data.keyscore)) as keyword,
SUM(ngram_data.dscore) as ascore, SUM(ngram_data.isUserQuery) as isUserQuery, SUM(ngram_data.contrib) as contrib;
filter_queries = filter agg_data by isUserQuery > 0;
all_suggestions = foreach filter_queries generate keyword, ascore;
ordered_suggestions = order all_suggestions by ascore desc;
top_suggestions = limit ordered_suggestions 200000;
rmf /tmp/all_suggestions
rmf $OUTPUT_ALL_SUGGESTIONS_FILE
rmf /tmp/top_suggestions
rmf $OUTPUT_TOP_SUGGESTIONS_FILE
store ordered_suggestions into '/tmp/all_suggestions' using PigStorage('\t','-schema');
store top_suggestions into '/tmp/top_suggestions' using PigStorage('\t','-schema');
cp /tmp/all_suggestions/part-r-00000 $OUTPUT_ALL_SUGGESTIONS_FILE
cp /tmp/top_suggestions/part-r-00000 $OUTPUT_TOP_SUGGESTIONS_FILE
Comments:
-
How are you running your script?
-
I am running it from a shell script.
-
Make sure you are not running the Pig script in local mode.
-
No, that's not it... it is having some problem reading the input file.
-
Can you try replacing "data_files=/user/Z013W7X/typeahead/data_files.zip#data" with "data_files=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead/data_files.zip#data"?
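The last comment's suggestion amounts to a one-line change in the shell script: build the distributed-cache archive path from the same NameNode URI already used for dataset_path, instead of a bare HDFS path. A sketch of that change (the commenter's idea, not a confirmed fix):

```shell
# Fully qualify the mapred.cache.archives entry with the NameNode URI,
# reusing the host/port already present in dataset_path above.
namenode=hdfs://d-3zkyk02.target.com:8020
data_files=$namenode/user/Z013W7X/typeahead/data_files.zip#data
echo "$data_files"
```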
Tags: hadoop apache-pig