[Posted]: 2019-10-07 21:12:06
[Question]:
I'm trying to use Flink to read data from Kafka, perform some processing, and write the results to a different Kafka topic, but I get the following error: `org.apache.flink.api.common.InvalidProgramException: The implementation of the MapFunction is not serializable. The object probably contains or references non serializable fields.`
I receive a message from Kafka, run some operations on it, and return a list of objects that I want to send to a different topic.
class Wrapper implements Serializable {
    @JsonProperty("viewBuilderRequests")
    private ArrayList<ViewBuilderRequest> viewBuilderRequests;

    public Wrapper() {}

    public Wrapper(ArrayList<ViewBuilderRequest> viewBuilderRequests) {
        this.viewBuilderRequests = viewBuilderRequests;
    }

    public List<ViewBuilderRequest> getViewBuilderRequests() {
        return viewBuilderRequests;
    }

    public void setViewBuilderRequests(ArrayList<ViewBuilderRequest> viewBuilderRequests) {
        this.viewBuilderRequests = viewBuilderRequests;
    }
}
public class ViewBuilderRequest implements Serializable {
    private CdmId cdmId;
    private ViewBuilderOperation operation;
    private List<ViewUserSystemIdentifier> viewUserSystemIdentifiers;

    public ViewBuilderRequest() {
    }

    public CdmId getCdmId() {
        return cdmId;
    }

    public void setCdmId(CdmId cdmId) {
        this.cdmId = cdmId;
    }

    public ViewBuilderOperation getOperation() {
        return operation;
    }

    public void setOperation(ViewBuilderOperation operation) {
        this.operation = operation;
    }

    public List<ViewUserSystemIdentifier> getViewUserSystemIdentifiers() {
        return viewUserSystemIdentifiers;
    }

    public void setViewUserSystemIdentifiers(List<ViewUserSystemIdentifier> viewUserSystemIdentifiers) {
        this.viewUserSystemIdentifiers = viewUserSystemIdentifiers;
    }

    public enum ViewBuilderOperation implements Serializable {
        Create, Update, Delete
    }
}
private MapFunction<String, Wrapper> parseAndSendToGraphProcessing = s -> {
    UserMatchingRequest userMatchingRequest = objectMapper.readValue(s, UserMatchingRequest.class);
    Wrapper wrapper = new Wrapper(janusGraphDataProcessing.handleMessage(userMatchingRequest));
    return wrapper;
};
The inner classes also implement Serializable.
This code throws the exception:
dataStream.map(parseAndSendToGraphProcessing)
    .addSink(new FlinkKafkaProducer<Wrapper>(kafkaConfiguration.getBootstrapServers(), "graphNotifications", new WrapperSchema()));
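For context on why the map step can fail like this: a lambda assigned to a `Serializable` functional interface is serialized together with everything it captures from the enclosing scope, so capturing a non-serializable object (such as an `ObjectMapper` or a graph client) makes the whole closure unserializable. Below is a hedged, JDK-only sketch of that mechanism; `SerializableFunction` and `NonSerializableHelper` are made-up stand-ins for Flink's `MapFunction` and the captured helpers, not real Flink classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

public class CaptureDemo {
    // Stand-in for Flink's MapFunction: a functional interface that is Serializable
    interface SerializableFunction<T, R> extends Function<T, R>, Serializable {}

    // Stand-in for a non-serializable helper (like ObjectMapper or a graph client)
    static class NonSerializableHelper {
        String process(String s) { return s.toUpperCase(); }
    }

    // Returns a lambda that captures a non-serializable object
    static SerializableFunction<String, String> capturingLambda() {
        NonSerializableHelper helper = new NonSerializableHelper();
        return s -> helper.process(s); // 'helper' becomes part of the closure's serialized state
    }

    // Returns a lambda that captures nothing from its enclosing scope
    static SerializableFunction<String, String> selfContainedLambda() {
        return s -> s.toUpperCase();
    }

    // Attempt Java serialization; false means a NotSerializableException was thrown
    static boolean isJavaSerializable(Object o) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isJavaSerializable(capturingLambda()));     // false
        System.out.println(isJavaSerializable(selfContainedLambda())); // true
    }
}
```

The same principle applies to the `parseAndSendToGraphProcessing` lambda above: it references `objectMapper` and `janusGraphDataProcessing` from the enclosing class, so those (or the enclosing instance) are pulled into the closure.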
I also have a serialization/deserialization schema for both objects:
public class WrapperSchema implements DeserializationSchema<Wrapper>, SerializationSchema<Wrapper> {
    // private final static ObjectMapper objectMapper = new ObjectMapper().configure(MapperFeature.ACCEPT_CASE_INSENSITIVE_PROPERTIES, true);
    static ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Wrapper deserialize(byte[] message) throws IOException {
        return objectMapper.readValue(message, Wrapper.class);
    }

    @Override
    public boolean isEndOfStream(Wrapper nextElement) {
        return false;
    }

    @Override
    public byte[] serialize(Wrapper element) {
        // return element.toString().getBytes();
        if (objectMapper == null) {
            objectMapper = new ObjectMapper();
            objectMapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
        }
        try {
            String json = objectMapper.writeValueAsString(element);
            return json.getBytes();
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
        return new byte[0];
    }

    @Override
    public TypeInformation<Wrapper> getProducedType() {
        return TypeInformation.of(Wrapper.class);
    }
}
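One detail worth noting about the schema: Java serialization skips `static` fields, which is why declaring the `ObjectMapper` as `static` keeps `WrapperSchema` serializable even though `ObjectMapper` itself is not required to be; an equivalent non-serializable instance field would break `writeObject`. A hedged, JDK-only sketch of that behavior (class names here are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StaticFieldDemo {
    // Analogous to WrapperSchema: the helper is static, so it is NOT part of the
    // serialized instance state (plain Object is not Serializable)
    static class SchemaWithStaticHelper implements Serializable {
        static Object helper = new Object();
        int id;
        SchemaWithStaticHelper(int id) { this.id = id; }
    }

    // Same shape, but the non-serializable helper is an instance field
    static class SchemaWithInstanceHelper implements Serializable {
        Object helper = new Object(); // not Serializable -> NotSerializableException
        int id;
        SchemaWithInstanceHelper(int id) { this.id = id; }
    }

    // Serialize then deserialize; throws if any reachable instance field is unserializable
    static <T extends Serializable> T roundTrip(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            T copy = (T) ois.readObject();
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(new SchemaWithStaticHelper(7)).id); // 7
        try {
            roundTrip(new SchemaWithInstanceHelper(7));
        } catch (java.io.NotSerializableException e) {
            System.out.println("instance helper breaks serialization");
        }
    }
}
```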
Tags: serialization apache-kafka apache-flink