This paper proposes the attention mechanism for machine translation.

 

Background: RNN-based machine translation

The basic idea is to first encode the source language x with an encoder and then decode into the target language y with a decoder. The encoder and decoder can be viewed as two RNNs, with encoder hidden states h and decoder hidden states s.

 

RNN encoder-decoder :

1) The input sentence is represented as a sequence of vectors $x = (x_1, \dots, x_{T_x})$; the vector $c$ is expressed as a function of the encoder hidden states, i.e. $c$ is the vector encoded from the input.

$$h_t = f(x_t, h_{t-1}), \qquad c = q(\{h_1, \dots, h_{T_x}\})$$
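As a concrete illustration, here is a minimal NumPy sketch of the encoder recurrence. It uses a plain tanh RNN cell in place of the paper's gated unit, takes $q$ to be the last hidden state, and all dimensions, weights, and inputs are random stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T_x, d_in, d_h = 5, 8, 16               # sentence length, input dim, hidden dim
x = rng.normal(size=(T_x, d_in))        # toy word vectors x_1 ... x_{T_x}
W = 0.1 * rng.normal(size=(d_h, d_in))  # input-to-hidden weights
U = 0.1 * rng.normal(size=(d_h, d_h))   # hidden-to-hidden weights

h = np.zeros(d_h)
hs = []
for t in range(T_x):                    # h_t = f(x_t, h_{t-1})
    h = np.tanh(W @ x[t] + U @ h)
    hs.append(h)
hs = np.stack(hs)                       # all encoder hidden states, (T_x, d_h)

c = hs[-1]                              # one common choice: q({h_1..h_{T_x}}) = h_{T_x}
```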

2) In the decoder stage, the next word is predicted from the previously generated words and the encoded context $c$:

$$p(y_i \mid \{y_1, \dots, y_{i-1}\}, c) = g(y_{i-1}, s_i, c) \tag{2}$$
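Continuing the toy setup above, one decoder step might look like the sketch below. Here $g$ is simplified to a single linear layer plus softmax over a toy vocabulary, and all names (`decoder_step`, `E`, `Ws`, ...) are hypothetical stand-ins for the paper's deeper parametrization:

```python
V = 10                                  # toy vocabulary size
E  = 0.1 * rng.normal(size=(d_in, V))   # embedding of the previous word
Ws = 0.1 * rng.normal(size=(d_h, d_h))
Wy = 0.1 * rng.normal(size=(d_h, d_in))
Wc = 0.1 * rng.normal(size=(d_h, d_h))
Wo = 0.1 * rng.normal(size=(V, d_h))

def decoder_step(y_prev, s_prev, c):
    """s_i = f(s_{i-1}, y_{i-1}, c);  p(y_i | ...) = g(y_{i-1}, s_i, c)."""
    s = np.tanh(Ws @ s_prev + Wy @ (E @ y_prev) + Wc @ c)
    logits = Wo @ s
    p = np.exp(logits - logits.max())   # softmax over the vocabulary
    return s, p / p.sum()

y_prev = np.eye(V)[0]                   # one-hot start token (hypothetical)
s, p = decoder_step(y_prev, np.zeros(d_h), c)
```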

 

The innovation of this paper:

Rewrite the conditional probability of Eq. (2): for each $y_i$, the encoded context differs, written $c_i$:

$$p(y_i \mid \{y_1, \dots, y_{i-1}\}, x) = g(y_{i-1}, s_i, c_i)$$

Computing $c_i$: $c_i$ is expressed as a linear weighted combination of the $h_j$, where $h_j$ is an encoder hidden state, defined as an annotation. (My understanding: $h_j$ is the information around the $j$-th input word; put simply, the representation of input position $j$.)

$$c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j$$
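In code, once the weights are known (their computation is sketched further below), $c_i$ is a single weighted sum over the annotation matrix. Continuing the toy setup, with uniform placeholder weights:

```python
alpha_i = np.full(T_x, 1.0 / T_x)  # placeholder: uniform weights over positions
c_i = alpha_i @ hs                 # c_i = sum_j alpha_ij * h_j, shape (d_h,)
```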

The $\alpha$ coefficients:

$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}$$

$\alpha$, or equivalently $e$, measures the importance of the $j$-th input word's annotation to the decoder's $(i-1)$-th hidden state. The resulting $c_i$ therefore pays attention to certain positions; equivalently, the $i$-th translated word pays attention to certain positions of the original input.

$$e_{ij} = a(s_{i-1}, h_j)$$

where the alignment model $a$ is a small feedforward network trained jointly with the rest of the model.
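A sketch of this alignment model, continuing the toy setup: $e_{ij}$ is scored by a one-hidden-layer network and softmax-normalized into $\alpha_{ij}$. The dimension `d_a` and the weights `Wa`, `Ua`, `va` are assumed stand-ins:

```python
d_a = 12                                # assumed alignment-layer size
Wa = 0.1 * rng.normal(size=(d_a, d_h))
Ua = 0.1 * rng.normal(size=(d_a, d_h))
va = 0.1 * rng.normal(size=d_a)

def attention(s_prev, hs):
    # e_ij = va . tanh(Wa s_{i-1} + Ua h_j), one score per input position j
    e = np.array([va @ np.tanh(Wa @ s_prev + Ua @ h_j) for h_j in hs])
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()         # alpha_ij = softmax_j(e_ij)
    return alpha @ hs, alpha            # context c_i and the weights

c_i, alpha_i = attention(np.zeros(d_h), hs)
```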

 

Using a BiRNN:

The paper uses a bidirectional RNN to capture both the forward and backward hidden states and concatenates them, so that the annotation better represents the information around input word $j$ (see the sketch below).
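A minimal continuation of the toy setup: run the same recurrence backwards and concatenate. For brevity this reuses the forward weights `W`, `U`; in the paper the backward RNN has its own parameters:

```python
h_b = np.zeros(d_h)
hs_bwd = []
for t in reversed(range(T_x)):          # backward pass over the sentence
    h_b = np.tanh(W @ x[t] + U @ h_b)
    hs_bwd.append(h_b)
hs_bwd = np.stack(hs_bwd[::-1])         # re-align to positions 1..T_x

# annotation h_j = [forward h_j ; backward h_j], shape (T_x, 2*d_h)
annotations = np.concatenate([hs, hs_bwd], axis=1)
```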

Network architecture:

(Figure 1 of the paper: the model generating the $t$-th target word $y_t$ from the source $(x_1, \dots, x_T)$, with attention weights $\alpha_{t,j}$ over the bidirectional encoder annotations.)
