I don't think there is a built-in or std-lib utility that solves this, but you can write a small function of your own that builds a mapping from byte offsets to codepoint offsets.
Naive approach
import typing as t

def map_byte_to_codepoint_offset(text: str) -> t.Dict[int, int]:
    mapping = {}
    byte_offset = 0
    for codepoint_offset, character in enumerate(text):
        mapping[byte_offset] = codepoint_offset
        byte_offset += len(character.encode('utf8'))
    return mapping
Let's test this with your example:
>>> text = 'aβgδe'
>>> byte_offsets = [0, 1, 3, 4, 6]
>>> mapping = map_byte_to_codepoint_offset(text)
>>> mapping
{0: 0, 1: 1, 3: 2, 4: 3, 6: 4}
>>> [mapping[o] for o in byte_offsets]
[0, 1, 2, 3, 4]
Optimisation
I haven't benchmarked this, but calling .encode() separately for every character is probably not very efficient. Besides, we are only interested in the byte length of the encoded character, which can take just one of four values, each corresponding to a contiguous range of codepoints.
To obtain these ranges, you can study the UTF-8 encoding specification, look them up on the Internet, or run a quick calculation in a Python REPL:
>>> import sys
>>> bins = {i: [] for i in (1, 2, 3, 4)}
>>> for codepoint in range(sys.maxunicode+1):
... # 'surrogatepass' required to allow encoding surrogates in UTF-8
... length = len(chr(codepoint).encode('utf8', errors='surrogatepass'))
... bins[length].append(codepoint)
...
>>> for l, cps in bins.items():
... print(f'{l}: {hex(min(cps))}..{hex(max(cps))}')
...
1: 0x0..0x7f
2: 0x80..0x7ff
3: 0x800..0xffff
4: 0x10000..0x10ffff
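As a quick sanity check (a small sketch of my own, not part of the answer's code), the four ranges printed above can be turned into a direct byte-length lookup and cross-checked against str.encode; the helper name utf8_length is mine:

```python
def utf8_length(codepoint: int) -> int:
    # Byte length of a codepoint in UTF-8, using the ranges derived above
    if codepoint < 0x80:
        return 1
    if codepoint < 0x800:
        return 2
    if codepoint < 0x10000:
        return 3
    return 4

# Cross-check against str.encode for one codepoint from each range
for cp in (0x41, 0x3B2, 0x20AC, 0x1F642):
    assert utf8_length(cp) == len(chr(cp).encode('utf8'))
```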
Furthermore, the mapping returned by the naive approach contains gaps: if we look up an offset that falls in the middle of a multi-byte character, we get a KeyError (e.g. there is no key 2 in the example above). To avoid this, we can fill the gaps by repeating the codepoint offsets. Since the resulting indices are then consecutive integers starting at 0, we can use a list instead of a dict for the mapping.
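To see the gap concretely, here is a quick sketch using the naive function from above: byte offset 2 falls inside the two-byte β and is therefore missing from the dict.

```python
def map_byte_to_codepoint_offset(text):
    # Naive version from above
    mapping = {}
    byte_offset = 0
    for codepoint_offset, character in enumerate(text):
        mapping[byte_offset] = codepoint_offset
        byte_offset += len(character.encode('utf8'))
    return mapping

mapping = map_byte_to_codepoint_offset('aβgδe')
assert 2 not in mapping  # mapping[2] would raise KeyError
```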
TWOBYTES = 0x80
THREEBYTES = 0x800
FOURBYTES = 0x10000

def map_byte_to_codepoint_offset(text: str) -> t.List[int]:
    mapping = []
    for codepoint_offset, character in enumerate(text):
        mapping.append(codepoint_offset)
        codepoint = ord(character)
        for cue in (TWOBYTES, THREEBYTES, FOURBYTES):
            if codepoint >= cue:
                mapping.append(codepoint_offset)
            else:
                break
    return mapping
With the example from above:
>>> mapping = map_byte_to_codepoint_offset(text)
>>> mapping
[0, 1, 1, 2, 3, 3, 4]
>>> [mapping[o] for o in byte_offsets]
[0, 1, 2, 3, 4]
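For completeness, a quick check of my own (not from the original answer) with a 4-byte character: U+1F642 encodes to four UTF-8 bytes, so its codepoint offset appears four times, and the list's length always equals the byte length of the encoded string.

```python
import typing as t

TWOBYTES = 0x80
THREEBYTES = 0x800
FOURBYTES = 0x10000

def map_byte_to_codepoint_offset(text: str) -> t.List[int]:
    # Optimised version from above
    mapping = []
    for codepoint_offset, character in enumerate(text):
        mapping.append(codepoint_offset)
        codepoint = ord(character)
        for cue in (TWOBYTES, THREEBYTES, FOURBYTES):
            if codepoint >= cue:
                mapping.append(codepoint_offset)
            else:
                break
    return mapping

text = 'a\U0001F642b'
mapping = map_byte_to_codepoint_offset(text)
print(mapping)  # [0, 1, 1, 1, 1, 2]
assert len(mapping) == len(text.encode('utf8'))
```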