u'foo bar' is simply a string of type unicode. Both str and unicode are treated as basestring (see http://docs.python.org/2/howto/unicode.html and http://docs.python.org/2/library/functions.html#basestring):
>>> x = u'foobar'
>>> isinstance(x, str)
False
>>> isinstance(x,unicode)
True
>>> isinstance(x,basestring)
True
>>> print x
foobar
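Since basestring only exists in Python 2, a hedged, version-portable sketch of the same type check might look like this (the `string_types` tuple is my own naming, not from the original answer):

```python
import sys

# In Python 2, str and unicode both subclass basestring.
# In Python 3, basestring is gone and str is always unicode.
if sys.version_info[0] >= 3:
    string_types = (str,)          # Python 3: text is unicode by default
else:
    string_types = (basestring,)   # Python 2: covers both str and unicode

x = u'foobar'
print(isinstance(x, string_types))  # True on both Python 2 and 3
```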
When you access a corpus through NLTK's corpus readers, the default data structure is a list of sentences, where each sentence is a list of tokens and each token is a basestring.
>>> from nltk.corpus import brown
>>> print brown.sents()
[['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.'], ['The', 'jury', 'further', 'said', 'in', 'term-end', 'presentments', 'that', 'the', 'City', 'Executive', 'Committee', ',', 'which', 'had', 'over-all', 'charge', 'of', 'the', 'election', ',', '``', 'deserves', 'the', 'praise', 'and', 'thanks', 'of', 'the', 'City', 'of', 'Atlanta', "''", 'for', 'the', 'manner', 'in', 'which', 'the', 'election', 'was', 'conducted', '.'], ...]
If you want a plain-text version of the corpus, you can do:
>>> for i in brown.sents():
... print " ".join(i)
... break
...
The Fulton County Grand Jury said Friday an investigation of Atlanta's recent primary election produced `` no evidence '' that any irregularities took place .
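The join step above does not depend on NLTK at all; a minimal sketch with a hardcoded token list (a hypothetical sample, so it runs without downloading the Brown corpus) shows the same idea:

```python
# Each sentence from a corpus reader is just a list of token strings,
# so str.join with a space reconstructs a plain-text sentence.
sentence = ['The', 'jury', 'further', 'said', '.']
text = " ".join(sentence)
print(text)  # The jury further said .
```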
There is a lot of internal magic in NLTK that makes the corpora work like NLTK modules, but the easiest way to find out what is available in these "preloaded" corpora (or, more precisely, "precoded" corpus readers) is:
>>> dir(brown)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_add', '_c2f', '_delimiter', '_encoding', '_f2c', '_file', '_fileids', '_get_root', '_init', '_map', '_para_block_reader', '_pattern', '_resolve', '_root', '_sent_tokenizer', '_sep', '_tag_mapping_function', '_word_tokenizer', 'abspath', 'abspaths', 'categories', 'encoding', 'fileids', 'open', 'paras', 'raw', 'readme', 'root', 'sents', 'tagged_paras', 'tagged_sents', 'tagged_words', 'words']
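Most of the names starting with an underscore are internals; one way to narrow the dir() output down to the public reader API is to filter those out. A hedged sketch, using a dummy class of my own (since the Brown corpus may not be downloaded here) in place of `brown`:

```python
# Stand-in for a corpus reader: two public methods and one internal attribute.
class DummyReader:
    _sep = '/'            # internal detail, like _sep on the real reader

    def sents(self):      # public API, like brown.sents()
        pass

    def words(self):      # public API, like brown.words()
        pass

# dir() returns sorted names; dropping underscore-prefixed ones
# leaves just the public interface.
public = [name for name in dir(DummyReader) if not name.startswith('_')]
print(public)  # ['sents', 'words']
```

The same list comprehension over `dir(brown)` would surface methods such as `sents`, `words`, `tagged_sents`, `paras`, `raw`, and `fileids` from the listing above.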