Wenet scripts: BPE

  • Building the dict in multi_cn: join the English words with ▁ to get a ▁-joined word string, run that string through a bpe.model (trained on English text without ▁) to get subwords, and dedupe them to form the dict.
  • Building the dict in librispeech: run the plain English word string (no ▁) through a bpe.model (likewise trained on English text without ▁) to get subwords, and dedupe them to form the dict.

The difference is what gets fed to the bpe.model: single words in one case, a ▁-joined word string in the other (which then gets split apart again and encoded). Either way should work; a sketch of the two styles follows.
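A minimal sketch of the two encode styles being compared, assuming an English bpe.model already exists (the model path is only a placeholder, and the exact pieces produced depend on that model):

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load('train_960_unigram5000.model')  # placeholder: any English BPE model trained without ▁

joined = 'HELLO▁WORLD'  # multi_cn style: the words of one utterance joined with ▁

# Variant A: feed the ▁-joined string to the model in one call
print(sp.encode_as_pieces(joined))

# Variant B (what text2token.py below actually does): split on ▁, encode word by word
pieces = []
for word in joined.split('▁'):
    pieces.extend(sp.encode_as_pieces(word))
print(pieces)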

First train a BPE model on English, then put its pieces into the Chinese character dictionary to extend it (the English entries in the dict are BPE-style pieces).

When it is used, normal English text is encoded into BPE pieces for training; at inference time the BPE decoder turns the pieces back into the original English words (see the round-trip sketch below).
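A minimal round-trip sketch with the standard sentencepiece API (the model path is a placeholder; the pieces shown in the comment are only an example):

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load('train_960_unigram5000.model')

pieces = sp.encode_as_pieces('NICE TO MEET YOU')  # e.g. ['▁NICE', '▁TO', '▁MEET', '▁YOU']
text = sp.decode_pieces(pieces)                   # back to 'NICE TO MEET YOU'
print(pieces, text)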

For the Chinese-English mixed setup a 5000-piece bpe model was used, but encoding the training set shows that not all 5000 pieces actually occur (perhaps only ~500 subwords). So only those ~500 English subwords are added to the dictionary together with the Chinese characters, giving a dictionary of roughly 6-7k units (which is also the size of the final softmax output). A sketch for finding the used pieces follows.
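A sketch (file paths are placeholders, and get_piece_size() assumes the standard sentencepiece Python API) for finding which of the model's pieces actually occur when encoding the training transcripts, so that only those go into the dictionary:

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load('train_960_unigram5000.model')

used = set()
with open('data/train/text_eng', encoding='utf-8') as f:  # English-only transcripts
    for line in f:
        used.update(sp.encode_as_pieces(line.strip()))

print(len(used), 'of', sp.get_piece_size(), 'pieces are actually used')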

Count how many times each English unit occurs, to gauge whether there are enough samples to train on.

Statistics:

Method 1: count samples per English word [not adopted: we model with BPE units, not actual English words, so this statistic is meaningless]

  1. how many distinct words there are
  2. the count of each word
  3. whether each word is in the bpe_model; if not, add it to bpe.model (at a minimum, bpe.model must be able to represent it)

Method 2: count samples per English subword [adopted]

  1. make sure every word can be represented by bpe.model;
  2. see which subwords are used and how many there are;
  3. count the number of samples for each subword.

Split the space-delimited text into individual characters (for the Chinese) and convert the English into BPE-style pieces:

  • text2token.py:
#!/usr/bin/env python3

# Copyright 2017 Johns Hopkins University (Shinji Watanabe)
# Copyright 2021 JD AI Lab. All Rights Reserved. (authors: Lu Fan)
# Copyright 2021 Mobvoi Inc. All Rights Reserved. (Di Wu)
# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)

from __future__ import print_function
from __future__ import unicode_literals

import argparse
import codecs
import re
import sys

is_python2 = sys.version_info[0] == 2


def exist_or_not(i, match_pos):
    start_pos = None
    end_pos = None
    for pos in match_pos:
        if pos[0] <= i < pos[1]:
            start_pos = pos[0]
            end_pos = pos[1]
            break

    return start_pos, end_pos


def seg_char(sent):
    pattern = re.compile(r'([\u4e00-\u9fa5])')
    chars = pattern.split(sent)
    chars = [w for w in chars if len(w.strip()) > 0]
    return chars


def get_parser():
    parser = argparse.ArgumentParser(
        description='convert raw text to tokenized text',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--nchar',
                        '-n',
                        default=1,
                        type=int,
                        help='number of characters to split, i.e., \
                        aabb -> a a b b with -n 1 and aa bb with -n 2')
    parser.add_argument('--skip-ncols',
                        '-s',
                        default=1,
                        type=int,
                        help='skip first n columns')
    parser.add_argument('--space',
                        default='<space>',
                        type=str,
                        help='space symbol')
    parser.add_argument('--bpe-model',
                        '-m',
                        default='conf/train_960_unigram5000.model',
                        type=str,
                        help='bpe model for english part')
    parser.add_argument('--non-lang-syms',
                        '-l',
                        default=None,
                        type=str,
                        help='list of non-linguistic symobles,'
                        ' e.g., <NOISE> etc.')
    parser.add_argument('text',
                        type=str,
                        default='data_bpe/train/text',
                        nargs='?',
                        help='input text')
    parser.add_argument('--trans_type',
                        '-t',
                        type=str,
                        default="cn_char_en_bpe",
                        choices=["char", "phn", "cn_char_en_bpe"],
                        help="""Transcript type. char/phn. e.g., for TIMIT
                        FADG0_SI1279 -
                        If trans_type is char, read from
                        SI1279.WRD file -> "bricks are an alternative"
                        Else if trans_type is phn,
                        read from SI1279.PHN file ->
                        "sil b r ih sil k s aa r er n aa l
                        sil t er n ih sil t ih v sil" """)
    return parser


def main():
    parser = get_parser()
    args = parser.parse_args()

    rs = []
    if args.non_lang_syms is not None:
        with codecs.open(args.non_lang_syms, 'r', encoding="utf-8") as f:
            nls = [x.rstrip() for x in f.readlines()]
            rs = [re.compile(re.escape(x)) for x in nls]

    if args.bpe_model is not None:
        import sentencepiece as spm
        sp = spm.SentencePieceProcessor()
        sp.load(args.bpe_model)

    if args.text:
        f = codecs.open(args.text, encoding="utf-8")
    else:
        f = codecs.getreader("utf-8")(
            sys.stdin if is_python2 else sys.stdin.buffer)

    sys.stdout = codecs.getwriter("utf-8")(
        sys.stdout if is_python2 else sys.stdout.buffer)
    line = f.readline()
    n = args.nchar
    while line:
        x = line.split()
        print(' '.join(x[:args.skip_ncols]), end=" ")
        a = ' '.join(x[args.skip_ncols:])

        # get all matched positions
        match_pos = []
        for r in rs:
            i = 0
            while i >= 0:
                m = r.search(a, i)
                if m:
                    match_pos.append([m.start(), m.end()])
                    i = m.end()
                else:
                    break

        if len(match_pos) > 0:
            chars = []
            i = 0
            while i < len(a):
                start_pos, end_pos = exist_or_not(i, match_pos)
                if start_pos is not None:
                    chars.append(a[start_pos:end_pos])
                    i = end_pos
                else:
                    chars.append(a[i])
                    i += 1
            a = chars

        if (args.trans_type == "phn"):
            a = a.split(" ")
        elif args.trans_type == "cn_char_en_bpe":
            b = seg_char(a)
            a = []
            for j in b:
                # we use "▁" instead of blanks among english words
                # warning: here is "▁", not "_"
                for l in j.strip().split("▁"):
                    if not l.encode('UTF-8').isalpha():  # not a pure-English token (e.g. a CJK character)
                        a.append(l)
                    else:
                        for k in sp.encode_as_pieces(l):
                            if k == "▁":
                                print("yelong", end=' ')  # printed when the word is not covered by bpe.model
                            a.append(k)
        else:
            a = [a[j:j + n] for j in range(0, len(a), n)]

        a_flat = []
        for z in a:
            a_flat.append("".join(z))

        a_chars = [z.replace(' ', args.space) for z in a_flat]
        if (args.trans_type == "phn"):
            a_chars = [z.replace("sil", args.space) for z in a_chars]
        print(' '.join(a_chars))
        line = f.readline()


if __name__ == '__main__':
    main()

It turns out that many words are not covered by the bpe model [with the original bpe.model, the dictionary ends up with 4719 English entries], so the text needs cleaning. Concretely:

Cleaning

Retrain bpe.model

  1. Remove the punctuation in text

    sed -i 's/MOTHER`/MOTHER'\''/g' text.org
    # and so on: replace ,。、!: with spaces
  2. Keep only the English-containing part of the training set [where Chinese is sandwiched inside the English, removing the Chinese breaks the word order anyway; that case is simply ignored here]:

    cut -d ' ' -f 2- text | grep "[a-zA-Z]" > text_chi_eng
  3. Tokenize the English text so that it carries the ▁ symbol

    cat text_chi_eng | tr 'a-z' 'A-Z' | sed 's/\([A-Z]\) \([A-Z]\)/\1▁\2/g' | sed 's/\([A-Z]\) \([A-Z]\)/\1▁\2/g' | tr -d " " >  text_token_chi_eng

    I wrote a script to strip out the Chinese: delete_chi.py [old]. Its input is English text that already carries "▁", because the later check for English unk also uses the ▁-marked English text, so it is prepared here first.

    # python delete_chi.py > text_token_eng
    #!/usr/bin/env python3

    # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
    # Copyright 2021 JD AI Lab. All Rights Reserved. (authors: Lu Fan)
    # Copyright 2021 Mobvoi Inc. All Rights Reserved. (Di Wu)
    # Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)

    from __future__ import print_function
    from __future__ import unicode_literals

    import argparse
    import codecs
    import re
    import sys

    is_python2 = sys.version_info[0] == 2


    def exist_or_not(i, match_pos):
        start_pos = None
        end_pos = None
        for pos in match_pos:
            if pos[0] <= i < pos[1]:
                start_pos = pos[0]
                end_pos = pos[1]
                break

        return start_pos, end_pos

    def seg_char(sent):
        pattern = re.compile(r'([\u4e00-\u9fa5])')
        chars = pattern.split(sent)
        chars = [w for w in chars if len(w.strip()) > 0]
        return chars

    def get_parser():
        parser = argparse.ArgumentParser(
            description='convert raw text to tokenized text',
            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
        parser.add_argument('--nchar',
                            '-n',
                            default=1,
                            type=int,
                            help='number of characters to split, i.e., \
                            aabb -> a a b b with -n 1 and aa bb with -n 2')
        parser.add_argument('--skip-ncols',
                            '-s',
                            default=0,
                            type=int,
                            help='skip first n columns')
        parser.add_argument('--space',
                            default='<space>',
                            type=str,
                            help='space symbol')
        parser.add_argument('--bpe-model',
                            '-m',
                            default='data/lang_char/train_unigram5000.model',
                            type=str,
                            help='bpe model for english part')
        parser.add_argument('--non-lang-syms',
                            '-l',
                            default=None,
                            type=str,
                            help='list of non-linguistic symobles,'
                            ' e.g., <NOISE> etc.')
        parser.add_argument('text',
                            type=str,
                            default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/train/text_token_chi_eng',
                            # default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/test_1.4w/text_token_chi_eng',
                            # default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/lang_char/2',
                            nargs='?',
                            help='input text')
        parser.add_argument('--trans_type',
                            '-t',
                            type=str,
                            default="cn_char_en_bpe",
                            choices=["char", "phn", "cn_char_en_bpe"],
                            help="""Transcript type. char/phn. e.g., for TIMIT
                            FADG0_SI1279 -
                            If trans_type is char, read from
                            SI1279.WRD file -> "bricks are an alternative"
                            Else if trans_type is phn,
                            read from SI1279.PHN file ->
                            "sil b r ih sil k s aa r er n aa l
                            sil t er n ih sil t ih v sil" """)
        return parser


    def main():
        parser = get_parser()
        args = parser.parse_args()

        rs = []
        if args.non_lang_syms is not None:
            with codecs.open(args.non_lang_syms, 'r', encoding="utf-8") as f:
                nls = [x.rstrip() for x in f.readlines()]
                rs = [re.compile(re.escape(x)) for x in nls]

        if args.bpe_model is not None:
            import sentencepiece as spm
            sp = spm.SentencePieceProcessor()
            sp.load(args.bpe_model)

        if args.text:
            f = codecs.open(args.text, encoding="utf-8")
        else:
            f = codecs.getreader("utf-8")(
                sys.stdin if is_python2 else sys.stdin.buffer)

        sys.stdout = codecs.getwriter("utf-8")(
            sys.stdout if is_python2 else sys.stdout.buffer)
        line = f.readline()
        n = args.nchar
        while line:
            x = line.split()
            print(' '.join(x[:args.skip_ncols]), end=" ")
            a = ' '.join(x[args.skip_ncols:])

            # get all matched positions
            match_pos = []
            for r in rs:
                i = 0
                while i >= 0:
                    m = r.search(a, i)
                    if m:
                        match_pos.append([m.start(), m.end()])
                        i = m.end()
                    else:
                        break

            if len(match_pos) > 0:
                chars = []
                i = 0
                while i < len(a):
                    start_pos, end_pos = exist_or_not(i, match_pos)
                    if start_pos is not None:
                        chars.append(a[start_pos:end_pos])
                        i = end_pos
                    else:
                        chars.append(a[i])
                        i += 1
                a = chars

            if (args.trans_type == "phn"):
                a = a.split(" ")
            elif args.trans_type == "cn_char_en_bpe":
                b = seg_char(a)
                a = []
                for j in b:
                    # we use "▁" instead of blanks among english words
                    # warning: here is "▁", not "_"
                    # for l in j.strip().split(" "):
                    # count = len(j.strip().split("▁")) - 1
                    for l in j.strip().split('▁'):
                        if l.encode('UTF-8').isalpha():  # pure English letters only; words like T-SHIRT get lost here, TODO MACY'S
                            a.append(l)
                            a.append('▁')
                        # if count:
                        #     a.append('▁')
                        #     count = count - 1
            else:
                a = [a[j:j + n] for j in range(0, len(a), n)]

            a_flat = []
            for z in a:
                a_flat.append("".join(z))

            a_chars = [z.replace(' ', args.space) for z in a_flat]
            if (args.trans_type == "phn"):
                a_chars = [z.replace("sil", args.space) for z in a_chars]
            if len(a_chars) > 0:
                if a_chars[-1] == '▁':
                    print(''.join(a_chars[:-1]))
                else:
                    print(''.join(a_chars))
            line = f.readline()


    if __name__ == '__main__':
        main()

    The delete_chi.py above is fairly slow, so some lines were removed and a new delete_chi.py was written:

    #!/usr/bin/env python3

    # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
    # Copyright 2021 JD AI Lab. All Rights Reserved. (authors: Lu Fan)
    # Copyright 2021 Mobvoi Inc. All Rights Reserved. (Di Wu)
    # Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)

    from __future__ import print_function
    from __future__ import unicode_literals

    import argparse
    import codecs
    import re
    import sys

    is_python2 = sys.version_info[0] == 2


    def exist_or_not(i, match_pos):
        start_pos = None
        end_pos = None
        for pos in match_pos:
            if pos[0] <= i < pos[1]:
                start_pos = pos[0]
                end_pos = pos[1]
                break

        return start_pos, end_pos

    def seg_char(sent):
        pattern = re.compile(r'([\u4e00-\u9fa5])')
        chars = pattern.split(sent)
        chars = [w for w in chars if len(w.strip()) > 0]
        return chars

    def get_parser():
        parser = argparse.ArgumentParser(
            description='convert raw text to tokenized text',
            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
        parser.add_argument('--nchar',
                            '-n',
                            default=1,
                            type=int,
                            help='number of characters to split, i.e., \
                            aabb -> a a b b with -n 1 and aa bb with -n 2')
        parser.add_argument('--skip-ncols',
                            '-s',
                            default=0,
                            type=int,
                            help='skip first n columns')
        parser.add_argument('--space',
                            default='<space>',
                            type=str,
                            help='space symbol')
        parser.add_argument('--bpe-model',
                            '-m',
                            default='data/lang_char/train_unigram5000.model',
                            type=str,
                            help='bpe model for english part')
        parser.add_argument('--non-lang-syms',
                            '-l',
                            default=None,
                            type=str,
                            help='list of non-linguistic symobles,'
                            ' e.g., <NOISE> etc.')
        parser.add_argument('text',
                            type=str,
                            default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/test/text_token_chi_eng',
                            # default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/test_1.4w/text_token_chi_eng',
                            # default='/home/yelong/data/wenet/examples/multi_cn/s0/data_4000_add_we/lang_char/2',
                            nargs='?',
                            help='input text')
        parser.add_argument('--trans_type',
                            '-t',
                            type=str,
                            default="cn_char_en_bpe",
                            choices=["char", "phn", "cn_char_en_bpe"],
                            help="""Transcript type. char/phn. e.g., for TIMIT
                            FADG0_SI1279 -
                            If trans_type is char, read from
                            SI1279.WRD file -> "bricks are an alternative"
                            Else if trans_type is phn,
                            read from SI1279.PHN file ->
                            "sil b r ih sil k s aa r er n aa l
                            sil t er n ih sil t ih v sil" """)
        return parser


    def main():
        parser = get_parser()
        args = parser.parse_args()

        rs = []

        if args.bpe_model is not None:
            import sentencepiece as spm
            sp = spm.SentencePieceProcessor()
            sp.load(args.bpe_model)

        if args.text:
            f = codecs.open(args.text, encoding="utf-8")
        else:
            f = codecs.getreader("utf-8")(
                sys.stdin if is_python2 else sys.stdin.buffer)

        sys.stdout = codecs.getwriter("utf-8")(
            sys.stdout if is_python2 else sys.stdout.buffer)
        line = f.readline()
        n = args.nchar
        while line:
            # x = line.split()
            # print(' '.join(x[:args.skip_ncols]), end=" ")
            # a = ' '.join(x[args.skip_ncols:])
            a = line.strip()

            # get all matched positions
            b = seg_char(a)
            a = []
            for j in b:
                # we use "▁" instead of blanks among english words
                # warning: here is "▁", not "_"
                # for l in j.strip().split(" "):
                # count = len(j.strip().split("▁")) - 1
                for l in j.strip().split('▁'):
                    if l.encode('UTF-8').isalpha() or "'" in l:  # pure English letters (or contains '); words like T-SHIRT get lost here, TODO MACY'S
                        a.append(l)
                        a.append('▁')
                    # if count:
                    #     a.append('▁')
                    #     count = count - 1

            a_flat = []
            for z in a:
                a_flat.append("".join(z))

            a_chars = [z.replace(' ', args.space) for z in a_flat]
            if len(a_chars) > 0:
                if a_chars[-1] == '▁':
                    print(''.join(a_chars[:-1]))
                else:
                    print(''.join(a_chars))
            line = f.readline()


    if __name__ == '__main__':
        main()

    Run text_token_eng through the English-only token_fast_eng.py and check whether any <unk> or "▁ " shows up; note that this text_token_eng is not the text_token_eng in data_4000_add_we_bpe/train.

    sed -i 's/ *$//' text_token_eng
    sed -i 's/^ *//' text_token_eng
    python token_fast_eng.py > i
    grep "<unk>" i
    # grep "▁ " i

    where token_fast_eng.py is:

    from __future__ import print_function
    from __future__ import unicode_literals

    import argparse
    import codecs
    import re
    import sys
    is_python2 = sys.version_info[0] == 2

    def __tokenize_by_bpe_model(sp, txt):
        tokens = []
        # CJK(China Japan Korea) unicode range is [U+4E00, U+9FFF], ref:
        # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
        pattern = re.compile(r'([\u4e00-\u9fff])')
        # Example:
        #   txt = "你好 ITS'S OKAY 的"
        #   chars = ["你", "好", " ITS'S OKAY ", "的"]
        chars = pattern.split(txt.upper())
        mix_chars = [w for w in chars if len(w.strip()) > 0]
        for ch_or_w in mix_chars:
            # ch_or_w is a single CJK charater(i.e., "你"), do nothing.
            if pattern.fullmatch(ch_or_w) is not None:
                tokens.append(ch_or_w)
            # ch_or_w contains non-CJK charaters(i.e., " IT'S OKAY "),
            # encode ch_or_w using bpe_model.
            else:
                for p in sp.encode_as_pieces(ch_or_w):
                    tokens.append(p)

        return tokens


    def tokenize(sample, sp,
                 symbol_table):
        """ Decode text to chars or BPE
            Inplace operation

            Args:
                data: Iterable[{key, wav, txt, sample_rate}]

            Returns:
                Iterable[{key, wav, txt, tokens, label, sample_rate}]
        """
        txt = sample['txt'].strip()
        parts = [txt]
        tokens = []
        for part in parts:
            tokens.extend(__tokenize_by_bpe_model(sp, part))

        for i in range(len(tokens)):
            ch = tokens[i]
            if ch not in symbol_table:
                tokens[i] = '<unk>'
            # elif '<unk>' in symbol_table:
            #     label.append(symbol_table['<unk>'])

        sample['tokens'] = tokens
        # sample['label'] = label
        return sample

    def read_symbol_table(symbol_table_file):
        symbol_table = {}
        with open(symbol_table_file, 'r', encoding='utf8') as fin:
            for line in fin:
                arr = line.strip().split()
                assert len(arr) == 2
                symbol_table[arr[0]] = int(arr[1])
        return symbol_table

    def main():

        import sentencepiece as spm
        sp = spm.SentencePieceProcessor()
        sp.load('data_4000_add_we/lang_char/train_unigram100.model')
        # sp.load('data_4000_add_we/lang_char/train_unigram500.model')
        # sp.load('data_4000_add_we/lang_char/train_unigram1000.model')
        symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt.bpe_100_eng600_chi4700_all5300')
        # symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt.bpe_500_eng1000_chi4700_all5700')
        # symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi4700_all6000')
        f = codecs.open('data_4000_add_we/test/text_token_eng1', encoding="utf-8")
        # f = codecs.open('data_4000_add_we/train/text_token_eng', encoding="utf-8")
        # f = codecs.open('data_4000_add_we/test_1.4w/text_token_eng', encoding="utf-8")
        sys.stdout = codecs.getwriter("utf-8")(
            sys.stdout if is_python2 else sys.stdout.buffer)
        line = f.readline()
        while line:
            # if len(line.strip().split()) > 1:
            data = {}
            # data['key'] = line.strip().split()[0]
            data['txt'] = ''.join(line.strip().split())
            sample = tokenize(data, sp,
                              symbol_table)
            print(''.join(sample['tokens']))
            # print(' '.join(sample['tokens']))
            # print(sample['key'], sample['tokens'])
            line = f.readline()

    if __name__ == '__main__':
        main()
    cut -d ' ' -f 2- data/train/text | grep "[a-zA-Z]"  > input.txt
    /home/yelong/data/wenet/examples/multi_cn/s0/delete_chi.py input.txt > text_for_bpe_model
    sed -i 's/SIL//g' text_for_bpe_model # drop the SIL symbol (not needed when training the bpe model)
    sed -i '/^\s*$/d' text_for_bpe_model # remove empty lines
    sed -i 's/ \+/ /g' text_for_bpe_model # collapse consecutive spaces
    sed -i 's/ *$//' text_for_bpe_model # strip trailing spaces
    sed -i 's/^ *//' text_for_bpe_model # strip leading spaces
    sed -i '/^\s*$/d' text_for_bpe_model # remove empty lines
  4. In the training set, replace SIL with blank; it should not take part in bpe.model training (for now it is simply deleted).

  5. See how many distinct words there are:

    cat text_for_bpe_model | tr '\t' ' ' | awk '{if(NF>1)print$0}' | cut -d ' ' -f 2- | tr ' ' '\n'  | sort -u | wc -l
    # 24062 (about 20k words)

    One could also do a quick word-frequency count here (there are 2,041,819 lines of text containing English), but it is not very meaningful, because in the end word frequency is not what decides what goes into the dictionary.

  6. Split apart glued-together words such as QQ; these come from words that do not exist in train_960_unigram5000.model (the ones for which text2token above prints "yelong") [not done]

sed -i 's/▁/ /g' qq

  7. Train a new bpe.model (a training sketch follows below)
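    A minimal sketch of retraining the BPE model on the cleaned English-only text, assuming a recent sentencepiece; the vocab size (1000 here) is just one of the sizes tried in these notes (100 / 500 / 1000 / 5000), and the output prefix is a placeholder:

    import sentencepiece as spm

    spm.SentencePieceTrainer.train(
        input='text_for_bpe_model',                       # cleaned English text from the steps above
        model_prefix='data/lang_char/train_unigram1000',  # writes .model and .vocab
        vocab_size=1000,
        model_type='unigram',
        character_coverage=1.0,
    )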

Generating the dict

The datasets used are 101 and 100 (4000 h).

  1. Run the training set through the newly trained bpe.model to get the training set in subword form; note that in this training set the English words are joined with ▁; this uses text2token.py. The dedup step is sketched after the list below.

    1. Dedupe to get the dict. Note that with the new bpe.model (5000 subwords) the final dictionary contains 5300 English entries, even more than with the mismatched librispeech-trained model (4700);
    2. Dedupe to get the dict: with the new bpe.model (1000 subwords) the dictionary has 1300 English entries;
    3. Dedupe to get the dict: with the new bpe.model (100 subwords) the dictionary has 500 English entries.
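    A rough sketch of the dedup step (the special symbols and their ordering follow the usual wenet dict layout, which is an assumption here; file names are placeholders):

    # collect the unique tokens of the tokenized training text and number them into a dict
    tokens = set()
    with open('text_token', encoding='utf-8') as f:
        for line in f:
            tokens.update(line.split())  # drop the first field instead if an utterance id is kept

    with open('lang_char.txt', 'w', encoding='utf-8') as out:
        out.write('<blank> 0\n<unk> 1\n')
        for i, tok in enumerate(sorted(tokens), start=2):
            out.write('{} {}\n'.format(tok, i))
        out.write('<sos/eos> {}\n'.format(len(tokens) + 2))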

Adding WenetSpeech text data (which also contains Chinese-English code-switching)

Everything above used the text of Lei Bo's 4000-hour Chinese-English mixed data; now the WenetSpeech text data is added as well.

With a 1000-subword bpe.model, the dictionary then has 1480 English entries.

Counting the number of samples per subword

Here text_token is the text converted to subword form (a token is counted at most once per utterance line).

tools/text2token.py -s 0 -n 1 -m ${bpecode} \
data_4000_add_we_${en_modeling_unit}/${train_set}/text_chi_eng ${trans_type_ops} > data_4000_add_we_bpe/train/text_token

# Method 1:
awk '{print"grep -w \""$1"\" text_token | wc -l "}' ../../data_4000_add_we/dict_bpe/lang_char.txt > 1
. ./1
# In hindsight this scans the text once per dictionary entry (lines x tokens), which is slow;
# collecting all tokens and making a single pass over the text is faster (see the sketch below).

# Method 2: [turned out not to work]
# cut -d ' ' -f 1 ../../data_4000_add_we/dict_bpe/lang_char.txt | tr '\n' '|' | awk '{print"grep -E \""$1"\" text_token "}'
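A single-pass alternative in Python (counting a token at most once per line, as stated above; the paths mirror the ones used here and are otherwise placeholders):

from collections import Counter

# vocabulary tokens from the dict
with open('../../data_4000_add_we/dict_bpe/lang_char.txt', encoding='utf-8') as f:
    vocab = {line.split()[0] for line in f if line.strip()}

# one pass over the tokenized text, counting utterances per token
counts = Counter()
with open('text_token', encoding='utf-8') as f:
    for line in f:
        counts.update(set(line.split()) & vocab)

for token in sorted(vocab):
    print(token, counts[token])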

In the dictionary built with the 1000-subword bpe.model,

  • English has 1480 entries:

468 dictionary entries have a frequency below 500; that feels like too few samples to train well, so these should either get more data or be decomposed further.

476 entries fall in the 500-2000 range, a passable amount; it is unclear whether that is enough to train the modeling unit well.

533 entries have more than 2000 samples, which is considered enough to build a model on.

The latter two groups together make up roughly 70%; the first group about 30%, so entries with frequency below 500 are still very common, as much as 30% of the English entries.

  • Chinese has 7200 characters:

3363 dictionary entries have a frequency below 500: too few samples to train well, so these should either get more data or simply be removed; the Chinese part of the dictionary is on the large side anyway, around 5000 characters would be more appropriate, and many rare characters can be dropped.

987 entries fall in the 500-2000 range, a passable amount; it is unclear whether that is enough to train the modeling unit well.

2859 entries have more than 2000 samples, which is considered enough to build a model on.

The latter two groups make up about 54%; the first group 46%, so characters with frequency below 500 are extremely common, as much as 46% of the Chinese characters!

(The training set has about 270 million characters in total; spread over 7200 characters that averages about 37k occurrences per character.)

  • Characters occurring more than 1,000,000 times: 44
一 上 下 不 个 为 么 也 了 人 什 他 以 们 会 你 到 去 可 后 吗 啊 在 大 天 好 子 就 得 我 时 是 有 来 没 的 看 能 要 说 还 这 那 都
  • Characters occurring fewer than 100 times: 2512 (if removed, the Chinese part of the dictionary keeps 4700 characters)
○ 㖏 丌 丟 両 丨 丶 乂 乗 乜 乩 亊 亍 亓 亶 亹 仂 仝 仞 仟 仡 仮 仵 伃 伉 伋 伛 伝 伥 伧 伱 伲 伷 佉 佚 佢 佥 佧 佶 佺 佻 佾 侂 來 侉 侑 侔 侩 侪 俅 俎 俚 俛 俜 俟 俣 俤 俦 俳 俵 俶 俾 倅 個 倌 們 倢 倥 倧 倨 倬 倮 偁 偈 偓 偪 偲 偾 傈 傧 傩 傺 僆 僕 僖 僢 僬 僭 僮 僳 僶 儋 儍 兒 兕 內 円 冇 冏 冔 冨 冫 凃 凇 凕 凖 凪 凫 凼 刈 刖 別 刭 刳 刼 刿 剀 剋 剌 剛 剜 剞 剡 剣 劂 劢 劬 劭 効 劻 劼 勍 勐 動 勖 勣 勧 勰 勷 匄 匏 匦 匼 卅 単 卟 卣 卬 卮 卲 卺 卻 厍 厓 厔 厖 厙 厝 厣 厩 厶 叁 叄 叆 収 叻 吋 吔 吡 吲 吶 吿 呋 呎 呒 呓 呔 呖 呙 呣 呤 咁 咘 咝 咲 咴 咵 咾 哂 哌 哓 哕 哚 哜 哞 唑 唗 唛 唪 唳 唷 唻 唿 啁 啉 啐 啖 啫 啭 啮 啲 啶 啻 喁 喈 喑 喒 喙 喟 喭 喯 喰 営 喹 喾 嗄 嗉 嗌 嗍 嗎 嗐 嗙 嗞 嗥 嗪 嗬 嗮 嗳 嗵 嗾 嘁 嘅 嘌 嘏 嘢 嘤 嘧 嘬 嘭 噃 噏 噘 噙 噤 噫 嚅 嚆 嚒 嚟 嚭 嚯 嚲 囍 囗 囝 囟 団 囫 図 囵 囹 囿 圄 圉 圜 圧 圪 圬 圮 圯 圴 圹 圻 坌 坒 坜 坩 坫 坭 坶 坻 坼 垆 垉 垌 垍 垓 垕 垚 垟 垡 垤 垧 垩 垭 垱 垲 垴 垸 垿 埇 埈 埏 埒 埕 埗 埘 埙 埚 埜 埝 埤 埭 埴 埵 埸 埼 埽 堀 堃 堇 堋 堌 堍 堙 堞 堠 堨 堺 塄 塍 塡 塩 塬 塱 塽 墀 墁 墉 墋 墎 墒 墘 墡 墪 壅 壥 売 壴 壵 壸 壻 夌 夔 夤 夼 奁 奝 奫 奭 妁 妗 妣 妤 妧 妪 妫 妯 妱 妳 姌 姍 姒 姘 姞 姹 娈 娉 娌 娵 婄 婖 媖 媞 媪 媵 媾 嫒 嫘 嫚 嫝 嫠 嫫 嫯 嫱 嫲 嬅 嬖 嬗 嬜 嬲 孀 孑 孓 孛 孥 孱 孳 學 宍 実 寔 寘 寛 寤 實 尅 對 尓 尜 尟 尥 屃 屄 屐 屙 屣 屮 屺 岀 岈 岍 岘 岙 岜 岢 岣 岫 岬 岵 岽 岿 峁 峄 峇 峋 峤 峯 崀 崃 崆 崐 崒 崞 崤 崦 崧 崮 嵂 嵊 嵎 嵒 嵖 嵛 嵝 嵨 嵫 嵬 嵯 嵴 嶂 嶃 嶋 嶓 嶙 嶝 嶷 巯 巳 巻 巽 巿 帀 帏 帑 帔 帙 帱 帶 帻 幛 幞 幹 庋 庑 庠 庥 庹 廑 廕 廛 廨 廪 廻 廼 廾 廿 弁 弇 弐 弭 弶 彀 彊 彖 彘 彟 彧 彳 彿 徂 徉 後 徕 徜 徭 徳 徵 徼 忄 忖 忝 忪 忭 忸 忾 忿 怃 怊 怍 怏 怙 怛 怩 怿 恂 恚 恧 恫 恵 恸 恹 恽 悃 悆 悌 悒 悕 悛 悝 悫 悭 悱 悳 惇 惎 惡 惢 惲 惴 愀 愆 愍 愎 愔 愘 愛 愠 慉 慊 慒 慜 慝 慥 憀 憍 憙 憷 懑 懔 懶 戆 戋 戕 戗 戡 戢 戥 戦 戸 戽 扃 扞 扥 扦 扽 抔 抟 拊 拶 挈 挢 挲 挹 捌 捘 捜 捭 捯 捱 捴 掊 掎 掞 掭 掮 掴 掼 掾 揄 揆 揠 揩 揶 揸 揺 揾 揿 搠 搢 搦 搧 搴 搵 搽 摅 摈 摛 摭 摺 摽 撃 撄 撘 撙 撷 撺 擗 擘 擢 擤 攉 攫 攮 攴 敕 斫 斱 旃 旄 旆 旎 旒 旖 旰 旴 旸 旻 旼 昃 昇 昉 昝 昫 昶 昺 時 晊 晙 晞 晡 晳 晷 晻 暌 暕 暝 暦 暹 暾 曈 曛 曩 曷 朊 朐 杈 杓 杝 杪 杬 杲 杼 枋 枘 枞 枥 枧 枨 枰 枱 枲 枳 枹 柁 柃 柈 柊 柒 柘 柙 柝 柞 柟 柢 柤 柰 柷 柸 柽 栄 栊 栌 栎 栝 栱 栲 栳 栻 桁 桄 桅 桉 桎 桕 桜 桠 桡 桤 桫 桯 桴 桷 桼 梃 梏 梶 棂 棨 棰 棹 棻 棼 椁 椋 椐 椟 椤 椪 椴 椵 椹 椽 楀 楗 楙 楝 楢 楦 楫 楮 楯 楱 楸 楹 楽 榇 榉 榑 榖 榘 榧 榫 榼 槁 槊 槎 槔 様 槩 槭 槲 槻 樉 樋 樓 樗 樘 樨 権 樯 樽 樾 橐 橛 橥 橹 橼 檄 檎 檗 檦 檩 檫 檵 櫾 權 欤 欷 欸 欹 欻 歃 歐 歔 歘 歙 歩 歯 歳 歴 殁 殂 殄 殍 殚 殛 殪 殭 毐 毖 毘 毳 毹 氅 氆 氇 氍 氐 氕 氖 気 氘 氙 氚 氡 氣 氤 氩 氲 氽 氾 汆 汊 汎 汏 汔 汜 汨 汩 沄 沆 沇 沒 沔 沢 沤 沩 況 泅 泆 泐 泖 泘 泚 泠 泫 泬 泮 泺 洄 洇 洌 洎 洑 洣 洧 洨 洮 洳 洸 洹 洺 浃 浈 浉 浍 浐 浗 浞 浠 浥 浯 浼 涑 涔 涖 涘 涙 涠 涫 涬 涼 淖 淙 淛 淝 淠 淯 渀 済 渌 渑 渫 湉 湎 湓 湔 湜 湝 湟 湣 湫 満 溆 溍 溏 溘 溦 溱 溻 溽 滂 滏 滓 滗 滘 滠 滢 滹 滺 漭 漼 潆 潋 潟 潩 潲 潴 澉 澌 澍 澚 澧 澪 澴 澶 澹 濉 濛 濞 濩 濬 濯 瀍 瀣 瀬 灣 炁 炆 炔 炘 炝 炟 炴 烀 烃 烔 烜 焐 焓 焗 焘 無 煅 煊 煨 煳 煺 熘 熳 熵 燊 燚 燠 燧 燮 燹 爝 爨 爰 爿 牁 牂 牍 牖 牝 牤 牯 牾 犍 犰 犲 犴 犸 狃 狍 狎 狒 狝 狨 狯 狲 狳 狴 狷 狺 狻 猁 猇 猊 猗 猞 猡 猢 猱 猲 猷 猸 猹 獐 獠 獣 獬 玎 玑 玘 玚 玢 玦 玳 玹 珙 珜 珣 珥 珧 珪 珮 珰 珽 現 琇 琌 琍 琎 琚 琠 琬 琮 琯 琲 瑀 瑊 瑗 瑨 瑭 瑮 瑱 瑴 瑷 璁 璈 璘 璟 璠 璩 瓠 瓤 瓴 瓿 甑 甙 甯 甾 畀 畈 畊 畋 畎 畑 畚 畦 畯 畲 當 畹 畿 疃 疋 疎 疔 疖 疠 疥 疬 疰 疳 疴 疸 疽 痂 痈 痍 痖 痦 痩 痼 瘅 瘆 瘊 瘌 瘐 瘕 瘗 瘘 瘢 瘥 瘰 瘳 瘼 瘿 癀 癃 癍 癔 癯 癸 発 皁 皌 皕 皝 皤 皲 皴 盁 盂 盍 盤 盥 盩 眀 眄 眇 県 眍 眚 眛 眢 眦 眬 眭 睃 睇 睖 睚 睟 睥 睨 瞀 瞋 瞢 瞫 瞵 瞽 矅 矍 矐 矱 矸 矽 砀 砗 砜 砟 砢 砣 砦 砧 砩 砫 砬 砭 砲 砻 砼 硇 硎 硐 硖 硗 硚 硪 硭 硷 硼 碁 碇 碓 碚 碛 碥 碲 碶 磉 磔 磙 磡 磬 磲 磴 磻 磾 礅 礌 礓 礤 礻 礽 祆 祇 祊 祏 祐 祓 祕 祗 祚 祜 祢 祧 禊 禚 禛 禨 禩 禫 禳 秕 秣 秫 秭 稂 稔 稗 稙 稹 穀 穂 穑 穣 穰 穸 窀 窠 窣 窨 窭 窸 窾 竑 竚 竦 竲 竽 笄 笏 笕 笞 笪 笫 笮 笳 笸 笹 笺 筆 筇 筌 筘 筚 筭 筮 筲 箅 箐 箓 箜 箝 箦 箧 箪 箬 箸 箾 篁 篌 篙 篚 篥 篦 篪 篼 篾 簃 簋 簌 簏 簖 簟 簦 籀 籓 籴 籼 粜 粝 粞 粢 粲 粳 粼 粿 糁 糅 糇 糌 糍 糨 糬 糸 紘 紙 紡 経 結 絜 給 絺 継 綦 綮 綽 緊 総 緑 縠 縡 縯 縻 績 繇 纁 纔 纛 纡 纩 纮 纻 纾 绀 绂 绉 绋 绌 绐 绗 绠 绦 绨 绲 绶 绺 绻 缁 缂 缃 缑 缒 缗 缛 缟 缣 缦 缧 缫 缬 缯 缱 缲 缳 缵 缶 缾 罃 罅 罍 罘 罟 罨 罴 罾 羝 羟 羣 羧 羰 羱 羸 羼 翕 翙 翚 翥 翦 翮 耄 耆 耋 耒 耔 耖 耜 耧 耨 耪 耵 聃 聍 聒 聡 聩 聱 聴 聿 肄 肟 肣 肫 肭 肸 肼 胂 胄 胍 胗 胙 胛 胝 胨 胪 胬 胲 胴 胼 脁 脒 脔 脘 脞 脩 脰 脲 腈 腓 腘 腙 腠 腧 腭 腴 腽 膦 臁 臕 臜 臬 臺 臾 舁 舂 舄 舐 舛 舢 舣 舨 舯 舸 舾 艄 艉 艋 艏 艨 艮 艹 艿 芄 芎 芑 芔 芗 芘 芟 芤 芨 芩 芫 芰 芴 芵 芾 苁 苄 苈 苋 苌 苎 苒 苕 苜 苡 苤 苪 苫 苴 苻 苾 茀 茆 茇 茈 茌 茏 茑 茔 茕 茛 茝 茭 茺 茼 荅 荇 荏 荑 荛 荜 荠 荦 荩 荪 荭 荸 荽 莒 莙 莛 莜 莠 莦 莨 莩 莪 莳 莶 莸 莼 菀 菈 菔 菖 菘 菝 菟 菡 菪 菰 菽 萁 萆 萋 萏 萘 萜 萩 萬 萸 萼 葑 葙 葚 葜 葭 葳 葶 葺 蒌 蒑 蒗 蒡 蒨 蒯 蒴 蒹 蒺 蒽 蓁 蓊 蓍 蓖 蓠 蓣 蓥 蓼 蓿 蔟 蔣 蔸 蕈 蕐 蕖 蕞 蕤 蕲 蕹 蕺 蕻 薁 薜 薤 薨 薬 薮 薳 薷 薹 藁 藜 蘅 蘇 蘖 蘡 蘧 蘩 蘼 虓 虛 虢 虬 虮 虺 虻 虼 虿 蚋 蚍 蚜 蚡 蚧 蚨 蚬 蚰 蚴 蚵 蚶 蚺 蚿 蛄 蛉 蛏 蛞 蛩 蛭 蛱 蛲 蛸 蜃 蜇 蜉 蜊 蜍 蜛 蜞 蜢 蜩 蜮 蜱 蝓 蝣 蝤 蝥 蝮 蝰 蝲 蝻 蝽 蝾 螅 螈 螟 螫 螬 螭 螯 
螵 螽 蟊 蟛 蟥 蟪 蟮 蟲 蠃 蠊 蠓 蠖 蠛 蠨 蠲 蠳 蠹 衄 衝 衮 衽 衾 衿 袆 袝 袢 裉 裒 裛 裡 裢 裥 裨 裼 裾 褊 褓 褔 褙 褡 褦 褫 褭 襀 襁 襃 襞 襦 襶 見 視 覩 親 観 觇 觌 觏 觚 觜 觥 觧 觯 觱 訇 訏 訚 許 訾 詃 詧 誊 說 調 謇 謦 謩 讃 變 讐 讠 讣 讦 讫 讵 诂 诌 诐 诒 诔 诖 诘 诜 诤 诨 诮 诰 诳 诹 诼 谂 谄 谆 谌 谔 谖 谘 谝 谠 谡 谫 谮 谯 谰 谲 谳 谵 谶 豇 豉 豊 豕 豝 豢 豨 豳 豷 豸 貅 貉 貊 貓 貔 貕 貘 負 買 貿 贲 贳 贶 贽 赀 赉 赍 赑 赓 赙 赜 赟 赧 赭 赳 趄 趔 趵 趸 趺 趼 趿 跏 跖 跗 跣 跩 跫 跬 跱 跶 跸 跹 跼 跽 踅 踔 踟 踬 踯 踺 踽 蹀 蹁 蹇 蹍 蹑 蹙 蹚 蹩 蹰 蹼 躅 躐 躞 車 転 輀 轫 轭 轳 轸 轹 轾 辂 辇 辊 辋 辎 辏 辔 辚 辺 込 迓 迤 迨 迩 迮 迳 逄 逋 逓 逖 這 逡 逦 逯 逶 遄 過 遑 遘 遠 適 遰 遽 還 邅 邕 邗 邘 邙 邛 邠 邨 邰 邳 邴 邶 邽 邾 郃 郄 郇 郈 郍 郏 郓 郕 郗 郛 郜 郢 郤 郧 郫 郯 郾 鄄 鄅 鄋 鄕 鄜 鄣 鄩 鄫 鄮 鄯 鄹 酃 酆 酇 酎 酐 酞 酡 酢 酤 酩 酹 酺 酽 醅 醌 醍 醐 醚 醢 醣 醥 醪 醮 醯 醲 醴 醵 釆 鉄 鉏 銀 銷 鋈 鋐 鋒 鋳 錒 録 錾 鍉 鍊 鍒 鎉 鎏 鎛 鏊 鏐 鏖 钆 钇 钋 钌 钍 钎 钒 钕 钚 钜 钡 钣 钤 钪 钫 钭 钯 钲 钴 钶 钸 钹 钺 钼 钽 钿 铄 铈 铊 铋 铌 铍 铑 铒 铕 铖 铗 铙 铚 铞 铟 铥 铨 铩 铪 铫 铯 铱 铳 铷 铼 锃 锆 锇 锉 锊 锑 锒 锓 锔 锕 锗 锘 锛 锜 锝 锟 锨 锫 锳 锴 锶 锷 锸 锺 镆 镊 镋 镌 镏 镒 镓 镔 镗 镘 镙 镚 镛 镝 镞 镠 镡 镢 镦 镧 镨 镩 镪 镫 镬 镮 镱 镲 長 開 閑 閟 関 閤 闇 闘 闩 闱 闳 闼 闿 阃 阆 阇 阈 阊 阋 阌 阍 阏 阒 阕 阗 阝 阬 阼 阽 陉 陔 陖 陟 陧 陬 陲 陳 陴 険 陽 隈 隗 隰 隳 隹 隻 雉 雒 雔 雠 離 雩 雫 雱 霈 霊 霑 霪 霰 靑 靛 靰 靺 靼 鞆 鞑 鞒 鞣 鞥 鞨 鞯 鞲 鞴 鞶 韪 韫 頔 頠 頫 顗 顸 顼 颀 颃 颉 颎 颏 颔 颙 颛 颞 颟 颡 颢 颧 飑 飗 飨 飮 飯 餮 饔 饧 饬 饯 饴 饸 饹 馐 馑 馓 馔 馕 馲 駃 駆 験 騕 騜 騠 騪 騴 驩 驵 驺 驽 骈 骎 骒 骓 骕 骖 骘 骝 骟 骠 骢 骧 骶 骹 骺 髀 髁 髂 髌 髑 髗 髙 髡 髪 髫 髭 髯 髹 髽 鬃 鬄 鬈 鬏 鬐 鬣 鬲 鬶 鬻 魃 魆 魉 魋 魍 魑 鮑 鲀 鲂 鲃 鲆 鲇 鲋 鲌 鲎 鲐 鲑 鲔 鲖 鲚 鲛 鲠 鲡 鲢 鲣 鲥 鲧 鲩 鲭 鲮 鲯 鲰 鲱 鲳 鲵 鲷 鲺 鲽 鳃 鳇 鳊 鳋 鳎 鳏 鳐 鳓 鳔 鳙 鳜 鳟 鳢 鳣 鳯 鳳 鴐 鴶 鵴 鶒 鶲 鷏 鷩 鷲 鸂 鸨 鸩 鸪 鸫 鸬 鸮 鸰 鸱 鸲 鸶 鸷 鸸 鸹 鸻 鹀 鹁 鹄 鹇 鹈 鹎 鹔 鹕 鹗 鹘 鹚 鹛 鹞 鹟 鹣 鹧 鹩 鹪 鹫 鹬 鹮 鹯 鹳 鹾 麂 麇 麈 麴 麸 麹 麼 麾 麿 黃 黉 黍 黒 點 黟 黠 黡 黢 黥 黧 黩 黻 鼋 鼐 鼙 鼩 鼯 鼱 鼷 鼽 齁 齉 齑 齮 龃 龅 龆 龇 龉 龠 龢 !

All the low-frequency Chinese characters are removed, leaving the 4700 high-frequency ones; path: 10.22.24.2:~/data/wenet/examples/multi_cn/s0/data_4000_add_we/dict_bpe/lang_char.txt

Removal:

cat 2 | tr ' ' '\n' | awk '{print"sed -i -e '\''/"$0"'\''/d 1"}' > 3

4700 characters may still be a bit too few, so a few more are added:

Find the characters that appear in the test set, are missing from this dictionary, but were present in the original 8000-character dictionary.

findout.py:

from __future__ import print_function
from __future__ import unicode_literals

import argparse
import codecs
import re
import sys
is_python2 = sys.version_info[0] == 2

# i_unk_ii
# 6792 草书千字文是宋徽宗赵1传世 6792 草书千字文是宋徽宗赵诘传世
# 10700 不用须1接上次 10700 不用须臾接上次
# 10813 是我国古代王室在龟甲或兽骨上1刻的文字 10813 是我国古代王室在龟甲或兽骨上镌刻的文字
# 21925 在澳大利亚的国徽上也有这样的动物左边的是袋鼠右边的是11 21925 在澳大利亚的国徽上也有这样的动物左边的是袋鼠右边的是鸸鹋


def read_symbol_table(symbol_table_file):
    symbol_table = {}
    with open(symbol_table_file, 'r', encoding='utf8') as fin:
        for line in fin:
            arr = line.strip().split()
            assert len(arr) == 2
            symbol_table[arr[0]] = int(arr[1])
    return symbol_table

symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt.8000')
f = codecs.open('i_unk_ii', encoding="utf-8")
sys.stdout = codecs.getwriter("utf-8")(
    sys.stdout if is_python2 else sys.stdout.buffer)
line = f.readline()
while line:
    unk = line.strip().split()[1]
    label = line.strip().split()[3]
    for i in range(len(unk)):
        if unk[i] != label[i] and label[i] in symbol_table:
            print(label[i])
    line = f.readline()

There are 517 such characters; for now they are all added, giving about 5200 Chinese characters in total, so the dictionary size is tentatively set to ==6691== units (1475 English, 6214 Chinese).

Coverage statistics against the test sets

  1. First clean the test-set text [for training it is enough to add the cleaning script to the pipeline; there is no need to hand the processed training text to Hua Ge]:

    sed -i -e '/�/d' text
    sed -i 's/:/ /g' text
    sed -i 's/%/ /g' text
    sed -i 's/+/ /g' text
    sed -i 's/-/ /g' text
    sed -i 's/,/ /g' text
    sed -i 's/,/ /g' text
    sed -i 's/。/ /g' text
    sed -i 's/、/ /g' text
    sed -i 's/·/ /g' text
    sed -i 's/~/ /g' text
    sed -i 's/?/ /g' text
    sed -i 's/…/ /g' text
    sed -i 's/“/ /g' text
    sed -i 's/”/ /g' text
    sed -i 's/@/ /g' text
    sed -i 's/!/ /g' text
    sed -i 's/\./ /g' text
    cut -d ' ' -f 2- text | sed 's/[0-9]/ /g' > 1
    cut -d ' ' -f 1 text | paste -d ' ' - 1 > 2
    mv 2 text
    rm 1
    # sed -i 's/[0-9]/ /g' text
  2. Convert the text to subword tokens first, then check whether each token is in the dictionary (if not, it is unk); the tokenize function is lifted out of wenet/dataset/processor.py.

My own token.py: whenever an unk appears, the test set contains characters that the dictionary does not have.

from __future__ import print_function
from __future__ import unicode_literals

import argparse
import codecs
import re
import sys
is_python2 = sys.version_info[0] == 2

def __tokenize_by_bpe_model(sp, txt):
    tokens = []
    # CJK(China Japan Korea) unicode range is [U+4E00, U+9FFF], ref:
    # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
    pattern = re.compile(r'([\u4e00-\u9fff])')
    # Example:
    #   txt = "你好 ITS'S OKAY 的"
    #   chars = ["你", "好", " ITS'S OKAY ", "的"]
    chars = pattern.split(txt.upper())
    mix_chars = [w for w in chars if len(w.strip()) > 0]
    for ch_or_w in mix_chars:
        # ch_or_w is a single CJK charater(i.e., "你"), do nothing.
        if pattern.fullmatch(ch_or_w) is not None:
            tokens.append(ch_or_w)
        # ch_or_w contains non-CJK charaters(i.e., " IT'S OKAY "),
        # encode ch_or_w using bpe_model.
        else:
            for p in sp.encode_as_pieces(ch_or_w):
                tokens.append(p)

    return tokens


def tokenize(sample, sp,
             symbol_table,
             bpe_model=None,
             non_lang_syms=None,
             split_with_space=False):
    """ Decode text to chars or BPE
        Inplace operation

        Args:
            data: Iterable[{key, wav, txt, sample_rate}]

        Returns:
            Iterable[{key, wav, txt, tokens, label, sample_rate}]
    """
    if non_lang_syms is not None:
        non_lang_syms_pattern = re.compile(r"(\[[^\[\]]+\]|<[^<>]+>|{[^{}]+})")
    else:
        non_lang_syms = {}
        non_lang_syms_pattern = None

    if bpe_model is not None:
        sp.load(bpe_model)
    else:
        sp = None

    assert 'txt' in sample
    txt = sample['txt'].strip()
    if non_lang_syms_pattern is not None:
        parts = non_lang_syms_pattern.split(txt.upper())
        parts = [w for w in parts if len(w.strip()) > 0]
    else:
        parts = [txt]

    label = []
    tokens = []
    for part in parts:
        if part in non_lang_syms:
            tokens.append(part)
        else:
            if bpe_model is not None:
                tokens.extend(__tokenize_by_bpe_model(sp, part))
            else:
                if split_with_space:
                    part = part.split(" ")
                for ch in part:
                    if ch == ' ':
                        ch = "▁"
                    tokens.append(ch)
    for i in range(len(tokens)):
        ch = tokens[i]
        if ch not in symbol_table:
            tokens[i] = '<unk>'
        # elif '<unk>' in symbol_table:
        #     label.append(symbol_table['<unk>'])

    sample['tokens'] = tokens
    # sample['label'] = label
    return sample

def read_symbol_table(symbol_table_file):
    symbol_table = {}
    with open(symbol_table_file, 'r', encoding='utf8') as fin:
        for line in fin:
            arr = line.strip().split()
            assert len(arr) == 2
            symbol_table[arr[0]] = int(arr[1])
    return symbol_table

def main():
    symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt')
    import sentencepiece as spm
    sp = spm.SentencePieceProcessor()
    f = codecs.open('data_4000_add_we/test/text1', encoding="utf-8")
    sys.stdout = codecs.getwriter("utf-8")(
        sys.stdout if is_python2 else sys.stdout.buffer)
    line = f.readline()
    while line:
        data = {}
        data['key'] = line.strip().split()[0]
        data['txt'] = ''.join(line.strip().split()[1:])
        sample = tokenize(data, sp,
                          symbol_table,
                          bpe_model='data_4000_add_we/lang_char/train_unigram1000.model',
                          non_lang_syms=None,
                          split_with_space=False)
        print(sample['key'], ''.join(sample['tokens']))
        line = f.readline()

if __name__ == '__main__':
    main()

Later this was rewritten as token_fast.py:

from __future__ import print_function
from __future__ import unicode_literals

import argparse
import codecs
import re
import sys
is_python2 = sys.version_info[0] == 2

def __tokenize_by_bpe_model(sp, txt):
    tokens = []
    # CJK(China Japan Korea) unicode range is [U+4E00, U+9FFF], ref:
    # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
    pattern = re.compile(r'([\u4e00-\u9fff])')
    # Example:
    #   txt = "你好 ITS'S OKAY 的"
    #   chars = ["你", "好", " ITS'S OKAY ", "的"]
    chars = pattern.split(txt.upper())
    mix_chars = [w for w in chars if len(w.strip()) > 0]
    for ch_or_w in mix_chars:
        # ch_or_w is a single CJK charater(i.e., "你"), do nothing.
        if pattern.fullmatch(ch_or_w) is not None:
            tokens.append(ch_or_w)
        # ch_or_w contains non-CJK charaters(i.e., " IT'S OKAY "),
        # encode ch_or_w using bpe_model.
        else:
            for p in sp.encode_as_pieces(ch_or_w):
                tokens.append(p)

    return tokens


def tokenize(sample, sp,
             symbol_table):
    """ Decode text to chars or BPE
        Inplace operation

        Args:
            data: Iterable[{key, wav, txt, sample_rate}]

        Returns:
            Iterable[{key, wav, txt, tokens, label, sample_rate}]
    """
    txt = sample['txt'].strip()
    parts = [txt]
    tokens = []
    for part in parts:
        tokens.extend(__tokenize_by_bpe_model(sp, part))

    for i in range(len(tokens)):
        ch = tokens[i]
        if ch not in symbol_table:
            tokens[i] = '<unk>'
        # elif '<unk>' in symbol_table:
        #     label.append(symbol_table['<unk>'])

    sample['tokens'] = tokens
    # sample['label'] = label
    return sample

def read_symbol_table(symbol_table_file):
    symbol_table = {}
    with open(symbol_table_file, 'r', encoding='utf8') as fin:
        for line in fin:
            arr = line.strip().split()
            assert len(arr) == 2
            symbol_table[arr[0]] = int(arr[1])
    return symbol_table

def main():
    symbol_table = read_symbol_table('data_4000_add_we/dict_bpe/lang_char.txt')
    import sentencepiece as spm
    sp = spm.SentencePieceProcessor()
    sp.load('data_4000_add_we/lang_char/train_unigram1000.model')
    f = codecs.open('data_4000_add_we/test/text1', encoding="utf-8")
    sys.stdout = codecs.getwriter("utf-8")(
        sys.stdout if is_python2 else sys.stdout.buffer)
    line = f.readline()
    while line:
        if len(line.strip().split()) > 1:
            data = {}
            data['key'] = line.strip().split()[0]
            data['txt'] = ''.join(line.strip().split()[1:])
            sample = tokenize(data, sp,
                              symbol_table)
            print(sample['key'], ''.join(sample['tokens']))
            # print(sample['key'], sample['tokens'])
        line = f.readline()

if __name__ == '__main__':
    main()

Chinese character coverage

Requirement: coverage above 99.9%, so that the ceiling on the recognition rate is at least 99.9% and not unreasonably low. (A coverage-counting sketch follows.)
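A minimal sketch of how such coverage numbers can be computed from a tokenized output file (e.g. the output of token_fast.py with the tokens joined by spaces instead of ''.join); it counts tokens rather than characters, so it only approximates the character-level ratios listed below, and the file name is a placeholder:

total = 0
unk = 0
with open('i', encoding='utf-8') as f:   # placeholder: the tokenized output file
    for line in f:
        toks = line.split()[1:]          # drop the utterance id
        total += len(toks)
        unk += toks.count('<unk>')
if total:
    print(unk, 'unk /', total, 'tokens, coverage = %.4f%%' % (100.0 * (1 - unk / total)))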

  • Target: cover 99.9% of the training set (data_4000_add_we/train/text):

    • 7200 Chinese characters cover 99.99993% (203 uncovered / 290200677 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi7200_all8600)
    • 6000 Chinese characters cover 99.9976% (6873 / 290200677 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi6000_all7500)
    • 5200 Chinese characters cover 99.9919% (23469 / 290200677 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi5200_all6700)
    • 4700 Chinese characters cover 99.984% (45290 / 290200677 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi4700_all6000)
  • Target: cover 99.9% of the test set (data_4000_add_we/text_1.4w):

    • 7200 Chinese characters cover 100% (0 unk / 587421 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi7200_all8600)
    • 6000 Chinese characters cover 100% (0 unk / 587421 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi6000_all7500)
    • 5200 Chinese characters cover 100% (0 unk / 587421 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi5200_all6700)
    • 4700 Chinese characters cover 99.9938% (36 unk / 587421 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi4700_all6000)
  • Target: cover 99.9% of the test set (data_4000_add_we/text_chushibiao):

    • 7200 Chinese characters cover 100% (0 unk / 624 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi7200_all8600)
    • 6000 Chinese characters cover 100% (0 unk / 624 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi6000_all7500)
    • 5200 Chinese characters cover 100% (0 unk / 624 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi5200_all6700)
    • 4700 Chinese characters cover 100% (0 unk / 624 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi4700_all6000)
  • Target: cover 99.9% of the test set (data_4000_add_we/text): 7 GB of data

    • 7200 Chinese characters cover 99.9999% (1741 unk / 2117514045 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi7200_all8600)
    • 6000 Chinese characters cover 99.9999% (1741 unk / 2117514045 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi6000_all7500) (the 6000-character set added some characters drawn from this 7 GB test set)
    • 5200 Chinese characters cover 99.9975% (52398 unk / 2117514045 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi5200_all6700)
    • 4700 Chinese characters cover 99.9834% (350902 unk / 2117514045 characters) (data_4000_add_we/dict_bpe/lang_char.txt.bpe_1000_eng1400_chi4700_all6000)