language-model
Here are 834 public repositories matching this topic...
chooses 15% of tokens
The paper says:
Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my dog is hairy it chooses hairy.
This means that exactly 15% of the tokens will be chosen for sure. However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, every single token independently has a 15% chance of going through the follow-up masking procedure, so the number of selected tokens varies per sentence.
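The two readings give different selection behavior. A minimal sketch contrasting them, using a toy sentence (the function names and the fraction handling are illustrative, not from either codebase):

```python
import random

random.seed(0)
tokens = "my dog is hairy".split()

# Reading A (the paper's wording): choose exactly 15% of the tokens,
# so the number of selected positions is fixed for a given length.
def choose_fixed_fraction(tokens, frac=0.15):
    k = max(1, round(len(tokens) * frac))  # at least one token
    return set(random.sample(range(len(tokens)), k))

# Reading B (what dataset.py#L68 does): each token is independently
# selected with probability 0.15, so the count varies per sentence
# and can even be zero.
def choose_per_token(tokens, prob=0.15):
    return {i for i in range(len(tokens)) if random.random() < prob}
```

For a 4-token sentence, reading A always selects exactly one position, while reading B selects anywhere from zero to four.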
PositionalEmbedding
Proposed changes:
- A section that tells the reader which companies already use Haystack; this helps strengthen the horizontal relationships in the community
The inspiration is taken from the Apache Superset repository.
Status (please check what you already did):
- First draft (up for discussion & feedback)
Polyphonic characters (多音字) are currently handled with pypinyin or g2pM, whose accuracy is limited. The idea is to build a BERT-based (or ERNIE-based) polyphone prediction model. Briefly: suppose a language has 100 polyphonic characters, each with at most 3 pronunciations; then we can attach 100 separate 3-way classifiers (a simple fc layer each) on top of BERT. At prediction time, we look up the classifier corresponding to the character and classify with it.
Reference paper:
tencent_polyphone.pdf
Training data can be taken from https://github.com/kakaobrain/g2pM
Advanced: multi-task BERT
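The routing idea (one small head per polyphonic character, selected at inference time) can be sketched in plain Python. Everything here is illustrative: the hidden size is a toy value, the heads are random weight matrices standing in for trained fc layers, and the pronunciation inventory is a made-up two-character example.

```python
import random

HIDDEN = 8  # toy hidden size; real BERT uses 768

# illustrative pronunciation inventory for two polyphonic characters
PRONUNCIATIONS = {
    "行": ["xing2", "hang2"],
    "乐": ["le4", "yue4"],
}

# one small linear head (one weight row per pronunciation) per character;
# in the real model these would be trained fc layers after BERT
random.seed(0)
heads = {
    ch: [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in prons]
    for ch, prons in PRONUNCIATIONS.items()
}

def predict(ch, hidden_vec):
    """Route the token's BERT hidden state to the character's own head
    and return the highest-scoring pronunciation."""
    scores = [sum(w * h for w, h in zip(row, hidden_vec))
              for row in heads[ch]]
    best = max(range(len(scores)), key=scores.__getitem__)
    return PRONUNCIATIONS[ch][best]
```

Because each head only ever sees its own character, the 3-way (here 2-way) classification stays trivial even when the overall inventory has 100 characters; the multi-task variant would train all heads jointly on top of a shared encoder.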


As reported by some people (see NielsRogge/Transformers-Tutorials#53 and on the forum), the `generate()` method currently does not take into account `config.decoder.eos_token_id`, only `config.eos_token_id`, to properly stop generation. Hence, models that are made using `EncoderDecoderModel`/`VisionEnco
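A minimal sketch of the fallback the issue is asking for: prefer the decoder sub-config's `eos_token_id` when one exists, otherwise fall back to the top-level value. The helper name and the use of `SimpleNamespace` to stand in for the model config are assumptions for illustration, not the actual `transformers` implementation.

```python
from types import SimpleNamespace

def resolve_eos_token_id(config):
    """Return the eos_token_id generation should stop on, preferring
    the nested decoder config of an encoder-decoder model."""
    decoder = getattr(config, "decoder", None)
    if decoder is not None and getattr(decoder, "eos_token_id", None) is not None:
        return decoder.eos_token_id
    return getattr(config, "eos_token_id", None)

# toy stand-in for an EncoderDecoderModel config where only the
# decoder sub-config carries the eos token
cfg = SimpleNamespace(
    eos_token_id=None,
    decoder=SimpleNamespace(eos_token_id=2),
)
```

With the current behavior described in the issue, generation would see only the top-level `eos_token_id` (here `None`) and never stop on token 2; the helper above returns 2 instead.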