Metadata-Version: 2.1
Name: bert-embedding
Version: 0.1.1
Summary: BERT token level embedding with MXNet
Home-page: https://github.com/imgarylai/bert_embedding
Author: Gary Lai
Author-email: gary@gary-lai.com
License: ALv2
Download-URL: https://github.com/imgarylai/bert_embedding/tree/master
Keywords: bert nlp mxnet gluonnlp machine deep learning sentence encoding embedding
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Operating System :: MacOS
Description-Content-Type: text/markdown
Requires-Dist: mxnet (==1.3.0)
Provides-Extra: gpu
Requires-Dist: mxnet-cu92 (==1.3.0) ; extra == 'gpu'

# Bert Embeddings

[![Build Status](https://travis-ci.org/imgarylai/bert_embedding.svg?branch=master)](https://travis-ci.org/imgarylai/bert_embedding) [![PyPI version](https://badge.fury.io/py/bert-embedding.svg)](https://badge.fury.io/py/bert-embedding)

[BERT](https://arxiv.org/abs/1810.04805), published by [Google](https://github.com/google-research/bert), is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT to achieve state-of-the-art (SOTA) results.

The goal of this project is to obtain sentence and token embeddings from BERT's pre-trained model. This way, instead of building and fine-tuning an end-to-end NLP model, you can build your model on top of the sentence or token embeddings.

This project is implemented with [MXNet](https://github.com/apache/incubator-mxnet). Special thanks to the [gluon-nlp](https://github.com/dmlc/gluon-nlp) team.

## Install

```bash
pip install bert-embedding
pip install https://github.com/dmlc/gluon-nlp/tarball/master
# To run on a GPU machine, install `mxnet-cu92` instead of the CPU build of `mxnet`.
pip install mxnet-cu92
```
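The package also declares a `gpu` extra in its metadata, so `pip install "bert-embedding[gpu]"` should pull in `mxnet-cu92` as well.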

> This project uses APIs from gluonnlp==0.5.1, which hasn't been released yet. Once 0.5.1 is released, it will no longer be necessary to install gluonnlp from source.

## Usage

```python
from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
 Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
 As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. 
BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert = BertEmbedding()
result = bert.embedding(sentences)
```

`result` is a list with one entry per sentence; each entry is a tuple with three parts, unpacked in the sketch below:
- sentence embedding
- tokens
- token embeddings
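
For example, the tuples can be unpacked directly. A minimal sketch based on the structure above (the variable names are illustrative):

```python
# Each entry of `result` is (sentence embedding, tokens, token embeddings).
for sentence_embedding, tokens, token_embeddings in result:
    print(sentence_embedding.shape)   # one 768-dim vector per sentence
    print(len(tokens))                # number of word-piece tokens
    print(token_embeddings[0].shape)  # one 768-dim vector per token
```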

Below is the result from the demo code above:

```python
result[0][0]
# array([-0.835946  , -0.4605566 , -0.95620036, ..., -0.95608854,
#       -0.6258104 ,  0.7697007 ], dtype=float32)
result[0][0].shape
# (768,)
result[0][1]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(result[0][1])
# 18
len(result[0][2])
# 18
result[0][2][0]
# array([ 0.4805648 ,  0.18369392, -0.28554988, ..., -0.01961522,
#        1.0207764 , -0.67167974], dtype=float32)
result[0][2][0].shape
# (768,)
```
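
If you need a single fixed-size vector per sentence built from the token embeddings (e.g. as features for a downstream classifier), one common option is mean pooling. This is a sketch of that idea using numpy, not part of this package's API:

```python
import numpy as np

# Average the per-token vectors into one 768-dim vector per sentence
# (mean pooling over the token axis).
pooled = [np.mean(np.stack(token_embs), axis=0)
          for _, _, token_embs in result]
pooled[0].shape
# (768,)
```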

## Available pre-trained BERT models

| | book_corpus_wiki_en_uncased | book_corpus_wiki_en_cased | wiki_multilingual |
|---|---|---|---|
| bert_12_768_12 | ✓ | ✓ | ✓ |
| bert_24_1024_16 | x | ✓ | x |

Example of using the large pre-trained BERT model from Google:

```python
from bert_embedding import BertEmbedding

bert = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')
```
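
Following gluonnlp's naming convention, `bert_24_1024_16` is a 24-layer model with hidden size 1024 and 16 attention heads, so the returned sentence and token vectors should be 1024-dimensional rather than 768. A sketch, reusing `sentences` from the Usage section:

```python
result = bert.embedding(sentences)
result[0][2][0].shape
# expected: (1024,), the hidden size of the large model
```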

Source: [gluonnlp](http://gluon-nlp.mxnet.io/model_zoo/bert/index.html) 

