# Bert Serving

Export a BERT model for serving.

Predicting with the Estimator API is slow, because every `Estimator.predict()` call rebuilds the graph and reloads the checkpoint; export the model once with `export_savedmodel` instead.
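For contrast, a minimal sketch of the slow pattern (`estimator`, `examples`, and `make_input_fn` are placeholders standing in for the repo's own objects):

    # Slow: every predict() call constructs a fresh graph and restores
    # the checkpoint before it yields a single prediction.
    for example in examples:
        prediction = next(estimator.predict(input_fn=make_input_fn(example)))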

## Create a virtual environment

    conda env create -f env.yml
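The repo's `env.yml` pins its own dependencies; purely as an illustration of the file's shape, a minimal conda environment file of this kind looks like the following (the package names and versions here are assumptions, not the repo's actual pins):

    name: bert_serving
    dependencies:
      - python=3.6
      - pip
      - pip:
          - tensorflow==1.12.0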

## Train a classifier

    bash train.sh
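`train.sh` presumably wraps BERT's standard `run_classifier.py` fine-tuning entry point; a typical invocation of that script (task name and paths below are illustrative) looks like:

    python run_classifier.py \
      --task_name=MRPC \
      --do_train=true \
      --do_eval=true \
      --data_dir=$GLUE_DIR/MRPC \
      --vocab_file=$BERT_BASE_DIR/vocab.txt \
      --bert_config_file=$BERT_BASE_DIR/bert_config.json \
      --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
      --max_seq_length=128 \
      --train_batch_size=32 \
      --learning_rate=2e-5 \
      --num_train_epochs=3.0 \
      --output_dir=/tmp/classifier_output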

## Use the classifier

    bash predict.sh
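`predict.sh` likely runs the same script in prediction mode, which reads `test.tsv` from the data dir and writes `test_results.tsv` to the output dir (paths again illustrative):

    python run_classifier.py \
      --task_name=MRPC \
      --do_predict=true \
      --data_dir=$GLUE_DIR/MRPC \
      --vocab_file=$BERT_BASE_DIR/vocab.txt \
      --bert_config_file=$BERT_BASE_DIR/bert_config.json \
      --init_checkpoint=/tmp/classifier_output \
      --max_seq_length=128 \
      --output_dir=/tmp/classifier_output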

## Export the BERT model

    bash export.sh
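`export_savedmodel` versions each export under a Unix-timestamp subdirectory of the export directory, so the output should have this shape (the timestamp is an example):

    $export_dir/
    └── 1578023456/
        ├── saved_model.pb
        └── variables/
            ├── variables.data-00000-of-00001
            └── variables.index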

## Inspect the exported model

    saved_model_cli show --all --dir $exported_dir
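`saved_model_cli` can also run the signature directly as a smoke test. A sketch with all-zero inputs, assuming `max_seq_length=128` and the default `serving_default` signature (`$exported_dir` must point at the timestamped version directory):

    saved_model_cli run --dir $exported_dir \
      --tag_set serve --signature_def serving_default \
      --input_exprs 'input_ids=np.zeros((1,128),dtype=np.int32);input_mask=np.zeros((1,128),dtype=np.int32);segment_ids=np.zeros((1,128),dtype=np.int32);label_ids=np.zeros((1),dtype=np.int32)'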

## Test the exported model

    bash test.sh
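The fast path the test relies on can be reproduced with `tf.contrib.predictor`, which loads the SavedModel once and reuses the same session for every request. A sketch with dummy zero-valued features; the `probabilities` output key assumes the standard `run_classifier` prediction dict, and the path is illustrative:

    import numpy as np
    from tensorflow.contrib import predictor

    max_seq_length = 128
    exported_dir = '/path/to/export_dir/1578023456'  # timestamped version dir

    # Load the SavedModel once; predict_fn keeps the session alive.
    predict_fn = predictor.from_saved_model(exported_dir)

    # Every subsequent call reuses the loaded graph -- this is the fast path.
    result = predict_fn({
        'label_ids': np.zeros((1,), dtype=np.int32),
        'input_ids': np.zeros((1, max_seq_length), dtype=np.int32),
        'input_mask': np.zeros((1, max_seq_length), dtype=np.int32),
        'segment_ids': np.zeros((1, max_seq_length), dtype=np.int32),
    })
    print(result['probabilities'])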

## Export it yourself

Define a serving input function over the raw feature placeholders:

    def serving_input_fn():
        # Placeholders matching the features the model_fn expects.
        label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
        input_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_ids')
        input_mask = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_mask')
        segment_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='segment_ids')
        # build_raw_serving_input_receiver_fn returns a function; calling it
        # yields the ServingInputReceiver that export_savedmodel needs.
        input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
            'label_ids': label_ids,
            'input_ids': input_ids,
            'input_mask': input_mask,
            'segment_ids': segment_ids,
        })()
        return input_fn

and then disable the TPU export path (BERT's `run_classifier` builds a `TPUEstimator`) before exporting:

    # _export_to_tpu is a private TPUEstimator attribute; setting it to
    # False makes export_savedmodel emit a CPU/GPU-servable graph.
    estimator._export_to_tpu = False
    estimator.export_savedmodel(FLAGS.export_dir, serving_input_fn)
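The exported directory can then be served directly, for example with TensorFlow Serving. The model name and path below are illustrative; note that `--model_base_path` must be the parent directory that holds the timestamped versions:

    tensorflow_model_server \
      --rest_api_port=8501 \
      --model_name=bert_classifier \
      --model_base_path=/absolute/path/to/export_dir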