Run A Demo

a simple getting-started example

Let's take the example below (Serving Prophet Model with Flask — Predicting Future (very simple), 2019.07).
Download the saved model and put it in the same folder as app.py, which is:

from flask import Flask, jsonify, request
from flask_cors import CORS, cross_origin

import pickle
with open('forecast_model.pckl', 'rb') as fin:
    m2 = pickle.load(fin)

app = Flask(__name__)
CORS(app)
@app.route("/katana-ml/api/v1.0/forecast/ironsteel", methods=['POST'])
def predict():
    horizon = int(request.json['horizon'])

    future2 = m2.make_future_dataframe(periods=horizon)
    forecast2 = m2.predict(future2)

    data = forecast2[['ds', 'yhat', 'yhat_lower', 'yhat_upper']][-horizon:]

    ret = data.to_json(orient='records', date_format='iso')

    return ret
# running REST interface, port=3000 for direct test
if __name__ == "__main__":
    app.run(debug=False, host='0.0.0.0', port=3000)

Then run python app.py to serve this model.

Now start Postman (or any REST test tool) and send this JSON content {"horizon":"10"} as the POST body.
(Make sure Body > raw > JSON is selected in Postman. Also, the Content-Type header should be set to application/json.)
Then 10 forecast steps are returned as predictions.
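Since the endpoint serializes with to_json(orient='records', date_format='iso'), the response is a JSON array of record objects that a client can parse with the standard json module. A minimal sketch (the sample payload below is illustrative, not real model output):

```python
import json

# Illustrative sample of the response shape produced by
# to_json(orient='records', date_format='iso'); the values are made up.
sample = (
    '[{"ds":"2019-08-01T00:00:00.000Z","yhat":123.4,'
    '"yhat_lower":110.0,"yhat_upper":136.8}]'
)

rows = json.loads(sample)
for row in rows:
    # each record carries the point forecast and its uncertainty interval
    print(row["ds"], row["yhat"], row["yhat_lower"], row["yhat_upper"])
```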

advanced usage with k8s (Kubernetes)

build the app into a Docker image:

Create config.py for HTTP server (gunicorn is used here):

from os import environ as env
import multiprocessing

PORT = int(env.get("PORT", 8080))
DEBUG_MODE = int(env.get("DEBUG_MODE", 1))

# Gunicorn config
bind = ":" + str(PORT)
workers = multiprocessing.cpu_count() * 2 + 1
threads = 2 * multiprocessing.cpu_count()

Test with: gunicorn app:app --config=config.py.
Note: if you are inside a pipenv environment, use pipenv run gunicorn app:app --config=config.py instead [ref].
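The worker and thread counts in config.py follow the common (2 × CPUs) + 1 rule of thumb for gunicorn. A quick sketch to preview what they resolve to on a given machine:

```python
import multiprocessing

cpus = multiprocessing.cpu_count()
workers = cpus * 2 + 1   # same formula as in config.py
threads = 2 * cpus

print(f"{cpus} CPUs -> {workers} gunicorn workers, {threads} threads each")
```

On a 4-core machine this yields 9 workers with 8 threads each; tune downward if the model's memory footprint makes that many copies too heavy.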

Create Dockerfile:

FROM python:3.6-jessie
RUN apt-get update
WORKDIR /app
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
ENV PORT 8080
CMD ["gunicorn", "app:app", "--config=config.py"]

Build by docker image build -t prophet-flask . and test by docker run -p 8080:8080 prophet-flask.

create a k8s deployment:

Then deploy it to k8s by creating and applying the following app.yaml file (many of the fields are optional):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  labels:
    name: flask-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: flask-app
  template:
    metadata:
      name: flask-app
      labels:
        name: flask-app
    spec:
      containers:
        - name: flask-app
          image: my_reg_username/prophet-flask:my_tag
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 256Mi
            limits:
              memory: 512Mi
          env:
            - name: DEBUG_MODE
              value: "1"

Apply it with kubectl apply -f app.yaml and check the pods with kubectl get po -o wide.

expose the deployment (turning it into a service):

[ref: k8s.io]

kubectl expose deployment/flask-app --port 80 --target-port 8080 --type=LoadBalancer
# where port, target-port, and type are optional

Check the service with kubectl get svc flask-app, see more details with kubectl describe svc flask-app, or list only its endpoints with kubectl get ep flask-app.
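To hit the exposed service end to end, POST to the LoadBalancer address on port 80. A sketch of building the request URL (the external IP below is a placeholder; substitute the EXTERNAL-IP reported by kubectl get svc flask-app):

```python
# Placeholder external IP; replace with the EXTERNAL-IP from
# `kubectl get svc flask-app`.
external_ip = "203.0.113.10"
url = f"http://{external_ip}:80/katana-ml/api/v1.0/forecast/ironsteel"

print(url)
# POST {"horizon": "10"} to this URL with any HTTP client, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d '{"horizon":"10"}' <url>
```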

separate/mounted settings

Example / Tutorial Bank

Other Tools

StreamLit (a Shiny alternative in Python): Chinese-language intro: "From Python code to an app, all you need is one small tool: over 3,000 stars on GitHub". (Github) (StreamLit.io)