This post describes an image classification model built with Convolutional Neural Networks, based on fastai/fastbook.
To get started, we downloaded 300 images via Bing Image Search and trained a fastai model that predicts the category of images supplied through an image upload function included in the model. We then deployed the model via GitHub, Binder, and Heroku to a live URL, making it possible to upload photos taken on mobile devices and get a live prediction of the image's category.
In our scenario, we built a model on 150 images of Persian cats and 150 images of Siamese cats, which reached an error rate of 0.082 in the fifth epoch:
Remarkably, the amount of data needed for this was very small: the pretrained model provided by fastai is what made such a low error rate possible.
The train loss is the loss on the training set in each epoch, and the valid loss is the loss on the validation set. While the two differ in every epoch, by the last epoch the training loss has decreased significantly, whereas the validation loss remains roughly at the same level as in epoch 0.
Accordingly, the confusion matrix below illustrates the relation between predictions and actual labels:
As can be seen above, the only notable error occurred for images predicted as Maine Coon cats that were actually Persian cats.
For Persian and Siamese cats, such a low error rate was achievable because the photos returned by Bing Image Search were mostly labelled correctly. In other scenarios with different images, the error rate was considerably higher, for example because incorrectly labelled photos did not serve well to train the model.
A bigger challenge was deploying the model via GitHub and Binder/Heroku, but the following procedure turned out to work:
- build the complete model in a Jupyter notebook
- export the trained model as an export.pkl file
- create a separate, much shorter notebook for deployment via Binder/Heroku that loads the complete model with only a few lines of code, and download it in .ipynb format
- upload the export.pkl file, the deployment .ipynb file, a requirements.txt file, and a Procfile to a GitHub repository
- Heroku: create a Heroku account, connect it to the GitHub repository, and deploy the model using Automatic Deployment
- works like a charm!
More questions? Feel free to contact us via the form below!
P.S. In cooperation with the websites restondo and allamenyer, which are restaurant guides for restaurants in Norway, the Netherlands, and Sweden, we will build a pattern detection model for the long restaurant texts on these websites, in order to find patterns and condense those texts into summaries.