% This section should contain an introduction to the problem aims and objectives (0.5 page)
Currently, many classification tasks are still performed manually. These tasks could be done more effectively if there were tooling that allowed the easy creation of classification models without requiring knowledge of data analysis or of how machine learning models are created.
The aim of this project is to create a classification service that requires no user knowledge about machine learning, image classification or data analysis.
The system should allow the user to create a reasonably accurate model that can satisfy the user's needs.
The system should also allow the user to create expandable models: models where classes can be added after the model has been created.
In short, the project aims to create a platform where users can create different types of classification models without having any knowledge of image classification.
\subsection{Project Objectives}
This project's primary objectives are to:
\begin{itemize}
\item Create a platform where the users can create and manage their models.
\item Create a system to automatically create and train models.
\item Create a system to automatically expand and reduce models without fully retraining the models.
\item Create an API so that users can interact programmatically with the system.
\end{itemize}
This project's extended objectives are to:
\begin{itemize}
\item Create a system to automatically merge models to increase efficiency.
\item Create a system to distribute the load of training the models among multiple services.
\end{itemize}
This section reviews existing technologies in the market that perform image classification. It also reviews current image classification techniques, and which of them meet the requirements of the project. The review also analyses methods that are used to distribute the learning between various machines, and how to spread the load so that minimal reloading of the models is required when running them.
There are currently some existing software-as-a-service (SaaS) platforms that provide services similar to the ones this project will be providing.
%Amazon provides bespoque machine learning services that if were contacted would be able to provide image classification services. Amazon provides general machine learning services \cite{amazon-machine-learning}.
Amazon provides an image classification service called ``Rekognition'' \cite{amazon-rekognition}. This service provides multiple capabilities, including face recognition, celebrity recognition and object recognition. One of these capabilities, called Custom Labels \cite{amazon-rekognition-custom-labels}, provides the service most similar to the one this project is about. The Custom Labels service allows users to provide custom datasets and labels; using AutoML, the Rekognition service then generates a model that allows the users to classify images according to the generated model.
The models generated using Amazon's Rekognition do not provide a way to update the set of labels that was originally created without starting a new project, which involves retraining a large part of the model and therefore long downtime before new classes can be added. Training a model can also take from 30 minutes to 24 hours \cite{amazon-rekognition-custom-labels-training}, which could result in up to 24 hours of lag between the need for a new label arising and being able to classify that label. A problem also arises when the user needs to add more than one label at the same time. For example, the user sees the need for a new label and starts training a new model, but while that model is training another label becomes necessary; the user must now either stop the training of the new model and retrain from scratch, or wait until the current run finishes and then train yet another model. If new classification classes are required frequently, this might not be the best platform to choose.
Similarly, Google has the ``Cloud Vision API'' \cite{google-vision-api}, which provides services similar to Amazon's Rekognition. However, Google's Vision API appears to be targeted more at videos than at images, as indicated by their price sheet \cite{google-vision-price-sheet}. They have tag and product identifiers, where every image has only one tag or product. The product identifier system seems to work differently from Amazon's Rekognition: it is based on k-nearest neighbours, giving the user similar products rather than classification labels \cite{google-vision-product-recognizer-guide}.
This method is more effective at allowing users to add new types of products, but as it does not give defined classes as the output, the system does not provide the target functionality that this project is hoping to achieve.
\subsection{Requirements of the Image Classification Models}
One of the main objectives of this project is to be able to create models that can assign a class to an image for any dataset, which means that there will be no ``one solution fits all'' to the problem. While the most complex way to solve a problem would most likely result in success, it might not be the most efficient way to solve it.
This section analyses possible models that would obtain the best results. The models for this project have to be as efficient as possible while achieving the best possible accuracy.
A classical example is the MNIST dataset \cite{mnist}. Models for the classification of the MNIST dataset can be either very simple or extremely complex, and achieve different levels of accuracy.
For example, in \cite{mist-high-accuracy} an accuracy of $99.91\%$ was achieved by combining 3 Convolutional Neural Networks (CNNs) with different kernel sizes, tuning hyperparameters and augmenting the data, while in \cite{lecun-98} an accuracy of $95\%$ was achieved using a 2-layer neural network with 300 hidden nodes. Both these models achieve the accuracy that is required for this project, but \cite{mist-high-accuracy} is far more expensive to run. Therefore, when deciding which model to create, the system should choose the model that can achieve the required accuracy while taking the least amount of effort to train.
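The efficiency gap between the two kinds of model can be seen from parameter counts alone. As a rough sketch (the CNN layer sizes below are illustrative assumptions, not taken from either cited paper):

```python
# Rough parameter-count comparison between a 2-layer fully connected
# network with 300 hidden nodes and a small CNN. The CNN layer sizes
# are illustrative assumptions only.

def dense_params(n_in, n_out):
    # one weight per input-output pair, plus one bias per output node
    return n_in * n_out + n_out

# 28x28 MNIST image flattened to 784 inputs, 300 hidden nodes, 10 classes
mlp = dense_params(784, 300) + dense_params(300, 10)

def conv_params(k, c_in, c_out):
    # k x k kernels over c_in channels, plus one bias per output channel
    return k * k * c_in * c_out + c_out

# an illustrative small CNN: two 3x3 conv layers, then a dense classifier
# operating on a 7x7x64 feature map
cnn = (conv_params(3, 1, 32)
       + conv_params(3, 32, 64)
       + dense_params(7 * 7 * 64, 128)
       + dense_params(128, 10))

print(mlp)  # 238510
print(cnn)  # 421642 -- and each conv weight is applied at every position
```

Raw parameter counts understate the gap: every convolution kernel is applied at every spatial position, so the CNN's training cost per image grows much faster than its parameter count suggests.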
For this system to work as intended, the models should be as small as possible while obtaining the accuracy required to classify the given classes.
\subsection{Method of image classification models}
There are multiple ways of achieving image classification. The requirement of the system is that it should return the class that an image belongs to, which means that supervised classification methods will be used, as these are the ones that meet the requirements of the system.
% TODO find some papers to proff this
The system will use supervised models to classify images, using a combination of different types of models: neural networks, convolutional neural networks, deep neural networks and deep convolutional neural networks.
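The building block shared by all the convolutional variants above is the convolution layer. As a minimal illustration (a naive, unoptimised ``valid''-mode implementation, not how frameworks actually compute it):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output shrinks by kernel size - 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # slide the kernel over the image and take a weighted sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a 3x3 vertical-edge kernel applied to a 5x5 "image"
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
print(conv2d(image, kernel).shape)  # (3, 3)
```

The same small set of kernel weights is reused at every image position, which is why CNNs need far fewer parameters than fully connected networks for image inputs.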
These types were chosen as they have had large success in other image classification challenges, for example in the ImageNet challenge \cite{imagenet}, which ranked various models on classifying 14 million images and ran from 2010 to 2017.
The models that participated in the contest tended to use more and more deep convolutional neural networks. Out of the various models that were generated, a few landmark models were able to achieve high accuracies, including AlexNet \cite{krizhevsky2012imagenet}, VGG, ResNet-152 \cite{resnet-152} and EfficientNet \cite{efficientnet}.
These models can be used in two ways in the system: to generate new models via transfer learning, or as a structural basis from which to generate a completely new model.
The models that I will be creating will be Convolutional Neural Networks (CNNs) \cite{lecun1989handwritten,fukushima1980neocognitron}.
The system will be creating two types of models: models that cannot be expanded and models that can be expanded. For the models that can be expanded, see the section about expandable models.
The models that cannot be expanded will use simple convolution blocks, with a structure similar to AlexNet's \cite{krizhevsky2012imagenet}, as the basis for the model. The size of the model will be controlled by the size of the input image, where bigger images will generate deeper and more complex models.
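The mapping from input size to depth could be sketched as follows. The specific rule (depth bounded by how often the input can be halved, clamped to a minimum and maximum) is an illustrative assumption, not a fixed design decision:

```python
import math

def num_conv_blocks(image_side, min_blocks=2, max_blocks=8):
    """Pick a number of AlexNet-style convolution blocks from the input size.

    Assumption: each block halves the spatial resolution, so depth is
    bounded by how many times the input can be halved before it collapses.
    """
    halvings = int(math.log2(image_side))
    # keep a couple of halvings in reserve so the final feature map
    # is larger than 1x1, and clamp to a sensible range
    return max(min_blocks, min(max_blocks, halvings - 2))

print(num_conv_blocks(32))   # 3 -> small images, shallow model
print(num_conv_blocks(256))  # 6 -> larger images, deeper model
```

The clamp matters in practice: without a maximum, very large inputs would produce models far more expensive than the accuracy requirement justifies.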
The models will be created using TensorFlow \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras}. These technologies were chosen since they are both robust and used in industry.
\subsection{Expandable Models}
The currently most used approach for expanding a CNN model is to retrain it. This is done either by recreating an entirely new model that performs the new task, using the older model as a base for the new one \cite{amazon-rekognition}, or by using a pretrained model as a base and training only the last few layers.
There are also unsupervised learning methods that do not have a fixed number of classes. While such a method would work as an expandable model method, it would not work for the purpose of this project: the project requires the model to have a specific set of labels, which is incompatible with unsupervised learning on unlabelled data. Some techniques used in unsupervised learning might nevertheless be useful in the process of creating expandable models.
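One way to expand a model without full retraining is to keep a fixed feature extractor and attach one small one-vs-rest head per class; adding a class then only means training one new head while existing heads stay untouched. The sketch below uses a random projection as a stand-in for a frozen pretrained CNN, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "feature extractor": a random projection standing in for a
# frozen pretrained CNN that maps an input to a 16-dim feature vector.
W_feat = rng.normal(size=(64, 16))

def features(x):
    return np.tanh(x @ W_feat)

class ClassHead:
    """A tiny one-vs-rest logistic head; one of these exists per class."""
    def __init__(self, dim=16):
        self.w = np.zeros(dim)

    def fit(self, feats, labels, lr=0.1, steps=200):
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-feats @ self.w))      # sigmoid
            self.w -= lr * feats.T @ (p - labels) / len(labels)

    def score(self, feats):
        return feats @ self.w

heads = {}

def add_class(name, pos_x, neg_x):
    # Expanding the model: train ONE new head, leave the others untouched.
    feats = features(np.vstack([pos_x, neg_x]))
    labels = np.r_[np.ones(len(pos_x)), np.zeros(len(neg_x))]
    head = ClassHead()
    head.fit(feats, labels)
    heads[name] = head

def classify(x):
    f = features(x[None, :])
    return max(heads, key=lambda name: heads[name].score(f)[0])

# toy data: two well-separated "image" classes as 64-dim vectors
pos_a = rng.normal(loc=3.0, size=(20, 64))
pos_b = rng.normal(loc=-3.0, size=(20, 64))
add_class("a", pos_a, pos_b)   # initial model with one class...
add_class("b", pos_b, pos_a)   # ...expanded later without retraining "a"
print(classify(pos_a[0]))
```

Reducing the model is equally cheap under this scheme: removing a class just deletes its head.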
The system is designed with a semi-monolithic approach. The management of the data and the generation of the models will be done in the monolith, while the training and running of the models will be done on dedicated GPU nodes.
The overall workflow of a user who wants a model created would be:
\begin{itemize}
\item{The user requests the server to create a model with some base images and classes.}
\item{The system creates the model.}
\item{The user requests the classification or confirmation of an image.}
\end{itemize}
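The workflow above could map onto a small HTTP API. The endpoint paths and payload fields below are hypothetical, chosen purely to make the interaction concrete; a stub transport is used so the sketch runs without a server:

```python
# Hypothetical client-side view of the workflow. Endpoint names and
# payload fields are illustrative assumptions, not a fixed API design.

def create_model(post, images, classes):
    # Step 1: the user submits base images and class names.
    return post("/models", {"images": images, "classes": classes})["model_id"]

def classify_image(post, model_id, image):
    # Step 3: the user asks an existing model to classify an image.
    return post(f"/models/{model_id}/classify", {"image": image})["class"]

# stub transport standing in for a real HTTP client
def fake_post(path, payload):
    if path == "/models":
        return {"model_id": "m-1"}   # step 2 happens server-side
    return {"class": "cat"}

mid = create_model(fake_post, images=["img1", "img2"], classes=["cat", "dog"])
print(classify_image(fake_post, mid, "img3"))  # cat
```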
The system requires the generation of models. Generating all models from one single base model would decrease the complexity of the system, but it would not guarantee success. Instead, two approaches will be combined: a database search and AutoML.
The database search will consist of trying previous models that are known to work on similar inputs, either models that were previously generated by the system or known good models, as well as known base architectures that are modified to match the size of the input images.
An example of the first approach would be to try the ResNet model as-is, while the second approach would be to use the architecture of ResNet and configure it so that it is better optimised for the input images.
The AutoML approach would consist of using an AutoML system to generate new models that match the task at hand.
Since the AutoML approach would be more computationally intensive, it would be less desirable to run. Therefore, the database search happens first, where known, possibly good models are tested. If a good model is found, the search stops; if no model is found, the system resorts to AutoML to find a suitable model.
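The two-phase selection strategy can be sketched as follows; `candidates`, `train_and_score` and `automl_search` are illustrative stand-ins for the real system components:

```python
# Sketch of the model-selection strategy: try cheap, known-good candidate
# architectures first, and only fall back to the expensive AutoML search
# when none of them reaches the target accuracy.

def select_model(dataset, target_accuracy, candidates, train_and_score,
                 automl_search):
    # Phase 1: database search over previously known architectures.
    for architecture in candidates:
        accuracy = train_and_score(architecture, dataset)
        if accuracy >= target_accuracy:
            return architecture          # good enough: stop searching early
    # Phase 2: no known model sufficed, resort to the costly AutoML search.
    return automl_search(dataset, target_accuracy)

# toy usage with stubbed training scores
scores = {"resnet-ish": 0.80, "alexnet-ish": 0.93}
best = select_model(
    dataset=None,
    target_accuracy=0.90,
    candidates=["resnet-ish", "alexnet-ish"],
    train_and_score=lambda arch, _ds: scores[arch],
    automl_search=lambda _ds, _target: "automl-model",
)
print(best)  # alexnet-ish
```

The early return in phase 1 is what keeps the common case cheap: AutoML only runs when the database search has been exhausted.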
% The user will interact with the platform form via a web portal. % why the web portal
% The web platform will be designed using HTML and a JavaScript library called HTMX \cite{htmx} for the reactivity that the pagers requires.
% The web server that will act as controller will be implemented using go \cite{go}, due to its ease of use.
% Go was chosen has the programming language used in the server due to its performance, i.e. \cite{node-to-go}, and ease of implementation. As compiled language go, outperforms other server technologies such as Node.js.
% Go also has easy support for C ABI, which might be needed if there is a need to interact with other tools that are implemented using C.
% The web server will also interact with python to create models. Then to run the models, it will use the libraries that are available to run TensorFlow \cite{tensorflow2015-whitepaper} models for that in go.
% \subsection{Creating Models}
% The models will be created using TensorFlow \cite{tensorflow2015-whitepaper}.
% TensorFlow was chosen because, when using frameworks like Keras \cite{chollet2015keras}, it allows the easy development of machine learning models with little code. While tools like PyTorch might provide more advanced control options for the model, like dynamic graphs, it comes at the cost of more complex python code. Since that code is generated by the go code, the more python that needs to be written, the more complex the overall program gets, which is not desirable.
% The original plan was to use go and TensorFlow, but the go library was lacking that ability. Therefore, I chose to use python to create the models.
% The go server starts a new process, running python, that creates and trains the TensorFlow model. Once the training is done, the model is saved to disk which then can be loaded by the go TensorFlow library.
% \subsection{Expandable Models}
% The approach would be based on multiple models. The first model is a large model that will work as a feature traction model, the results of this model are then given to other smaller models. These model's purpose is to classify the results of the feature extraction model into classes.
% The first model would either be an already existent pretrained model or a model that is automatically created by the platform.
% The smaller models would all be all generated by the platform, this model's purpose would be actually classification.
% This approach would offer a lot of expandability, as it makes the addition of a new class as easy as creating a new small model.