% This section should contain an introduction to the problem aims and obectives (0.5 page)
This project aims to design and create a new software as a service (SaaS) platform where users with no experience in machine learning or data analysis can create machine learning models to process their data.
To be easy to use, the platform needs to handle: image uploads, processing, and verification; model creation, management, and expansion; and image classification.
This report begins with a brief analysis of current image classification systems, followed by an overview of the design of the system and its implementation details. The report finishes with an analysis of legal, ethical, and societal issues, and an evaluation of the results against the objectives.
\subsection{Project Motivations}
Currently, there are many classification tasks that are being done manually.
Thousands of person-hours are spent classifying images, a task that can be automated.
There are few easy-to-use image classification systems that require little to no knowledge of image classification.
This project aims to fill that gap and provide an easy-to-use system that anyone without knowledge of image classification could use.
% These tasks could be done more effectively if there was tooling that would allow the easy creation of classification models, without the knowledge of data analysis and machine learning models creation.
% The aim of this project is to create a classification service that requires zero user knowledge about machine learning, image classification or data analysis.
% The system should allow the user to create a reasonable accurate model that can satisfy the users' need.
% The system should also allow the user to create expandable models; models where classes can be added after the model has been created. % hyperparameters, augmenting the data.
The project aims to create an easy-to-use platform where users can create different types of classification models without having any knowledge of image classification. The objectives of the project are:
\begin{itemize}
\item A user can upload images, train a model on those images, and evaluate images using the web interface.
\item A user can perform the same tasks via the API service.
\end{itemize}
\subsection{Project Structure}
This report presents the design and development stages of the project, with each section addressing a part of the design and development process.
\hyperref[sec:introduction]{Introduction}& The introduction section gives a brief overview of the project and its objectives. \\
\hyperref[sec:lit-tech-review]{Literature and Technical Review}& The Literature and Technical Review section introduces existing projects that are similar to this one, and technologies that can be used to implement this project. \\
\hyperref[sec:sanr]{Service Analysis and Requirements}& This section analyses the project requirements and defines the design requirements that the service needs to meet to achieve the goals that were set. \\
\hyperref[sec:sd]{Service Design}& This section discusses how a service could be designed so that it matches the requirements. \\
\hyperref[sec:si]{Service Implementation}& This section describes how the design of the system was turned into software. \\
\hyperref[sec:lsec]{Legal, Societal, and Ethical Considerations}& This section covers potential legal, societal, and ethical issues that might arise from the service and how they are mitigated. \\
\hyperref[sec:crpo]{Critical Review of Project Outcomes}& In this section, the project goals are compared with what was achieved. Based on the results, the project is deemed either successful or not. \\
This section reviews existing technologies on the market that perform image classification, as well as current image classification techniques that meet the requirements of the project. It also analyses methods used to distribute learning between multiple physical machines, and how to spread the load so that minimal reloading of models is required when running them.
There are currently some existing software as a service (SaaS) platforms that provide services similar to the ones this project will be providing.
%Amazon provides bespoque machine learning services that if were contacted would be able to provide image classification services. Amazon provides general machine learning services \cite{amazon-machine-learning}.
Amazon provides an image classification service called ``Rekognition'' \cite{amazon-rekognition}. This service offers multiple features, including face recognition, celebrity recognition, and object recognition. One of these features, Custom Labels \cite{amazon-rekognition-custom-labels}, provides the service most similar to the one this project is about. The Custom Labels feature allows users to provide custom datasets and labels, and, using AutoML, the Rekognition service generates a model that allows the users to classify images according to the generated model.
The models generated using Amazon's Rekognition do not provide a way to update the number of labels without generating a new project. This involves retraining a large part of the model, which means significant downtime before new classes can be added. Training a model can also take from 30 minutes to 24 hours \cite{amazon-rekognition-custom-labels-training}, which could result in up to 24 hours of lag between the need to create a new label and being able to classify that label. A problem also arises when the user needs to add more than one label at the same time. For example, the user sees the need to create a new label and starts training a new model, but while the model is training another new label is also needed. The user now either stops the training of the new model and trains another, or waits until the current training finishes and then trains a new one. If new classification classes are required frequently, this might not be the best platform to choose.
Similarly, Google has the ``Cloud Vision API'' \cite{google-vision-api}, which provides services similar to Amazon's Rekognition. However, Google's Vision API appears to be more targeted at videos than images, as indicated by its price sheet \cite{google-vision-price-sheet}. It has tag and product identifiers, where every image has only one tag or product. The product identifier system seems to work differently from Amazon's Rekognition: it works based on K-nearest neighbours, giving the user similar products rather than classification labels \cite{google-vision-product-recognizer-guide}.
This method is more effective at allowing users to add new types of products, but as it does not give defined classes as the output, the system does not provide the target functionality that this project is aiming to achieve.
One of the main objectives of this project is to be able to create models that can assign a class to an image for any dataset, which means that there is no ``one solution fits all'' to the problem. While the most complex way to solve a problem would most likely result in success, it might not be the most efficient way to achieve the results.
This section analyses possible models that could obtain the best results. The models for this project have to be as efficient as possible while achieving the best accuracy possible.
A classical example is the MNIST dataset \cite{mnist}. Models for the classification of the MNIST dataset can be either simple or extremely complex, and achieve different levels of accuracy.
For example, in \cite{mist-high-accuracy} an accuracy of $99.91\%$ was achieved by combining 3 Convolutional Neural Networks (CNNs) with different kernel sizes, tuning hyperparameters, and augmenting the data, while in \cite{lecun-98} an accuracy of $95\%$ was achieved using a 2-layer neural network with 300 hidden nodes. Both of these models achieve the accuracy that is required for this project, but the models in \cite{mist-high-accuracy} are more computationally intensive to run. When deciding which model to create, the system should choose the one that can achieve the required accuracy while taking the least amount of effort to train.
For this system to work as intended, the models should be as small as possible while obtaining the accuracy required to classify the target classes.
As the service might receive many requests, it needs to be able to handle as many requests as possible. This requires the models to be cheap to run, and smaller models are cheaper to run; therefore, the system requires a balance between size and accuracy.
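As an illustration of the kind of small model the service might favour, the following is a minimal Keras sketch of a compact convolutional classifier for MNIST-sized inputs; the exact layer sizes are assumptions for illustration rather than the architecture used by the service.
\begin{verbatim}
from tensorflow import keras

def small_cnn(input_shape=(28, 28, 1), num_classes=10):
    # A deliberately small model: two convolution blocks and one dense
    # layer, trading a little accuracy for cheap training and inference.
    return keras.Sequential([
        keras.layers.Input(shape=input_shape),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = small_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}
A model of this size can typically be trained and served cheaply, which is exactly the trade-off between size and accuracy discussed above.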
There are multiple ways of achieving image classification. The requirement of the system is that it should return the class that an image belongs to, which means that supervised classification methods will be used, as these are the ones that meet the requirements of the system.
The system will use supervised models to classify images, using a combination of different types of models: neural networks, convolutional neural networks, deep neural networks, and deep convolutional neural networks.
These types were chosen as they have had great success in past image classification challenges, for example the ImageNet challenges \cite{imagenet}, which ranked different models on classifying a dataset of 14 million images. The contest ran from 2010 to 2017.
The models that participated in the contest tended to use more and more deep convolutional neural networks. Out of the various models that were produced, there are a few landmark models that were able to achieve high accuracies, including AlexNet \cite{krizhevsky2012imagenet}, ResNet-152 \cite{resnet-152}, and EfficientNet \cite{efficientnet}.
These models can be used in two ways in the system: to generate models via transfer learning, or to use their structure as a basis for generating a completely new model.
AlexNet \cite{krizhevsky2012imagenet} is a deep convolutional neural network that participated in the ImageNet ILSVRC-2010 contest, where it achieved a top-1 error rate of $37.5\%$ and a top-5 error rate of $17.0\%$. A variant of this model participated in the ImageNet ILSVRC-2012 contest and achieved a top-5 error rate of $15.3\%$. The architecture of AlexNet consists of 5 convolutional layers, some followed by max pooling, and then 3 dense layers. Training was done using multiple GPUs: each GPU would run part of each layer, and some layers are connected between GPUs. During training, the model also used techniques such as label-preserving data augmentation and dropout.
While using AlexNet would probably yield the desired results, it would complicate other parts of the service. As a platform as a service, the system needs to manage the resources available to it, and requiring 2 GPUs to train a single model would reduce the resources available to the system two-fold.
ResNet \cite{resnet} is a deep convolutional neural network that participated in the ImageNet ILSVRC-2015 contest, where it achieved a top-1 error rate of $21.43\%$ and a top-5 error rate of $5.71\%$. ResNet was created to solve the problem of degradation of training accuracy when using deeper models. Around the release of the ResNet paper, there was evidence that deeper networks result in higher accuracy \cite{going-deeper-with-convolutions, very-deep-convolution-networks-for-large-scale-image-recognition}, but increasing the depth of the network resulted in training accuracy degradation.
ResNet works by creating shortcuts between sets of layers; the shortcuts allow residual values from previous layers to be used in the later layers. The hypothesis is that it is easier to optimize the residual mappings than the original mappings.
The results of the challenge showed that using the residual values improved the training of the model.
It is important to note that residual networks tend to give better results the more layers the model has. While this could have a negative impact on performance, the number of parameters per layer does not grow as steeply in ResNet as in other architectures, as it uses optimizations such as $1\times1$ kernels, which are more space efficient. Even with these optimizations, it can still achieve impressive results, which makes it a good contender to be used in the service as one of the predefined architectures for creating machine learning models.
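To make the shortcut idea concrete, the following is a minimal sketch of a residual block using the Keras functional API; the filter counts and block arrangement are illustrative assumptions rather than the layout of any published ResNet variant.
\begin{verbatim}
from tensorflow import keras

def residual_block(x, filters):
    # Main path: two 3x3 convolutions.
    shortcut = x
    y = keras.layers.Conv2D(filters, 3, padding="same",
                            activation="relu")(x)
    y = keras.layers.Conv2D(filters, 3, padding="same")(y)
    # Project the shortcut with a 1x1 convolution only when the channel
    # count changes, keeping the extra parameter cost small.
    if shortcut.shape[-1] != filters:
        shortcut = keras.layers.Conv2D(filters, 1, padding="same")(shortcut)
    # The residual connection: add the shortcut back before the activation.
    y = keras.layers.Add()([y, shortcut])
    return keras.layers.Activation("relu")(y)

inputs = keras.Input(shape=(64, 64, 3))
x = residual_block(inputs, 32)
x = residual_block(x, 64)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)
\end{verbatim}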
EfficientNet \cite{efficient-net} is a deep convolutional neural network that was able to achieve $84.3\%$ top-1 accuracy while being ``$8.4x$ smaller and $6.1x$ faster on inference than the best existing ConvNet''. EfficientNets\footnote{The family of models that use the techniques described in \cite{efficient-net}.} are models that, instead of increasing only the depth or the width of the model, scale depth, width, and image resolution at the same time by a constant factor. By not scaling only depth, EfficientNets can acquire more information from the images, especially as the image size is taken into account.
To test their results, the EfficientNet team created a baseline model which used the mobile inverted bottleneck MBConv \cite{inverted-bottleneck-mobilenet} as its building block. The baseline model was then scaled using the compound method, which resulted in better top-1 and top-5 accuracy.
While EfficientNets are smaller than their non-EfficientNet counterparts, they are more computationally intensive: a ResNet-50 scaled using the EfficientNet compound scaling method is $3\%$ more computationally intensive than a ResNet-50 scaled using only depth, while improving the top-1 accuracy by $0.7\%$.
As the models will be trained and run multiple times, decreasing the computational cost might be a better overall target for sustainability than being able to offer higher accuracies.
Even though scaling using the EfficientNet compound method might not always yield the best results, using some of the EfficientNets that were optimised by the team would be a sensible choice; for example, EfficientNet-B1 is both small and efficient while still obtaining $79.1\%$ top-1 accuracy on ImageNet, and realistically the datasets that this system will process will be smaller and more scope-specific than ImageNet.
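As an illustration of how such a pretrained network could be reused through transfer learning, the sketch below loads EfficientNet-B1 from Keras Applications with ImageNet weights, freezes it, and attaches a small classification head; the head size and training settings are assumptions for illustration only.
\begin{verbatim}
from tensorflow import keras

def transfer_model(num_classes, image_size=(240, 240)):
    # Pretrained EfficientNet-B1 backbone without its ImageNet head.
    base = keras.applications.EfficientNetB1(
        include_top=False, weights="imagenet",
        input_shape=image_size + (3,), pooling="avg")
    base.trainable = False  # feature extraction only; no fine-tuning here

    inputs = keras.Input(shape=image_size + (3,))
    features = base(inputs, training=False)
    outputs = keras.layers.Dense(num_classes, activation="softmax")(features)

    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}
Freezing the backbone keeps training cheap, which matches the sustainability argument above; fine-tuning deeper layers is possible but costs more compute.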
% The models that I will be creating will be Convolutional Neural Network(CNN) \cite{lecun1989handwritten,fukushima1980neocognitron}.
% The system will be creating two types of models that cannot be expanded and models that can be expanded. For the models that can be expanded, see the section about expandable models.
% The models that cannot be expanded will use a simple convolution blocks, with a similar structure as the AlexNet \cite{krizhevsky2012imagenet} ones, as the basis for the model. The size of the model will be controlled by the size of the input image, where bigger images will generate more deep and complex models.
% The models will be created using TensorFlow \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras}. These theologies are chosen since they are both robust and used in industry.
% The current most used approach for expanding a CNN model is to retrain the model. This is done by, recreating an entire new model that does the new task, using the older model as a base for the new model \cite{amazon-rekognition}, or using a pretrained model as a base and training the last few layers.
% There are also unsupervised learning methods that do not have a fixed number of classes. While this method would work as an expandable model method, it would not work for the purpose of this project. This project requires that the model has a specific set of labels which does not work with unsupervised learning which has unlabelled data. Some technics that are used for unsupervised learning might be useful in the process of creating expandable models.
The technical review of current systems reveals that systems exist that can perform image classification tasks, but they do not offer friendly ways to easily expand existing models.
The current methods for image classification seem to have reached a level of classification accuracy and efficiency that makes a project like this feasible.
Understanding the project that is being built is critical in the software development process. This section will look into the parts required for the project to work.
The service should be able to respond to any load that is given to it. This will require the ability to scale depending on the number of requests that the service is receiving.
Therefore, the service requires some level of distribution across multiple machines.
It would be unwise to perform machine learning training on the same machine that the main web server is running on, as it would starve that server of resources.
As the service has more than one resource to manage, it should be able to track what resources it has available and distribute the load accordingly.
The user of the application should be able to interact with the platform using a graphical user interface (GUI).
There are multiple possible ways for the user to interact with the service, such as web, mobile, or desktop applications.
A web application is the most reasonable solution for this service.
The main way to interact with this service will be via an API; the API that the system provides will be an HTTPS API (see \ref{sec:anal-api}). Since the service already has a web-oriented API, it makes the most sense for the GUI to be web based as well.
The user should be able to access the web app and use it to:
\begin{itemize}
\item{Configure model}
\item{Manage datasets}
\item{Configure API tokens}
\item{See API usage}
%TODO write more
\end{itemize}
For administrative purposes, the web application should also allow the management of the compute resources available to the system.
\subsection{API}\label{sec:anal-api}
As this is a software as a service platform, the users of the platform will mainly interact with it via the API.
The user would set up the machine learning model using the web interface and then configure their application to use a token to interact securely with the API.
Multiple architectural styles exist for APIs; a REST API is the appropriate architectural style as it is the most common \cite{json-api-usage-stats}, allowing for the most compatibility with other services.
The API should allow users to access the most used features of the app, such as uploading images, training models, and requesting image classifications.
This separation of compute resources is required because machine learning is compute and memory intensive.
Running these resource-intensive operations on the same server that runs the main API could cause increased latency or downtime in the API, which would not be ideal.
The service should be able to decide where to distribute tasks.
The tasks should be distributed according to the resources that the task needs.
The tasks need to be submitted to servers in an organized manner.
Repeated tasks should be sent to the same server to optimize the usage of resources, as this improves the efficiency of the service by preventing, for example, repeated reloading of data.
For example, a training workload should be sent to a server that has more GPU resources available, while slower GPU servers can run the models for prediction.
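A minimal sketch of how such resource-aware task routing might look is shown below; the Runner and Task structures and the scoring rules are assumptions for illustration, not the scheduler actually used by the service.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Runner:
    name: str
    gpu_memory_gb: int   # free GPU memory on the runner
    running_tasks: int

@dataclass
class Task:
    kind: str            # "train" or "classify"
    model_id: str

def pick_runner(task: Task, runners: list) -> Runner:
    # Training prefers the runner with the most free GPU memory;
    # classification prefers the least loaded runner, so training
    # capacity is kept free for heavier jobs.
    if task.kind == "train":
        return max(runners, key=lambda r: r.gpu_memory_gb)
    return min(runners, key=lambda r: r.running_tasks)

runners = [Runner("gpu-large", 24, 1), Runner("gpu-small", 8, 0)]
print(pick_runner(Task("train", "m1"), runners).name)     # gpu-large
print(pick_runner(Task("classify", "m1"), runners).name)  # gpu-small
\end{verbatim}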
The service should also keep track of the storage space available to it.
The service must decide which of the images it manages to keep and which ones to delete.
It should also keep track of images held by other services, control access to them, and guarantee that the server closest to the resources has priority on tasks related to those resources.
\subsection{Data Management}
The service needs to manage various kinds of data.
The first kind of data the service needs to manage is user data.
This is data that identifies a user and allows the user to authenticate with the service.
A future version of this service could possibly also store payment information.
This information would be used to charge for the usage of the service, although this is outside the scope of this project.
The second kind of data that has to be managed is the user images.
These images could be either uploaded to the service, or stored on the users' devices.
The service should manage access to remote images, and keep track of local images.
The last kind of data that the service has to keep track of are model definitions and model weights.
These can be sizable files, which makes it important for the system to distribute them precisely, allowing the files to be closer to the servers that need them the most.
This section shows that there are requirements that need to be met for the system to work as intended. These requirements range from usability requirements to system-level resource management requirements.
The service needs to be easy to use by the user, while being able to handle loads from both the website and API requests.
The service requires the ability to scale up to the load it is given, and to keep track of and manage the resources that the user or the service itself creates.
It also needs to keep track of the computational resources that are available to it, so that it does not cause deadlocks, for example by using all of its GPU resources to train a model while there are classification tasks to be done.
The next section will go through the process of implementing an application that covers a subset of these design requirements, with some limitations that will be explained.
The design proposed in this section can be viewed as a scoped version of this project, and the \hyperref[sec:si]{Service Implementation} section will discuss how the scope was limited so that the service would achieve the primary goals of the project while following the design.
The presentation layer can either be implemented as a web page served directly by the server that is running the API, or as a separate web app that uses the API to interface with the server directly.
The API should be consistent and easy to use, and information on how to use the API should be available to potential users.
As mentioned in \ref{sec:anal-api}, most services use REST JSON APIs to communicate with each other. Therefore, to make this service as compatible as possible with other services, it should also implement a REST JSON API.
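To illustrate the intended style of interaction, the following client-side sketch uploads an image and requests a classification; the endpoint paths, field names, and authentication header are hypothetical placeholders rather than the service's actual API.
\begin{verbatim}
import requests

API = "https://example.com/api"          # hypothetical base URL
HEADERS = {"token": "user-api-token"}    # hypothetical auth header

# Upload an image to an existing class of a model.
with open("cat.jpg", "rb") as f:
    r = requests.post(f"{API}/models/my-model/classes/cat/images",
                      headers=HEADERS, files={"image": f})
    r.raise_for_status()

# Ask the trained model to classify a new image.
with open("unknown.jpg", "rb") as f:
    r = requests.post(f"{API}/models/my-model/classify",
                      headers=HEADERS, files={"image": f})
    print(r.json())   # e.g. {"label": "cat", "confidence": 0.93}
\end{verbatim}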
\subsection{Generation of Models}
The service should use any means available to generate models; such means can include:
\begin{multicols}{2}
\begin{itemize}
\item Templating.
\item Transfer Learning.
\item Database Search.
\item Pretrained Models with classification heads.
\end{itemize}
\end{multicols}
\subsection{Models Training}
% The Training process follows % TODO have a flow diagram
Model training should be independent of image classification; training a model should not affect any classification currently in progress. The system could use multiple ways to achieve this, such as:
\begin{multicols}{2}
\begin{itemize}
\item Separating the training onto different machines.
\item Controlling the number of resources that the training machine can utilize (see the sketch after this list).
\item Controlling the time when a shared training and inference machine can be used for training.
\item Allowing users to have their own ``Runners'' where the training tasks can happen.
\end{itemize}
\end{multicols}
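As a hedged sketch of the resource-control option mentioned in the list above, TensorFlow allows a training process to be capped to a fixed slice of GPU memory; the 4 GB limit below is an arbitrary example value, not the value used by the service.
\begin{verbatim}
import tensorflow as tf

# Cap the amount of GPU memory this training process may claim, so that
# classification tasks running on the same machine are not starved.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])  # 4 GB
\end{verbatim}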
\subsection{Conclusion}
This section introduced multiple possible design options that a service intending to achieve automated image classification can follow to implement a robust system.
The next section will be discussing how the system was implemented and which of the possible design options was chosen when implementing the system.
\pagebreak
\section{Service Implementation}\label{sec:si}
This section will discuss how the service followed some possible designs to achieve a working system.
The design path that was chosen matches what made sense for the scale and needs of the project.
I selected Svelte because it has been one of the most liked frameworks to work with in recent years, according to the State of JS survey \cite{state-of-js-2022}.
It is also one of the best-performing frameworks currently available \cite{js-frontend-frameworks-performance}.
This process was originally slow, as the system did not have the capability to parallelise the importing of images, but parallel importing was implemented and the import process improved.
The improved process now takes a few seconds to process and verify the entire dataset, improving the experience for the end user.
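A minimal sketch of how uploaded images might be verified in parallel is shown below, assuming Pillow is available and that a dataset is a flat directory of files; the pool size and helper names are illustrative, not the service's actual import code.
\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
from PIL import Image

def verify_image(path):
    # Open and verify the file; Pillow raises if the image is corrupt.
    try:
        with Image.open(path) as img:
            img.verify()
        return path, True
    except Exception:
        return path, False

def verify_dataset(folder, workers=8):
    paths = list(Path(folder).glob("*"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(verify_image, paths))

# e.g. results = verify_dataset("uploads/dataset-1")
\end{verbatim}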
Alternatively, the user can use the API to create new classes and upload images.
This step appears in the main tab of the model page. Once the user instructs the system to start training, the model page becomes the training page, and it shows the progress of the training of the model.
During the entire process of creating new classes in the model and retraining the model, the user can still perform all the classification tasks they desire.
Task management is the section of the website where users can manage their tasks, including training and classification tasks.
In this tab, users can see the progress and results of their tasks.
The webpage also provides clear, easy-to-read statistics on the task results, allowing the user to see how the model is performing.
% TODO add image
On the administration side, users should be able to change the status of tasks, as well as see a more comprehensive view of how the tasks are being performed.
Administrator users can see the current status of runners, as well as which task the runners are doing.
The API was implemented as a multithreaded Go \cite{go} server.
On launch, the application loads a configuration file and connects to the database.
After connecting to the database, the application performs pre-startup checks to make sure that tasks interrupted by a server restart were not left in an unrecoverable state.
Once the checks are done, the application creates the workers (which will be covered in the next subsection), and when that is complete, the API server is finally started.
Information about the API is shown throughout the web page, so that the user can see it right next to where they would normally perform the corresponding action, providing a good user experience.
Model generation happens in the API server: the server analyses the images that were provided and generates several model candidates accordingly.
The number of model candidates is user defined.
The model generation subsystem decides the structure of the model candidates based on the image size; it prioritizes smaller models for smaller images and convolutional networks for bigger images.
The depth is controlled by both the image size and the number of outputs; model candidates that need to be expandable are generated with larger values to account for possible new classes.
It tries to generate the optimal size if only one model is requested.
If more than one is requested, then the generator tries to generate models of various types and sizes, so that if a smaller model is viable it will also be tested.
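A hedged sketch of this kind of size-driven candidate generation in Keras is shown below; the size thresholds, block counts, and filter widths are illustrative assumptions rather than the values used by the generation subsystem.
\begin{verbatim}
from tensorflow import keras

def generate_candidates(image_size, num_classes, count=3):
    # Deeper candidates for larger images; each successive candidate is
    # a little deeper, so smaller models are also tried and compared.
    base_blocks = 2 if image_size[0] <= 64 else 4
    candidates = []
    for i in range(count):
        blocks = base_blocks + i
        layers = [keras.layers.Input(shape=image_size + (3,))]
        filters = 16
        for _ in range(blocks):
            layers += [keras.layers.Conv2D(filters, 3, padding="same",
                                           activation="relu"),
                       keras.layers.MaxPooling2D()]
            filters *= 2
        layers += [keras.layers.Flatten(),
                   keras.layers.Dense(num_classes, activation="softmax")]
        candidates.append(keras.Sequential(layers))
    return candidates

# e.g. candidates = generate_candidates((128, 128), num_classes=5)
\end{verbatim}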
When the runner needs to perform training, it generates a Python script tailored to the model candidate that needs to be trained, runs that script, and monitors its result.
While the Python script is running, it uses the API to inform the runner of epoch and accuracy changes.
During training, the runner takes a round-robin approach.
It trains every model candidate for a few epochs, then compares the different model candidates.
If there is too large a gap between the best model candidate and the worst, the system might decide not to continue training a certain candidate and to focus the training resources on candidates that are performing better.
Once one candidate achieves the target accuracy, which is user defined, the training system stops training the model candidates.
The model candidate that achieved the target accuracy is then promoted to be the model, and the other candidates are removed.
The model can now be used to predict the labels of any image that the user decides to upload.
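A simplified sketch of the round-robin candidate training loop described above is shown below; it assumes the candidates are already compiled with an accuracy metric, and the epoch chunk size and pruning threshold are assumptions for illustration.
\begin{verbatim}
def train_round_robin(candidates, x, y, target_acc, chunk=3, max_rounds=20):
    # Train each surviving candidate for a few epochs at a time, prune the
    # clearly worst one, and stop as soon as any candidate hits the target.
    scores = {id(m): 0.0 for m in candidates}
    for _ in range(max_rounds):
        for m in list(candidates):
            hist = m.fit(x, y, epochs=chunk,
                         validation_split=0.2, verbose=0)
            scores[id(m)] = hist.history["val_accuracy"][-1]
            if scores[id(m)] >= target_acc:
                return m                      # promote this candidate
        if len(candidates) > 1:
            worst = min(candidates, key=lambda m: scores[id(m)])
            best = max(candidates, key=lambda m: scores[id(m)])
            if scores[id(best)] - scores[id(worst)] > 0.15:
                candidates.remove(worst)      # stop wasting compute on it
    return max(candidates, key=lambda m: scores[id(m)])
\end{verbatim}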
Expandable models follow mostly the same process as the normal models.
First, bigger model candidates are generated.
Then the models are trained using the same technique.
At the end, after a model candidate has been promoted to the full model, the system starts another Python process that loads the newly generated model and splits it into a base model and a head model.
During the expanding process, the generation system creates a new head candidate that matches the newly added classes.
The base model, which was created in the original training process, is used with all the available data to train the head candidate to perform the classification tasks.
The training process is similar to the normal training system, but it uses a different training script.
Once the new head has finished training and meets the accuracy requirements, the system makes it available for classification.
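A hedged sketch of how a trained functional Keras model might be split into a frozen base and a replaceable head, and how a new head covering the expanded set of classes could be attached; the split point and layer sizes are assumptions for illustration, not the service's actual expansion script.
\begin{verbatim}
from tensorflow import keras

def split_base(model, split_at=-2):
    # Everything up to (but not including) the final dense layer becomes
    # the frozen base; the last layer acts as the detachable head.
    base = keras.Model(model.input, model.layers[split_at].output)
    base.trainable = False
    return base

def new_head(base, num_classes):
    # A fresh head sized for the expanded set of classes.
    inputs = keras.Input(shape=base.output_shape[1:])
    outputs = keras.layers.Dense(num_classes, activation="softmax")(inputs)
    return keras.Model(inputs, outputs)

def expanded_model(base, head):
    # Chain the frozen base and the new head for training and inference.
    return keras.Model(base.input, head(base.output))
\end{verbatim}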
This section discussed the design and implementation specifications for the system.
While there were some areas where the requirements were not met completely, due to scope constraints, the implementation allows for the missing designed sections to be implemented at a later time.
The implementation follows the requirements with the adjusted scope.
The results of the implementation will be tested in a future section.
Legal issues might occur due to the images uploaded to the service. For example, those images could be copyrighted, or the images could be confidential. The service is designed to provide ways for users to host their images without the service having to host the images itself, moving the legal responsibility for the management of the data to the user of the system.
The General Data Protection Regulation (GDPR) (GDPR, 2018) is a data protection and privacy law in the European Union and the European Economic Area that has also been implemented into British law.
For this application, the main implications of the GDPR are that the data collected should be minimised to what the application actually uses, and that users have the right to be forgotten.
Once there is no more work being done that requires the data, the system will remove all relevant identifiable references to that data.
\subsection{Social Issues}
The web application was designed to be easy to use and tries to take accessibility requirements into account.
% TODO talk about this
% The service itself could raise issues of taking jobs that are currently done by humans.
% This is less problematic as time has shown that the jobs just change, instead of manually classifying the images, the job transforms from the classifying all the images that are needed to maintain and verifying that the data being input to the model is correct.
\subsection{Ethical Issues}
While the service itself does not raise any ethical concerns, the data that the service will process could raise ethical complications.
The datasets were selected to represent different possible sizes of models, and sizes of output labels.
The ImageNet \cite{imagenet} dataset was not selected as one of the datasets to be tested, as it does not represent the target problem that this project is trying to tackle.
The tests will measure:
\begin{itemize}
\item Time to process and validate the entire dataset upon upload
\item Time to train a model on the dataset
\item Time to classify an image once the model has been trained
\end{itemize}
The MNIST \cite{mnist} dataset was selected due to its size. It is a small dataset on which models can be trained quickly, and it can be used to verify other internal systems of the service.
The service can create models that represent what the users want in a reasonable amount of time without much interaction from the user.
The models created have the target accuracy required by the users, and the amount of time it takes for the models to train and expand is reasonable and within the margins that meet the success criteria for the project.