commit b43abe74d9 (parent 80012a14cd)

diagrams/models_advanced_flow.d2 (new file, +27)
@@ -0,0 +1,27 @@
+direction: right
+
+User: "Authenticated User" {
+  shape: Person
+}
+
+ModelCreation: "Model Creation"
+ImageUpload: "Image Upload"
+TrainModel: "Train Model"
+EvaluateModel: "Infer an Image"
+Expand: "Expand a model"
+
+User->ModelCreation: "User creates the model"
+
+ModelCreation->ImageUpload
+User->ImageUpload: "User uploads Images"
+
+ImageUpload->TrainModel
+User->TrainModel: "Requests the training of the model"
+TrainModel->TrainModel: Failed to train
+
+TrainModel->EvaluateModel
+User->EvaluateModel: "Request class for an image"
+
+EvaluateModel->Expand
+User->Expand: "User uploads new images"
+Expand->TrainModel
@@ -7,7 +7,7 @@ User: "Authenticated User" {
 ModelCreation: "Model Creation"
 ImageUpload: "Image Upload"
 TrainModel: "Train Model"
-EvaluateModel: "Image Upload"
+EvaluateModel: "Infer an Image"

 User->ModelCreation: "User creates the model"
@@ -12,6 +12,9 @@
 \graphicspath{ {../images for report/} }
 \usepackage[margin=2cm]{geometry}

+\usepackage{datetime}
+\newdateformat{monthyeardate}{\monthname[\THEMONTH], \THEYEAR}
+
 \usepackage{hyperref}
 \hypersetup{
     colorlinks,
@@ -44,7 +47,7 @@
 \title{Classify: Image Classification as a Software Platform}

 % Write your full name, as in University records
-\author{Andre Henriques, 6644818}
+\author{Andre Henriques\\University of Surrey}

 \date{}

@@ -60,7 +63,7 @@
 \end{center}

 \begin{center}
-\today
+\monthyeardate\today
 \end{center}

 \NewPage
@@ -121,7 +124,7 @@
     \item a platform where the users can create and manage their models.
     \item a system to automatically create and train models.
     \item a system to automatically expand and reduce models without fully retraining the models.
-    \item an API so that users can interact programmatically with the system.
+    \item an API that users can use to interact with the system programmatically.
 \end{itemize}
 This project's extended objectives are to:
 \begin{itemize}
@@ -379,7 +382,7 @@
 It allows for fast transitions between pages without a full reload of the browser.

 Since the API and the Web App are separated in this project, server-side rendering would be more complex and less efficient.
-As the server would have to first request the API for information to build the web page and then send it to the users' device.
+As the server would first have to request the information from the API to build the web page and then send it to the users' device.
 Therefore, the system will use client-side rendering only, allowing the users' device to request the API directly for more information.

 There currently exist many frameworks for creating SPAs.
@@ -426,10 +429,10 @@
 \label{fig:simplified_model_diagram}
 \end{figure}

-As the diagram, \ref{fig:simplified_model_diagram} shows the steps that the user takes to use a model.
+The diagram \ref{fig:simplified_model_diagram} shows the steps that the user takes to use a model.

 First, the user creates the model.
-In this step, theImplementation Details user uploads a sample image of what the model will be handling.
+In this step, the user uploads a sample image of what the model will be handling.
 This image is used to define the kinds of images the model will be able to take as input.

 Currently, the system does not support resizing of images that differ from the one uploaded at this step during evaluation.
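Because mismatched images are not resized, a service like this would have to reject them up front. A minimal sketch of such a check in Go, using only the standard library, is shown below; the helper name, file path, and expected size are illustrative assumptions, not the project's actual code:

```go
package main

import (
	"fmt"
	"image"
	_ "image/jpeg" // register decoders for the formats we expect
	_ "image/png"
	"os"
)

// checkDimensions is a hypothetical helper: it rejects an image whose size does
// not match the sample image uploaded when the model was created.
func checkDimensions(path string, wantW, wantH int) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cfg, format, err := image.DecodeConfig(f) // reads only the header, not the pixels
	if err != nil {
		return err
	}
	if cfg.Width != wantW || cfg.Height != wantH {
		return fmt.Errorf("got %dx%d %s image, model expects %dx%d",
			cfg.Width, cfg.Height, format, wantW, wantH)
	}
	return nil
}

func main() {
	// Example: the model was created from a 128x128 sample image.
	if err := checkDimensions("upload.png", 128, 128); err != nil {
		fmt.Println("rejected:", err)
	}
}
```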
@@ -449,6 +452,22 @@

 When the model is finished training, the user can use the model to run inference tasks on images.

+\subsubsection*{Advanced Model Management}
+
+\begin{figure}[h!]
+\centering
+\includegraphics[width=\textwidth]{models_advanced_flow}
+\caption{Simplified Diagram of Advanced Model Management}
+\label{fig:simplified_model_advanced_diagram}
+\end{figure}
+
+The diagram \ref{fig:simplified_model_advanced_diagram} shows the steps that the user takes to use a model.
+
+The steps are very similar to the normal model management.
+There is a new step where the user can upload new images and create new classes; the user can then request the retraining of the model.
+
+While new classes are being added and trained, the user can still use the inference step.
+
 \subsection{API}

 As a software-as-a-service platform, one of the main requirements is to be able to communicate with other services.
@@ -466,90 +485,91 @@
 Either of those options is extremely inefficient.
 Therefore, multipart form requests are required to allow the early uploading of binary files.

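For illustration, a minimal sketch of how such a multipart upload could be handled in Go; the route, form field name, and size limit are assumptions rather than the project's actual API:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

// handleImageUpload is a hypothetical handler that streams an image out of a
// multipart form request instead of requiring it to be base64-encoded in JSON.
func handleImageUpload(w http.ResponseWriter, r *http.Request) {
	// Keep up to 10 MiB of the form in memory; larger parts spill to temp files.
	if err := r.ParseMultipartForm(10 << 20); err != nil {
		http.Error(w, "invalid multipart form", http.StatusBadRequest)
		return
	}
	file, _, err := r.FormFile("image") // "image" is an assumed field name
	if err != nil {
		http.Error(w, "missing image field", http.StatusBadRequest)
		return
	}
	defer file.Close()

	dst, err := os.CreateTemp("", "upload-*.png")
	if err != nil {
		http.Error(w, "server error", http.StatusInternalServerError)
		return
	}
	defer dst.Close()

	// Stream the binary payload to disk without holding it all in memory.
	if _, err := io.Copy(dst, file); err != nil {
		http.Error(w, "failed to store image", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

func main() {
	http.HandleFunc("/upload", handleImageUpload)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```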
-Go was selected as the langunage to implement the backend due various of its' advantages.
-Go has extermily fsat compilations which allows for rapid development, and iteration.
-Go has a very mininal runtime which allows it to be faster, than hevy runtime languages such as javascript.
-It is also a very simple language which helps maintain the codebase.
+Go was selected as the language to implement the backend due to several of its advantages.
+Go compiles extremely fast, which allows for rapid development and iteration.
+It has a very minimal runtime, which allows it to be faster than languages with heavy runtimes such as JavaScript.
+It is also a simple language, which helps maintain the codebase.

-The Go languange integrates well with C libraries which allows it access to machine learning libraries like TensorFlow.
+The Go language integrates well with C libraries, which gives it access to machine learning libraries like TensorFlow.

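As a sketch of what that C interoperability enables, the snippet below loads a TensorFlow SavedModel through the TensorFlow Go binding and runs a single inference. The model path, tag, input shape, and operation names are assumptions and depend entirely on how the model was exported (they can be inspected with `saved_model_cli`); this is not the project's actual inference code:

```go
package main

import (
	"fmt"
	"log"

	tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
	// Load a model previously exported by the Python training step.
	// "serve" is the conventional tag for SavedModels exported for inference.
	model, err := tf.LoadSavedModel("models/example", []string{"serve"}, nil)
	if err != nil {
		log.Fatalf("loading model: %v", err)
	}
	defer model.Session.Close()

	// A dummy 1x28x28x1 input; a real caller would fill this with image pixels.
	input, err := tf.NewTensor([1][28][28][1]float32{})
	if err != nil {
		log.Fatalf("building tensor: %v", err)
	}

	// The operation names below are placeholders and must match the exported
	// model's signature.
	results, err := model.Session.Run(
		map[tf.Output]*tf.Tensor{
			model.Graph.Operation("serving_default_input").Output(0): input,
		},
		[]tf.Output{
			model.Graph.Operation("StatefulPartitionedCall").Output(0),
		},
		nil,
	)
	if err != nil {
		log.Fatalf("running inference: %v", err)
	}
	fmt.Println("class scores:", results[0].Value())
}
```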
 \subsubsection*{Authentication}

-For an user to be authenticated with the server it must first login.
-During the login process the service checks to see if the user is regsitered and if the password provided during the login matches the stored hash.
+For a user to be authenticated with the server, they must first log in.
+During the login process, the service checks whether the user is registered and whether the password provided during the login matches the stored hash.

-Upon veryifying the user a token is emmited.
-That token can be used on the header ``token'' as proff that the user is authenticated.
+Upon verifying the user, a token is emitted.
+That token can be sent in the ``token'' header as proof that the user is authenticated.

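A minimal sketch of that flow in Go is shown below; the in-memory stores, bcrypt check, random hex token, and route name are illustrative assumptions rather than the project's actual implementation:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"net/http"

	"golang.org/x/crypto/bcrypt"
)

// Very small in-memory stores, for illustration only.
var passwordHashes = map[string][]byte{} // username -> bcrypt hash
var sessions = map[string]string{}       // token -> username

// login verifies the stored hash and, on success, emits a random token.
func login(username, password string) (string, error) {
	hash, ok := passwordHashes[username]
	if !ok {
		return "", errors.New("user not registered")
	}
	if err := bcrypt.CompareHashAndPassword(hash, []byte(password)); err != nil {
		return "", errors.New("wrong password")
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	token := hex.EncodeToString(buf)
	sessions[token] = username
	return token, nil
}

// requireAuth checks the "token" header before letting a request through.
func requireAuth(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if _, ok := sessions[r.Header.Get("token")]; !ok {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		next(w, r)
	}
}

func main() {
	hash, _ := bcrypt.GenerateFromPassword([]byte("hunter2"), bcrypt.DefaultCost)
	passwordHashes["andre"] = hash

	token, err := login("andre", "hunter2")
	if err != nil {
		panic(err)
	}
	fmt.Println("token:", token)

	http.HandleFunc("/models", requireAuth(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "list of models")
	}))
	// http.ListenAndServe(":8080", nil) // not started in this sketch
}
```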
 \subsection{Generation of Models}
-The system requires the generation of models \ref{fig:expandable_models_generator}. Generating all models based on one single model would decrease the complexity of the system, but it would not guarantee success.
-
-The system needs to generate successful models, to achieve this, the system will be performing two approaches:
-\begin{itemize}
-    \item{Database search}
-    \item{AutoML (secondary goal)}
-\end{itemize}
-
-The database search will consist of trying both previous models that are known to work to similar inputs, either by models that were previously generated by the system or known good models; base known architectures that are modified to match the size of the input images.
-
-An example of the first approach would be to try the ResNet model, while the second approach would be using the architecture of ResNet and configuring the architecture so it is more optimized for the input images.
-
-AutoML approach would consist of using an AutoML system to generate new models that match the task at hand.
-
-Since the AutoML approach would be more computational intensive, it would be less desirable to run. Therefore, the approach would be for the database search to happen first, where known possibly good models would be first tested. If a good model is found, then the search stops and if no model is found, the system would resort to AutoML to find a suitable model.
+The service requires the generation of models \ref{fig:expandable_models_generator}.
+
+\subsubsection*{Implementation Details}
+
+The model definitions are generated in the Go API and then stored in the database.
+The runner then loads the definition from the API and creates a model based on it.
+
+\subsubsection*{Model Generation}
+
+Generating all models based on one single model would decrease the complexity of the system, but it would not guarantee success.
+
+The system needs to generate successful models; to achieve this, it will take two approaches:
+\begin{itemize}
+    \item{Database search}
+    \item{AutoML (secondary goal)}
+\end{itemize}
+
+The database search will consist of trying previous models that are known to work on similar inputs (either models previously generated by the system or known good models), as well as known base architectures that are modified to match the size of the input images.
+
+An example of the first approach would be to try the ResNet model, while the second approach would be to take the ResNet architecture and configure it so that it is more optimized for the input images.
+
+The AutoML approach would consist of using an AutoML system to generate new models that match the task at hand.
+
+Since the AutoML approach would be more computationally intensive, it is less desirable to run. Therefore, the database search happens first, where known, possibly good models are tested. If a good model is found, the search stops; if no model is found, the system resorts to AutoML to find a suitable model.

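As an illustration of what a stored model definition could look like, the sketch below shows a hypothetical Go structure that the API might serialise to the database and the runner might load to build a model; the field names and layer vocabulary are assumptions, not the project's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LayerDef describes one layer of a generated architecture.
type LayerDef struct {
	Type    string `json:"type"` // e.g. "conv2d", "flatten", "dense"
	Filters int    `json:"filters,omitempty"`
	Kernel  int    `json:"kernel,omitempty"`
	Units   int    `json:"units,omitempty"`
}

// ModelDefinition is what the API stores and the runner loads to create a model.
type ModelDefinition struct {
	ID          string     `json:"id"`
	InputWidth  int        `json:"input_width"`
	InputHeight int        `json:"input_height"`
	Classes     []string   `json:"classes"`
	Layers      []LayerDef `json:"layers"`
}

func main() {
	def := ModelDefinition{
		ID:          "resnet-like-1",
		InputWidth:  128,
		InputHeight: 128,
		Classes:     []string{"cat", "dog"},
		Layers: []LayerDef{
			{Type: "conv2d", Filters: 32, Kernel: 3},
			{Type: "flatten"},
			{Type: "dense", Units: 2},
		},
	}
	// The JSON below is what would be stored in the database for the runner.
	blob, _ := json.MarshalIndent(def, "", "  ")
	fmt.Println(string(blob))
}
```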
 \subsection{Models Training}
 % The Training process follows % TODO have a flow diagram

 The training of the models happens in a secondary Training Process (TP).

-Once a model candidate is generated, the main process informs the TP of the new model. The TP obtains the dataset and starts training. Once the model finished training, it reports to the main process with the results. The main process then decides if the model matches the requirements. If that the case, then the main process goes to the next steps; otherwise, the system goes for the next model that requires training.
+Once a model candidate is generated, the main process informs the TP of the new model.
+The TP obtains the dataset and starts training.
+Once the model has finished training, it reports the results to the main process.
+The main process then decides if the model matches the requirements.
+If that is the case, the main process moves on to the next steps; otherwise, the service moves on to the next model that requires training.

 While training the model, the TP decides when the training is finished; this could be when the training time runs out or when the model accuracy has not substantially increased over the last training rounds.

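A toy sketch of that stopping rule and of the hand-off back to the main process could look like the following; the function names, thresholds, and the simulated training round are all made up for illustration:

```go
package main

import "fmt"

// TrainResult is what the training process (TP) reports back to the main process.
type TrainResult struct {
	ModelID  string
	Accuracy float64
	Met      bool // whether the model met the requirements
}

// train simulates the TP's stopping rule: stop when the accuracy gain of the
// last round drops below a small threshold or the round budget runs out.
func train(modelID string, maxRounds int, target float64) TrainResult {
	const minGain = 0.001
	accuracy, prev := 0.0, 0.0
	for round := 0; round < maxRounds; round++ {
		accuracy = runOneTrainingRound(prev) // placeholder for a real training epoch
		if accuracy-prev < minGain {
			break // accuracy is no longer substantially increasing
		}
		prev = accuracy
	}
	return TrainResult{ModelID: modelID, Accuracy: accuracy, Met: accuracy >= target}
}

// runOneTrainingRound stands in for a real epoch; it just converges towards 0.9.
func runOneTrainingRound(prev float64) float64 {
	return prev + (0.9-prev)*0.5
}

func main() {
	// The main process decides whether to accept the model or try the next candidate.
	res := train("candidate-1", 20, 0.85)
	if res.Met {
		fmt.Println("model accepted:", res.ModelID, res.Accuracy)
	} else {
		fmt.Println("trying next candidate")
	}
}
```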
-During the training process, the TP needs to cache the dataset being used, this is because to create one model, the system might have to generate and train more than one model, during this process, if the dataset is not cached then time is spent reloading the dataset into memory.
-
+During the training process, the TP needs to cache the dataset being used.
+This is because, to create one model, the service might have to generate and train more than one model; if the dataset is not cached during this process, time is wasted reloading it into memory.

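A minimal sketch of such a cache in Go is given below; the key type, value type, and loader signature are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"sync"
)

// datasetCache keeps loaded datasets in memory between candidate models so the
// TP does not reload the same data for every architecture it tries.
type datasetCache struct {
	mu   sync.Mutex
	data map[string][][]byte // dataset ID -> raw image bytes (illustrative)
}

func newDatasetCache() *datasetCache {
	return &datasetCache{data: make(map[string][][]byte)}
}

// get returns the cached dataset, calling load only on the first request.
func (c *datasetCache) get(id string, load func(string) [][]byte) [][]byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ds, ok := c.data[id]; ok {
		return ds
	}
	ds := load(id) // the expensive load from disk or the database happens only once
	c.data[id] = ds
	return ds
}

func main() {
	cache := newDatasetCache()
	loads := 0
	loader := func(id string) [][]byte { loads++; return [][]byte{{0x1}, {0x2}} }

	// Two candidate models trained on the same dataset trigger a single load.
	_ = cache.get("dataset-42", loader)
	_ = cache.get("dataset-42", loader)
	fmt.Println("loads:", loads) // prints: loads: 1
}
```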
 \pagebreak
 \section{Legal and Ethical Issues}

 \pagebreak

 \section{Results} % TODO change this

 As stated in the introduction, this project has multiple objectives.

 \subsection{Platform where users can manage their models}

 This goal was achieved: a web-based platform was created to manage and control the models.

 \subsection{A system to automatically train and create models}

 This goal was achieved: there is currently a system to automatically create and train models.

 The system that trains models still needs some improvement, as it is partially inefficient when managing the system load while training.

 \subsection{An API that users can interact with programmatically}

 % technological free overview
 % \subsection{Web Interface}
 % The user will interact with the platform via a web portal. % why the web portal
 % The web platform will be designed using HTML and a JavaScript library called HTMX \cite{htmx} for the reactivity that the pages require.
 % The web server that will act as controller will be implemented using go \cite{go}, due to its ease of use.
 % Go was chosen as the programming language used in the server due to its performance, i.e. \cite{node-to-go}, and ease of implementation. As a compiled language, Go outperforms other server technologies such as Node.js.
 % Go also has easy support for the C ABI, which might be needed if there is a need to interact with other tools that are implemented using C.
 % The web server will also interact with Python to create models. Then, to run the models, it will use the Go libraries that are available to run TensorFlow \cite{tensorflow2015-whitepaper} models.
 This goal was achieved and there is currently a working API that users can use to control the models and run inference tasks.

 % \subsection{Creating Models}
 % The models will be created using TensorFlow \cite{tensorflow2015-whitepaper}.
 % TensorFlow was chosen because, when using frameworks like Keras \cite{chollet2015keras}, it allows the easy development of machine learning models with little code. While tools like PyTorch might provide more advanced control options for the model, like dynamic graphs, it comes at the cost of more complex Python code. Since that code is generated by the Go code, the more Python that needs to be written, the more complex the overall program gets, which is not desirable.
 % The original plan was to use Go and TensorFlow, but the Go library was lacking that ability. Therefore, I chose to use Python to create the models.
 % The Go server starts a new process, running Python, that creates and trains the TensorFlow model. Once the training is done, the model is saved to disk, which can then be loaded by the Go TensorFlow library.

 % \subsection{Expandable Models}
 % The approach would be based on multiple models. The first model is a large model that will work as a feature extraction model; the results of this model are then given to other, smaller models. These models' purpose is to classify the results of the feature extraction model into classes.
 % The first model would either be an already existent pretrained model or a model that is automatically created by the platform.
 % The smaller models would all be generated by the platform; these models' purpose would be the actual classification.
 % This approach would offer a lot of expandability, as it makes the addition of a new class as easy as creating a new small model.

 \pagebreak
 \section{Appendix}
-\begin{figure}
+\begin{figure}[h!]
 \begin{center}
 \includegraphics[height=0.8\textheight]{expandable_models_simple}
 \end{center}