From a1f7365c3ff0cb0f5846a2f6c641db84297e8aaa Mon Sep 17 00:00:00 2001 From: Andre Henriques Date: Thu, 2 May 2024 01:19:25 +0100 Subject: [PATCH] more work on report --- report/design.tex | 120 +++++++ report/intro.tex | 73 ++++ report/lit.tex | 98 ++++++ report/report.tex | 840 ++++++++++++-------------------------------- report/sanr.tex | 156 ++++++++ report/settings.tex | 70 ++++ report/start.tex | 60 ++++ 7 files changed, 798 insertions(+), 619 deletions(-) create mode 100644 report/design.tex create mode 100644 report/intro.tex create mode 100644 report/lit.tex create mode 100644 report/sanr.tex create mode 100644 report/settings.tex create mode 100644 report/start.tex diff --git a/report/design.tex b/report/design.tex new file mode 100644 index 0000000..f27f6e0 --- /dev/null +++ b/report/design.tex @@ -0,0 +1,120 @@ +\section{Service Design} \label{sec:sd} + This section will discuss the design of the service. + The design in this section is an ideal solution, where no time or engineering limitations were considered. + It aims to describe a solution that would allow for the best user experience possible. + + The design proposed in this section can be viewed as the ideal scope of this project, and the \hyperref[sec:si]{Service Implementation} section will discuss how that scope was narrowed so that the service would achieve the primary goals of the project, while following the design, within the time frame of this project. + + \subsection{Structure of the Service} + + The service is designed as a 4-tier structure: + \begin{itemize} + \item{Presentation Layer} + \item{API Layer} + \item{Worker Layer} + \item{Data Layer} + \end{itemize} + + This structure was selected because it allows for separation of concerns. + The layers were separated based on the resources each layer requires. + + The presentation layer requires user interactivity, and therefore needs to be accessible from the outside and simple to use. + The presentation layer was narrowed down from any possible interaction method to a web page. + The web page can be served from a separate server, or as part of the main API application. + + The API layer is one of the most important parts of the service, as it is going to be the most used way of interacting with the service. + The user can use the API to control their entire model process, from importing images to classifying them. + + The Worker layer consists of a set of servers available to perform GPU workloads. + + The Data layer consists of stored images, models, and user data. + + + \subsection{Interacting with the service} + As a software platform, this project requires a way for users to interact with the service. + This interface is mainly intended to be a control and monitoring interface. + The user would use the interface to set up and manage the models; most other interactions would then happen via the API. + + While there are no specific restrictions on what the interface can be, it makes the most sense for it to be a web page. + This is because most software as a service applications are controlled with web pages, and the API is already a web-based application. + + Independently of the kind of application, it needs to allow users to fully control their data in a way that is easy to use and understand. + The application should allow users to: + %TODO add more + \begin{multicols}{2} + \begin{itemize} + \item Manage access tokens. + \item Upload images for training. + \item Delete images.
+ \item Request model training. + \item Delete models. + \item Classify images. + \item See previous classification results. + \item Keep track of model accuracy. + \end{itemize} + \end{multicols} + + Aside from being able to perform the above tasks, there are no restrictions on how the application needs to be architected. + + \subsection{API} + As a software as a service, one of the main requirements is to be able to communicate with other services. + The API provides the simplest way for other services to interact with this service. + + The API needs to be able to perform all the tasks that the application can do, which include: + % TODO maybe remove + \begin{multicols}{2} + \begin{itemize} + \item Manage access tokens. + \item Upload images for training. + \item Delete images. + \item Request model training. + \item Delete models. + \item Classify images. + \item See previous classification results. + \item Keep track of model accuracy. + \end{itemize} + \end{multicols} + + While implementing all the features mentioned above, the API has to handle multiple simultaneous requests. + Ideally, those requests should be handled as fast as possible. + + The API should be implemented such that it is easily expandable and maintainable, so that future improvements can happen. + + The API should be consistent and easy to use; information on how to use the API should also be available to prospective users. + + The API should be structured as a REST JSON API, per the requirements. + The API should only accept inputs via the URL parameters of GET requests or via JSON on POST requests. + Binary formats can also be used to handle file uploads and downloads, as transferring files via JSON is extremely inefficient. + + \subsection{Generation of Models} + The service should use any means available to generate models; such means can be: + \begin{multicols}{2} + \begin{itemize} + \item Templating. + \item Transfer Learning. + \item Database Search. + \item Pretrained Models with classification heads. + \end{itemize} + \end{multicols} + + + \subsection{Model Training} + % The Training process follows % TODO have a flow diagram + + Model training should be independent of image classification: an ongoing training task should not affect any current classification. The system could use multiple ways to achieve this, such as: + \begin{multicols}{2} % TODO think of more ways + \begin{itemize} + \item Separating the training to different machines. + \item Controlling the number of resources that the training machine can utilize. + \item Controlling the time when the shared training and inference machine can be used for training. + \item Allowing users to have their own ``Runners'' where the training tasks can happen. + \end{itemize} + \end{multicols} + + + \subsection{Conclusion} + This section introduced multiple design options that a service intending to achieve automated image classification can follow to implement a robust system. + + The next section will discuss how the system was implemented and which of the possible design options were chosen when implementing the system. + +\pagebreak diff --git a/report/intro.tex b/report/intro.tex new file mode 100644 index 0000000..489b213 --- /dev/null +++ b/report/intro.tex @@ -0,0 +1,73 @@ +\section{Introduction} \label{sec:introduction} + This section will introduce the project: background, motives, aims, goals, and success criteria. + The section will end with an overview of this report's structure.
+ + \subsection{Project Background} + There are many tasks that could be automated but are still being done manually. + If those tasks were done automatically, a lot of productivity could be gained, as the humans currently doing them could focus on tasks that only humans can do. + + This project aims to provide a software platform where users with no experience in machine learning or data analysis can create machine learning models to process their data. + In this project, the platform will be scoped to image classification. + An easy-to-use platform needs to be able to handle: image uploads, processing, and verification; model creation, management, and expansion; and image classification. + + % This report will do a brief analysis of current image classification systems, followed by an overview of the design of the system, and implementation details. The report will finish with analysis of legal, ethical and societal issues, and evaluation of results, and objectives. + + \subsection{Project Motivations} + + Currently, there are many classification tasks that are being done manually. + Thousands of man-hours are spent classifying images, a task that can be automated. + There are few easy-to-use image classification systems that require little to no knowledge of image classification. + This project aims to fill that gap and provide a complete image classification service, + one that is user-friendly enough that a user who has never done any kind of image classification could still get good results by simply using this service. + + \subsection{Project Aim} + The project aims to create an easy-to-use software platform, where users can create image classification models without having prior knowledge about machine learning. + The user should only need to upload the images and confirm, and the system should be able to perform all the steps necessary to create and manage the machine learning model. + + \subsection{Project Objectives} + This project has two types of objectives. + Primary objectives are objectives that are required for the project to be considered a success. + Secondary objectives are objectives that are not required for the project to be considered a success, but they would provide a better experience for the user of the service. + + This project's primary objectives are to design and implement: + \begin{itemize} + \item a system to upload images that will be assigned to a model. + \item a system to automatically create and train models. + \item a platform where users can manage their models. + % \item a system to automatically expand and reduce models without fully retraining the models. + \item a system to automatically expand models without fully retraining the models. + \item an Application Programming Interface (API) through which users can interact programmatically with the service. + \end{itemize} + + This project's secondary objectives are to: + \begin{itemize} + % \item Create a system to automatically to merge modules to increase efficiency. + \item Create a system to distribute the load of training the model among multiple servers. + \end{itemize} + + \subsection{Success Criteria} + As was mentioned before, the project can be considered a success when the primary objectives have been completed. + + Therefore, the success criteria of this project can be defined as: + + \begin{itemize} + \item A user can upload images, train a model on those images, and evaluate images using a user interface. + \item A user can perform the same tasks via the API service.
\end{itemize} + + \subsection{Project Structure} + The report shows the development and design stages of the project, with each section addressing a part of the design and development process. + + \renewcommand*{\arraystretch}{2} + \begin{longtable}{p{7cm} p{8cm}} + \hyperref[sec:introduction]{Introduction} & The introduction section will briefly introduce the project and its objectives. \\ + \hyperref[sec:lit-tech-review]{Literature and Technical Review} & The Literature and Technical Review section will introduce existing projects that are similar to this one, and some technologies that can be used to implement this project. \\ + \hyperref[sec:sanr]{Service Analysis and Requirements} & This section will analyse the project requirements. The section will define design requirements that the service will need to implement to be able to achieve the goals that were set up. \\ + \hyperref[sec:sd]{Service Design} & This section will discuss how a service could be designed so that it matches the requirements of the service. \\ + \hyperref[sec:si]{Service Implementation} & Information on how the design of the system was turned into software is in this section. \\ + \hyperref[sec:lsec]{Legal, Societal, Ethical, Professional Considerations} & This section will cover potential legal, societal, ethical, and professional issues that might arise from the service and how they are mitigated. \\ + \hyperref[sec:se]{Service Evaluation} & In this section, the model will be tested and the results of the tests will be analysed. \\ + \hyperref[sec:crpo]{Critical Review of Project Outcomes} & This section will compare the project goals with what was achieved. Then, according to the results, the project will either be deemed successful or not. + + \end{longtable} +\pagebreak diff --git a/report/lit.tex b/report/lit.tex new file mode 100644 index 0000000..008cb89 --- /dev/null +++ b/report/lit.tex @@ -0,0 +1,98 @@ +\section{Literature and Technical Review} \label{sec:lit-tech-review} + This section reviews existing technologies in the market that perform image classification. It also reviews current image classification technologies that meet the requirements of the project. This review also analyses methods that are used to distribute the learning between various physical machines, and how to spread the load so that minimal reloading of the models is required when running them. + + \subsection{Existing Classification Platforms} + There are currently some existing software as a service (SaaS) platforms that provide services similar to the ones this project will be providing. + + %Amazon provides bespoque machine learning services that if were contacted would be able to provide image classification services. Amazon provides general machine learning services \cite{amazon-machine-learning}. + + Amazon provides an image classification service called ``Rekognition'' \cite{amazon-rekognition}. This service provides multiple capabilities, including face recognition, celebrity recognition, and object recognition. One of these capabilities, called Custom Labels \cite{amazon-rekognition-custom-labels}, provides the service most similar to the one this project is about. The Custom Labels service allows users to provide custom datasets and labels; using AutoML, the Rekognition service then generates a model that allows the users to classify images according to the generated model.
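+ To give a sense of the integration such a platform expects of its users, the sketch below shows roughly how a client might query an already trained Custom Labels model. This is a minimal illustration assuming the Python boto3 SDK; the project version ARN and file name are placeholders.
+ \begin{verbatim}
+ import boto3  # AWS SDK for Python
+
+ # Placeholder ARN; a real one is produced by Rekognition after training.
+ ARN = "arn:aws:rekognition:region:account:project/example/version/1"
+
+ client = boto3.client("rekognition")
+
+ with open("photo.jpg", "rb") as image:
+     response = client.detect_custom_labels(
+         ProjectVersionArn=ARN,
+         Image={"Bytes": image.read()},
+         MinConfidence=80,  # discard low-confidence labels
+     )
+
+ for label in response["CustomLabels"]:
+     print(label["Name"], label["Confidence"])
+ \end{verbatim}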
+ + The models generated using Amazon's Rekognition do not provide ways to update the number of labels that were created without generating a new project. This involves retraining a large part of the model, which would cause significant downtime between being able to add new classes. Training models can also take from 30 minutes to 24 hours \cite{amazon-rekognition-custom-labels-training}, which could result in up to 24 hours of lag between the need to create a new label and being able to classify that label. A problem also arises when the user needs to add more than one label at the same time. For example, the user sees the need to create a new label and starts a new model training, but while the model is training a new label is also needed. The user now either stops the training of the new model and retrains a new one, or waits until the one currently running stops and trains a new one. If new classification classes are required frequently, this might not be the best platform to choose. + + %https://aws.amazon.com/machine-learning/ml-use-cases/ + + %https://aws.amazon.com/rekognition/image-features/ + + Similarly, Google also has the ``Cloud Vision API'' \cite{google-vision-api}, which provides services similar to Amazon's Rekognition. But Google's Vision API appears to be more targeted at videos than images, as indicated by their price sheet \cite{google-vision-price-sheet}. They have tag and product identifiers, where every image only has one tag or product. The product identifier system seems to work differently from Amazon's Rekognition: it works based on K-nearest neighbours, giving the user similar products rather than classification labels \cite{google-vision-product-recognizer-guide}. + + This method is more effective at allowing users to add new types of products, but as it does not give defined classes as the output, the system does not provide the target functionality that this project is aiming to achieve. + + \subsection{Requirements of Image Classification Models} + + One of the main objectives of this project is to be able to create models that can assign a class to a given image for any dataset, which means that there will be no ``one solution fits all'' to the problem. While the most complex way to solve a problem would most likely result in success, it might not be the most efficient way to achieve the results. + + This section will analyse possible models that would obtain the best results. The models for this project have to be as efficient as possible while resulting in the best accuracy possible. + + A classical example is the MNIST dataset \cite{mnist}. Models for the classification of the MNIST dataset can be either simple or extremely complex, and achieve different levels of accuracy. + For example, in \cite{mist-high-accuracy} an accuracy of $99.91\%$ was achieved by combining 3 Convolutional Neural Networks (CNNs) with different kernel sizes, changing hyperparameters, and augmenting the data, while in \cite{lecun-98} an accuracy of $95\%$ was achieved using a 2-layer neural network with 300 hidden nodes. Both of these models achieve the accuracy that is required for this project, but \cite{mist-high-accuracy} is more computationally intensive to run. When deciding which model to create, the system should choose the one that can achieve the required accuracy while taking the least amount of effort to train.
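+ As an illustration of the simpler end of that spectrum, the following sketch defines a 2-layer network with 300 hidden nodes in the spirit of \cite{lecun-98}. It assumes TensorFlow/Keras and uses modern defaults (ReLU, softmax, Adam) that differ from the original training setup.
+ \begin{verbatim}
+ import tensorflow as tf
+
+ # Small fully connected baseline for 28x28 MNIST digits.
+ model = tf.keras.Sequential([
+     tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> vector
+     tf.keras.layers.Dense(300, activation="relu"),    # 300 hidden nodes
+     tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
+ ])
+
+ model.compile(optimizer="adam",
+               loss="sparse_categorical_crossentropy",
+               metrics=["accuracy"])
+ \end{verbatim}
+ A model of this size trains in minutes on a CPU, which illustrates the training-cost trade-off against larger ensembles such as the one in \cite{mist-high-accuracy}.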
+ + For this system to work as intended, the models should be as small as possible while obtaining the accuracy required to achieve the task of classifying the classes. + + As the service might receive many requests, it needs to be able to handle as many requests as possible. This requires that the models are easy to run, and smaller models are easier to run; therefore, the system requires a balance between size and accuracy. + + % TODO talk about storage + + \subsection{Method of Image Classification Models} + + There are multiple ways of achieving image classification. The system is required to return the class that an image belongs to, which means that supervised classification methods will be used, as these are the ones that meet the requirements of the system. + + % TODO find some papers to proff this + + The system will use supervised models to classify images, using a combination of different types of models: neural networks, convolutional neural networks, deep neural networks, and deep convolutional neural networks. + + These types were chosen because they have had large success in the past in other image classification challenges, for example the ImageNet challenges \cite{imagenet}, which ranked different models on classifying a dataset of 14 million images. The contest ran from 2010 to 2017. + + The models that participated in the contest tended to use deeper and deeper convolutional neural networks. Out of the various models that were generated, there are a few landmark models that were able to achieve high accuracies, including AlexNet \cite{krizhevsky2012imagenet}, ResNet-152 \cite{resnet-152}, and EfficientNet \cite{efficientnet}. + % TODO find vgg to cite + + These models can be used in two ways in the system: they can be used to generate models via transfer learning, or their structure can be used as a basis to generate a completely new model. + + \subsection{Well-known models} + % TODO compare the models + + This section will compare the different models that did well in the ImageNet challenge. + + AlexNet \cite{krizhevsky2012imagenet} is a deep convolutional neural network that participated in the ImageNet ILSVRC-2010 contest, where it achieved a top-1 error rate of $37.5\%$ and a top-5 error rate of $17.0\%$. A variant of this model participated in the ImageNet LSVRC-2012 contest and achieved a top-5 error rate of $15.3\%$. The architecture of AlexNet consists of 5 convolutional layers, some followed by max pooling, followed by 3 dense layers. Training was done using multiple GPUs, where each GPU would run a part of each layer, with some layers connected between GPUs. During training, the model also used techniques such as label-preserving data augmentation and dropout. + While using AlexNet would probably yield the desired results, it would complicate the other parts of the service. As a platform as a service, the system needs to manage the number of resources available, and requiring 2 GPUs to train a model would cut the resources available to the system in half. + % TODO talk more about this + + ResNet \cite{resnet} is a deep convolutional neural network that participated in the ImageNet ILSVRC-2015 contest, where it achieved a top-1 error rate of $21.43\%$ and a top-5 error rate of $5.71\%$.
ResNet was created to solve the problem of the degradation of training accuracy when using deeper models. Close to the release of the ResNet paper, there was evidence that deeper networks resulted in higher accuracy \cite{going-deeper-with-convolutions, very-deep-convolution-networks-for-large-scale-image-recognition}, but increasing the depth of the network resulted in training accuracy degradation. + ResNet works by creating shortcuts between sets of layers; the shortcuts allow residual values from previous layers to be used in the upper layers, the hypothesis being that it is easier to optimize the residual mappings than the original, unreferenced mappings. + The results of the challenge proved that using the residual values improved the training of the model. + It is important to note that residual networks tend to give better results the more layers the model has. While this could have a negative impact on performance, the number of parameters per layer does not grow that steeply in ResNet when comparing it with other architectures, as it uses other optimizations such as $1\times1$ kernel sizes, which are more space efficient. Even with these optimizations, it can still achieve impressive results, which might make it a good contender to be used in the service as one of the predefined models for creating the machine learning models. + + % MobileNet + + % EfficientNet + EfficientNet \cite{efficient-net} is a deep convolutional neural network that was able to achieve $84.3\%$ top-1 accuracy while being ``$8.4x$ smaller and $6.1x$ faster on inference than the best existing ConvNet''. EfficientNets\footnote{The family of models that use the techniques described in \cite{efficient-net}.} are models that, instead of just increasing the depth or the width of the model, increase all of these parameters at the same time by a constant value. By not scaling only depth, EfficientNets can acquire more information about the images, especially when the image size is considered. + To test their results, the EfficientNet team created a baseline model which used the mobile inverted bottleneck block, MBConv \cite{inverted-bottleneck-mobilenet}, as its building block. The baseline model was then scaled using the compound method, which resulted in better top-1 and top-5 accuracy. + While EfficientNets are smaller than their non-EfficientNet counterparts, they are more computationally intensive: a ResNet-50 scaled using the EfficientNet compound scaling method is $3\%$ more computationally intensive than a ResNet-50 scaled using only depth, while improving the top-1 accuracy by $0.7\%$. + As the models will be trained and run multiple times, decreasing the computational cost might be a better overall target for sustainability than being able to offer higher accuracies. + Even though scaling using the EfficientNet compound method might not yield the best results, using some of the EfficientNets that were optimized by the team would be sensible; for example, EfficientNet-B1 is both small and efficient while still obtaining $79.1\%$ top-1 accuracy on ImageNet, and realistically the datasets that this system will process will be smaller and more scope-specific than ImageNet. + + + % \subsection{Efficiency of transfer learning} + +% \subsection{Creation Models} +% The models that I will be creating will be Convolutional Neural Network(CNN) \cite{lecun1989handwritten,fukushima1980neocognitron}.
+% The system will be creating two types of models that cannot be expanded and models that can be expanded. For the models that can be expanded, see the section about expandable models. +% The models that cannot be expanded will use a simple convolution blocks, with a similar structure as the AlexNet \cite{krizhevsky2012imagenet} ones, as the basis for the model. The size of the model will be controlled by the size of the input image, where bigger images will generate more deep and complex models. +% The models will be created using TensorFlow \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras}. These theologies are chosen since they are both robust and used in industry. + +% \subsection{Expandable Models} +% The current most used approach for expanding a CNN model is to retrain the model. This is done by, recreating an entire new model that does the new task, using the older model as a base for the new model \cite{amazon-rekognition}, or using a pretrained model as a base and training the last few layers. + + % There are also unsupervised learning methods that do not have a fixed number of classes. While this method would work as an expandable model method, it would not work for the purpose of this project. This project requires that the model has a specific set of labels which does not work with unsupervised learning which has unlabelled data. Some technics that are used for unsupervised learning might be useful in the process of creating expandable models. + + + \subsection{Conclusion} + The technical review of current systems reveals that systems already exist that can perform image classification tasks, but they do not provide friendly ways to easily expand currently existing models. + + The current methods that exist for image classification seem to have reached a classification accuracy and efficiency that make a project like this feasible.
+ + % TODO talk about web serving thechnlogies + +\pagebreak diff --git a/report/report.tex b/report/report.tex index 1b72943..f73b35f 100644 --- a/report/report.tex +++ b/report/report.tex @@ -1,74 +1,8 @@ %%% Preamble \documentclass[11pt, a4paper, twoside]{article} -\usepackage[english]{babel} % English language/hyphenation -\usepackage{url} -\usepackage{tabularx} -\usepackage{pdfpages} -\usepackage{float} -\usepackage{longtable} -\usepackage{multicol} -\usepackage{graphicx} -\usepackage{svg} -\graphicspath{ {../images for report/} } -\usepackage[margin=2cm]{geometry} - -\usepackage{datetime} -\newdateformat{monthyeardate}{\monthname[\THEMONTH], \THEYEAR} - -\usepackage{hyperref} -\hypersetup{ - colorlinks, - citecolor=black, - filecolor=black, - linkcolor=black, - urlcolor=black -} - -\usepackage{cleveref} - -%%% Custom headers/footers (fancyhdr package) -\usepackage{fancyhdr} -\pagestyle{fancyplain} - -% \fancyhead{} - -\fancypagestyle{my_empty}{% - \fancyhf{} - \renewcommand{\headrulewidth}{0pt} - \renewcommand{\footrulewidth}{0pt} -} - -\fancypagestyle{simple}{% - \fancyhf{} - \renewcommand{\headrulewidth}{0pt} - \renewcommand{\footrulewidth}{0pt} - \fancyfoot[L]{} % Empty - \fancyfoot[C]{\thepage} % Pagenmbering - \fancyfoot[R]{} % Empty -} - -\fancypagestyle{full}{% - \fancyhf{} - \renewcommand{\headrulewidth}{0.5pt} - \renewcommand{\footrulewidth}{0pt} - \fancyfoot[L]{} % Empty - \fancyfoot[C]{\thepage} % Pagenmbering - \fancyfoot[R]{} % Empty - - \fancyhead[RO,LE]{Andre Henriques} -} - -\renewcommand{\headrulewidth}{0pt} % Remove header underlines -\renewcommand{\footrulewidth}{0pt} % Remove footer underlines -\setlength{\headheight}{13.6pt} - -\newcommand*\NewPage{\newpage\null\thispagestyle{empty}\newpage} - -% numeric -\usepackage[bibstyle=ieee, citestyle=numeric, sorting=none,backend=biber]{biblatex} -\addbibresource{../main.bib} +\include{settings.tex} % Write the approved title of your dissertation \title{Image Classification as a Software Platform} @@ -80,500 +14,11 @@ %%% Begin document \begin{document} - - \pagenumbering{gobble} - - \maketitle - \pagestyle{my_empty} - - \begin{center} - \includegraphics[height=0.5\textheight]{uni_surrey} - \end{center} - - \begin{center} - \monthyeardate\today - \end{center} - - \NewPage - \pagenumbering{arabic} - - \pagestyle{simple} - - \begin{center} - \vspace*{\fill} - \section*{Declaration of Originality} - I confirm that the submitted work is my own work and that I have clearly identified and fully - acknowledged all material that is entitled to be attributed to others (whether published or - unpublished) using the referencing system set out in the programme handbook. I agree that the - University may submit my work to means of checking this, such as the plagiarism detection service - Turnitin® UK. I confirm that I understand that assessed work that has been shown to have been - plagiarised will be penalised. - \vspace*{\fill} - \end{center} - \NewPage - - \begin{center} - \vspace*{\fill} - \section*{Acknowledgements} - I would like to take this opportunity to thank my supervisor, Rizwan Asghar that helped me from the - start of the project until the end. - I am honestly thankful to him for sharing his honest and educational views on several issues related - to this report. - Additionally, I would like to thank my parents and friends for their continued support and - encouragement from the first day of the university. 
- \vspace*{\fill} - \end{center} - \NewPage - - \begin{center} - \vspace*{\fill} - \section*{Abstract} - Currently there are few automatic image classification platforms. - This project hopes to work as a guide for the creating a new image automatic classification platform. - The project goes through all the requirements for creating a platform service, and all of its needs. - \vspace*{\fill} - \end{center} - \NewPage - - \tableofcontents - \newpage - - \pagestyle{full} - - \section{Introduction} \label{sec:introduction} - % This section should contain an introduction to the problem aims and obectives (0.5 page) - - This project is to design and create a new software as a service platform, where users with no experience in machine learning, data analysis could create machine learning models to process their data. - In this project, the platform will be scoped to image classification, with the ability to be updated later with more model types. - As an easy-to-use platform needs to be able to handle: image uploads, processing, and verification; model creation, management, and expansion; and image classification. - - This report will do a brief analysis of current image classification systems, followed by an overview of the design of the system, and implementation details. The report will finish with analysis of legal, ethical and societal issues, and evaluation of results, and objectives. - - \subsection{Project Motivations} - - Currently, there are many classification tasks that are being done manually. - Thousands of man-hours are used to classify images, this task can be automated. - There are a few easy-to-use image classification systems that require low to no knowledge of image classification. - This project aims to fill that role and provide an easy-to-use system that anyone without knowledge of image classification could use. - - % These tasks could be done more effectively if there was tooling that would allow the easy creation of classification models, without the knowledge of data analysis and machine learning models creation. - % The aim of this project is to create a classification service that requires zero user knowledge about machine learning, image classification or data analysis. - % The system should allow the user to create a reasonable accurate model that can satisfy the users' need. - % The system should also allow the user to create expandable models; models where classes can be added after the model has been created. % hyperparameters, augmenting the data. - - \subsection{Project Aim} - The project aims to create a platform an easy to use where users can create different types of classification models without the users having any knowledge of image classification. - - \subsection{Project Objectives} - This project's primary objectives are to design and implement: - \begin{itemize} - \item a platform where the users can create and manage their models. - \item a system to automatically create and train models. - % \item a system to automatically expand and reduce models without fully retraining the models. - \item a system to automatically expand models without fully retraining the models. - \item an API that users can interact programmatically with the system. - \end{itemize} - - This project extended objectives are to: - \begin{itemize} - % \item Create a system to automatically to merge modules to increase efficiency. - \item Create a system to distribute the load of training the model's among multiple services. 
- \end{itemize} - - \subsection{Success Criteria} - This project can be considered successful when: - \begin{itemize} - \item A user can upload images, train a model on those images, and evaluate images using the web interface. - \item A user can perform the same tasks, via the API service. - \end{itemize} - - \subsection{Project Structure} - The report on the project shows the development and designs stages of the project. With each section addressing a part of the design and development process. - - \begin{longtable}{p{7cm} p{8cm}} - \hyperref[sec:introduction]{Introduction} & The introduction section will do a brief introduction of the project and its objectives. \\ - \hyperref[sec:lit-tech-review]{Literature and Technical Review} & The Literature and Technical Review section will introduce some current existing projects that are similar to this one, and introduce some technologies that can be used to implement this project. \\ - \hyperref[sec:sanr]{Service Analysis and Requirements} & This section will analyse the project requirements. The section will define design requirements that the service will need to implement to be able to achieve the goals that were set up. \\ - \hyperref[sec:sd]{Service Design} & This section will discuss how a service could be designed that it matches the requirements of the service. \\ - \hyperref[sec:sd]{Service Implementation} & Information on how the design of the system was turned into software is in this section. \\ - \hyperref[sec:lsec]{Legal, Societal, and Ethical Considerations} & This section will cover potential legal societal and ethical issues that might arise from the service and how they are mitigated. \\ - \hyperref[sec:crpo]{Critical Review of Project Outcomes} & In this section, the project goals will compare to what was achieved. Then according to the results, the project will either be deemed successful or not. - - \end{longtable} - - - - \pagebreak - - \section{Literature and Technical Review} \label{sec:lit-tech-review} - This section reviews existing technologies in the market that do image classification. It also reviews current image classification technologies, which meet the requirements for the project. This review also analyses methods that are used to distribute the learning between various physical machines, and how to spread the load so minimum reloading of the models is required when running the model. - - \subsection{Existing Classification Platforms} - There are currently some existing software as a service (SaaS) platforms that do provide similar services to the ones this will project will be providing. - - %Amazon provides bespoque machine learning services that if were contacted would be able to provide image classification services. Amazon provides general machine learning services \cite{amazon-machine-learning}. - - Amazon provides an image classification service called ``Rekognition'' \cite{amazon-rekognition}. This service provides multiple services from face recognition, celebrity recognition, object recognition and others. One of these services is called custom labels \cite{amazon-rekognition-custom-labels} that provides the most similar service, to the one this project is about. The custom labels service allows the users to provide custom datasets and labels and using AutoML the Rekognition service would generate a model that allows the users to classify images according to the generated model. 
- - The models generated using Amazon's Rekognition do not provide ways to update the number of labels that were created, without generating a new project. This will involve retraining a large part of the model, which would involve large downtime between being able to add new classes. Training models also could take 30 minutes to 24 hours, \cite{amazon-rekognition-custom-labels-training}, which could result in up to 24 hours of lag between the need of creating a new label and being able to classify that label. A problem also arises when the uses need to add more than one label at the same time. For example, the user sees the need to create a new label and starts a new model training, but while the model is training a new label is also needed. The user now either stops the training of the new model and retrains a new one, or waits until the one currently running stops and trains a new one. If new classification classes are required with frequency, this might not be the best platform to choose. - - %https://aws.amazon.com/machine-learning/ml-use-cases/ - - %https://aws.amazon.com/rekognition/image-features/ - - Similarly, Google also has ``Cloud Vision API'' \cite{google-vision-api} which provides similar services to Amazon's Rekognition. But Google's Vision API appears to be more targeted at videos than images, as indicated by their price sheet \cite{google-vision-price-sheet}. They have tag and product identifiers, where every image only has one tag or product. The product identifier system seams to work differently than the Amazon's Rekognition and worked based on K neighbouring giving the user similar products on not classification labels \cite{google-vision-product-recognizer-guide}. - - This method is more effective at allowing users to add new types of products, but as it does not give defined classes as the output, the system does not give the target functionality that this project is aiming to achieve. - - \subsection{Requirements of Image Classification Models} - - The of the main objectives of this project are to be able to create models that can give a class given an image for any dataset. Which means that there will be no ``one solution fits all to the problem''. While the most complex way to solve a problem would most likely result in success, it might not be the most efficient way to achieve the results. - - This section will analyse possible models that would obtain the best results. The models for this project have to be the most efficient as possible while resulting in the best accuracy as possible. - - A classical example is the MNIST Dataset \cite{mnist}. Models for the classification of the MNIST dataset can be both simple or extremely complex and achieve different levels of complexity. - For example, in \cite{mist-high-accuracy} an accuracy $99.91\%$, by combining 3 Convolutional Neural Networks (CNNs), with different kernel sizes and by changing hyperparameters, augmenting the data, and in \cite{lecun-98} an accuracy of $95\%$ was achieved using a 2 layer neural network with 300 hidden nodes. Both these models achieve the accuracy that is required for this project, but \cite{mist-high-accuracy} are more computational intensive to run. When deciding when to choose what models they create, the system should choose to create the model that can achieve the required accuracy while taking the leas amount of effort to train. 
- - % TODO fix the inglish in these sentance - The models for this system to work as indented should be as small as possible while obtaining the required accuracy required to achieve the task of classification of the classes. - - As the service might need to handle many requests, it needs to be able to handle as many requests as possible. This would require that the models are easy to run, and smaller models are easier to run; therefore the system requires a balance between size and accuracy. - - % TODO talk about storage - - \subsection{Method of Image Classification Models} - - There are all multiple ways of achieving image classification, the requirements of the system are that the system should return the class that an image that belongs to. Which means that we will be using supervised classification methods, as these are the ones that meet the requirements of the system. - - % TODO find some papers to proff this - - The system will use supervised models to classify images, using a combination of different types of models, using neural networks, convolution neural networks, deed neural networks and deep convolution neural networks. - - These types were decided as they have had a large success in the past in other image classification challenges, for example in the ImageNet challenges \cite{imagenet}, which has ranked different models in classifying a 14 million images. The contest has been running since 2010 to 2017. - - The models that participated in the contest tended to use more and more Deep convolution neural networks, out of the various models that were generated there are a few landmark models that were able to achieve high accuracies, including AlexNet \cite{krizhevsky2012imagenet}, ResNet-152 \cite{resnet-152}, EfficientNet \cite{efficientnet}. - % TODO find vgg to cite - - These models can be used in two ways in the system, they can be used to generate the models via transfer learning and by using the model structure as a basis to generate a complete new model. - - \subsection{Well-known models} - % TODO compare the models - - This section will compare the different models that did well in the image net challenge. - - AlexNet \cite{krizhevsky2012imagenet} is a deep convolution neural network that participated in the ImageNet ILSVRC-2010 contest, it achieved a top-1 error rate of $37.5\%$, and a top-5 error rate of $37.5\%$. A variant of this model participated in the ImageNet LSVRC-2012 contest and achieved a top-5 error rate of $15.3\%$. The architecture of AlexNet consists of 5 convolution layers that are run separately followed by 3 dense layers, some layers are followed by Max pooling. The training the that was done using multiple GPUs, one GPU would run the part of each layer, and some layers are connected between GPUs. The model during training also contained data argumentation techniques such as label preserving data augmentation and dropout. - While using AlexNet would probably yield desired results, it would complicate the other parts of the service. As a platform as a service, the system needs to manage the number of resources available, and requiring to use 2 GPUs to train a model would limit the number of resources available to the system by 2-fold. - % TODO talk more about this - - ResNet \cite{resnet} is a deep convolution neural network that participated in the ImageNet ILSVRC-2015 contest, it achieved a top-1 error rate of $21.43\%$ and a top-5 error rate of $5.71\%$. 
ResNet was created to solve a problem, the problem of degradation of training accuracy when using deeper models. Close to the release of the ResNet paper, there was evidence that deeper networks result in higher accuracy results, \cite{going-deeper-with-convolutions, very-deep-convolution-networks-for-large-scale-image-recognition}. but the increasing the depth of the network resulted in training accuracy degradation. - % This needs some work in terms of gramar - ResNet works by creating shortcuts between sets of layers, the shortcuts allow residual values from previous layers to be used on the upper layers. The hypothesis being that it is easier to optimize the residual mappings than the linear mappings. - The results proved that the using the residual values improved training of the model, as the results of the challenge prove. - It's important to note that using residual networks tends to give better results, the more layers the model has. While this could have a negative impact on performance, the number of parameters per layer does not grow that steeply in ResNet when comparing it with other architectures as it uses other optimizations such as $1x1$ kernel sizes, which are more space efficient. Even with these optimizations, it can still achieve incredible results. Which might make it a good contender to be used in the service as one of the predefined models to use to try to create the machine learning models. - - % MobileNet - - % EfficientNet - EfficientNet \cite{efficient-net} is a deep convolution neural network that was able to achieve $84.3\%$ top-1 accuracy while ``$8.4x$ smaller and $6.1x$ faster on inference than the best existing ConvNet''. EfficientNets \footnote{the family of models that use the thecniques that described in \cite{efficient-net}} are models that instead of the of just increasing the depth or the width of the model, we increase all the parameters at the same time by a constant value. By not scaling only depth, EfficientNets can acquire more information about the images, specially the image size is considered. - To test their results, the EfficientNet team created a baseline model which as a building block used the mobile inverted bottleneck MBConv \cite{inverted-bottleneck-mobilenet}. The baseline model was then scaled using the compound method, which resulted in better top-1 and top-5 accuracy. - While EfficientNets are smaller than their non-EfficientNet counterparts, they are more computational intensive, a ResNet-50 scaled using the EfficientNet compound scaling method is $3\%$ more computational intensive than a ResNet-50 scaled using only depth while improving the top-1 accuracy by $0.7\%$. - And as the model will be trained and run multiple times decreasing the computational cost might be a better overall target for sustainability then being able to offer higher accuracies. - Even though scaling using the EfficientNet compound method might not yield the best results using some EfficientNets what were optimized by the team to would be optimal, for example, EfficientNet-B1 is both small and efficient while still obtaining $79.1\%$ top-1 accuracy in ImageNet, and realistically the datasets that this system will process will be smaller and more scope specific than ImageNet. - - - % \subsection{Efficiency of transfer learning} - - % \subsection{Creation Models} - % The models that I will be creating will be Convolutional Neural Network(CNN) \cite{lecun1989handwritten,fukushima1980neocognitron}. 
- % The system will be creating two types of models that cannot be expanded and models that can be expanded. For the models that can be expanded, see the section about expandable models. - % The models that cannot be expanded will use a simple convolution blocks, with a similar structure as the AlexNet \cite{krizhevsky2012imagenet} ones, as the basis for the model. The size of the model will be controlled by the size of the input image, where bigger images will generate more deep and complex models. - % The models will be created using TensorFlow \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras}. These theologies are chosen since they are both robust and used in industry. - - % \subsection{Expandable Models} - % The current most used approach for expanding a CNN model is to retrain the model. This is done by, recreating an entire new model that does the new task, using the older model as a base for the new model \cite{amazon-rekognition}, or using a pretrained model as a base and training the last few layers. - - % There are also unsupervised learning methods that do not have a fixed number of classes. While this method would work as an expandable model method, it would not work for the purpose of this project. This project requires that the model has a specific set of labels which does not work with unsupervised learning which has unlabelled data. Some technics that are used for unsupervised learning might be useful in the process of creating expandable models. - - - \subsection{Conclusion} - The technical review of current systems reveals that there are current systems that exist that can perform image classification tasks, but they are not friendly in ways to easily expand currently existing models. - - The current methods that exist for image classification seem to have reached a classification accuracy and efficiency that make a project like this feasible. - - % TODO talk about web serving thechnlogies - - \pagebreak - - - - - - - - - - - \section{Service Analysis and Requirements} \label{sec:sanr} - Understanding the project that is being built is critical in the software deployment process, this section will look into the required parts for the project to work. - - As a SaaS project, there are some required parts that the project needs to have: - \begin{itemize} - \item{Web App} - \item{API} - \item{Server Management} - \item{Dataset Management} - \item{Model Management} - \end{itemize} - - \subsection{Service Structure} - The service should be able to respond to any load that is given to it. This will require the ability to scale depending on the number of requests that the service is receiving. - Therefore, the service requires some level of distributivity. - - The service, because of the machine learning tasks, also requires being able to have access to machines that can use GPUs. - As the machines that have. - - The service needs to have some level of distributivity, this requirement exists because of the expensive nature of machine learning training. - It would be unwise to perform machine learning training on the same machine that the main web server is running, as it would starve that server of resources. - - For a separation of concerns, data should also be on a different server. - - \subsection{Resources} - - As the service contains more than one resource to manage, it should be able to track what are the resources it has available and distribute the load accordingly. - - One example of this would be the service has two servers with GPU available to them. 
- One of the servers contains a more capable GPU, that server should be used to train models as that requires more computational power. - - Storage is another resource that the service will have to handle. - The service needs to keep track of the model files and uploaded files. - Alternatively, the service should be able to mount other servers disks and get the images directly from the other service. - - - \subsection{Web App} - - The user of the application should be able to interact with the platform using a graphical user interface(GUI). - There are multiple possible ways for the user to interact with services like web, mobile or desktop applications. - - A web application is the most reasonable solution for this service. - The main way to interact with this service would be via an API, the API that the system will provide would be an HTTPS API \ref{sec:anal-api}, since the service already has a web oriented API, it makes the most sense for the GUI to be a web based as well. - - The web app is where users can interact with the service. - Users should be able to manage models, model data, API keys, API usage. - - The user should be able to access the web app and use it to: - \begin{itemize} - \item{Configure model} - \item{Manage datasets} - \item{Configure API tokens} - \item{See API usage} - %TODO write more - \end{itemize} - - For administrator purposes, the web application should also allow the management of available compute resources to the system. - - \subsection{API} \label{sec:anal-api} - - As a software as a service platform, the users of the platform will mainly interact via the API. - The user would set up the machine learning model using the web interface and then configure their application, to use a token, to securely interact with the API. - - There exists multiple architectural styles for APIs, using a REST API would be the proper architectural style as it is the most common \cite{json-api-usage-stats}, allowing for the most compatibility with other services. - - The API should allow users to the most used features of the app, such as: - \begin{itemize} - \item{Uploading new images for the dataset} - \item{Request training of the model} - \item{Running an image in the model} - \item{Marking previous predictions as incorrect} - %TODO write more - \end{itemize} - - \subsection{Resource Management} - - For optimal functionality, the service requires the management of various compute resources. - - This separation of compute resources is required because machine learning is computed and memory intensive. - Running this resource intensive operations on the same server that is running the main API could cause increase latency or downtime in the API, which would not be ideal. - - The service should be able to decide where to distribute tasks. - The tasks should be distributed according to the resources that the task needs. - The tasks need to be submitted to servers in an organized manner. - Repeated tasks should be sent to the same server to optimize the usage of the resources, as this would improve the efficiency of the service by preventing, for example, reload of data. - For example, sending a training workload to a server that more GPU resources available to it while allowing slower GPU servers to run the models for prediction. - - The service should also keep tract of the space available to it. - The service must decide which images, that it manages, to keep and which ones to delete. 
- It should also keep track of other services images, and control the access to them, and guarantee that the server that is closeted to the recourses is that has priority on tasks related to those recourses. - - \subsection{Data Management} - - The service needs to manage various kinds of data. - - The first kind of data the service needs to manage is user data. - This is data that identifies a user and allows the user to authenticate with the service. - - A future version of this service could possibly also store payment information. - This information would be used to charge for the usage of the service, although this is outside the scope of this project. - - The second kind of data that has to be managed is the user images. - These images could be either uploaded to the service, or stored on the users' devices. - The service should manage access to remote images, and keep track of local images. - - The last kind of data that the service has to keep track of are model definitions and model weights. - These can be sizable files, which makes it important for the system to distribute them precisely, allowing the files to be closer to the servers that need them the most. - - \subsection{Conclusion} - This section shows that there are requirements that need to be met for the system to work as indented. These requirements range from usability requirements, to system-level resource management requirements. - - The service needs to be easy to use by the user, while being able to handle loads from both the website and API requests. - The service requires the ability to be able to scale up to the loads that is being provided with and keep track and manage resources that the user or the service created. - - It also requires keeping track of computational resources that are available to it, so it does not cause deadlocks. For example, using all of its GPU recourses to train a model while there are classification tasks to be done. - - The next section will go thought the process of the implementation of an application that implements a subset of these design requirements, with some limitations that will be explained. - - - \pagebreak - - - - - - - - - - - - - - - \section{Service Design} \label{sec:sd} - - This section will discuss the design of the service. - The design on this section is an ideal design solution, where no time limitations or engineering limitations were considered. - This section tries to provide a description of a designed solution that would allow for the best user experience possible. - - The design proposed in this section can be viewed as a scope version of this project, and the \hyperref[sec:si]{Service Implementation} section will discuss how the scope was limited so that the service would achieve the primary goals of the project while following the design. - - \subsection{Structure of the Service} - - The service is designed to be a 4 tier structure: - \begin{itemize} - \item{Presentation Layer} - \item{API Layer} - \item{Worker Layer} - \item{Data Layer} - \end{itemize} - - This structure was selected because it allows separation of concerns, to happen based on the resources required by that layer. - - The presentation layer requires interactivity of the user, and therefore it needs to be accessible from the outside, and be simple to use. - The presentation can be either implemented as a webpage working directly on the server that is running the API, or it can be implemented as a separate web app that uses the API to interface directly. 
-
-        The API layer is one of the most important parts of the service, as it is going to be the most used way to interact with the service.
-        The user can use the API to control their entire model process, from importing to classification of images.
-
-        The Worker layer consists of a set of servers available to perform GPU loads.
-
-        The Data layer consists of stored images, models, and user data.
-
-
-    \subsection{Interacting with the service}
-
-        As a software platform, this project requires a way for users to interact with the service.
-        This interface is mainly intended to be a control and monitoring interface.
-        The user would use the interface to set up and manage the models; most other interactions would then happen via the API.
-
-        There are no specific restrictions on what the interface can be.
-        The interface can either be a webpage, a desktop application or a CLI tool.
-        It makes the most sense for the application to be a web application, since this project is software as a service.
-        Most of the interactions will be made by the users' services programmatically via the API, which will be an HTTPS REST API.
-
-        Independently of the kind of application it is, it needs to allow users to fully control their data in an easy to use and understand way.
-        The application should allow users to:
-        %TODO add more
-        \begin{multicols}{2}
-            \begin{itemize}
-                \item Manage access tokens.
-                \item Upload images for training.
-                \item Delete images.
-                \item Request model training.
-                \item Delete models.
-                \item Classify images.
-                \item See previous classification results.
-                \item Keep track of model accuracy.
-            \end{itemize}
-        \end{multicols}
-
-
-    \subsection{API}
-
-        As a software as a service, one of the main requirements is to be able to communicate with other services.
-        The API provides the simplest way for other services to interact with this service.
-
-        The API needs to enable users to perform any task that is required for the service to work.
-        It needs to be able to:
-        \begin{multicols}{2}
-            \begin{itemize}
-                \item Upload images for training.
-                \item Delete images.
-                \item Classify images.
-                \item See previous classification results.
-            \end{itemize}
-        \end{multicols}
-
-        The API should be implemented to handle large numbers of simultaneous requests, and to respond to those requests as fast as possible.
-        The API should be implemented such that it can be expanded easily, so that future improvements can happen.
-
-        The API should be consistent and easy to use, and information on how to use the API should be available to possible users.
-
-        As mentioned in \ref{sec:anal-api}, most services use REST JSON APIs to communicate with each other. Therefore, to make this service as compatible as possible with other services, this service should also implement a REST JSON API.
-
-    \subsection{Generation of Models}
-
-        The service should use any means available to generate models; such means can be:
-        \begin{multicols}{2}
-            \begin{itemize}
-                \item Templating.
-                \item Transfer Learning.
-                \item Database Search.
-                \item Pretrained Models with classification heads.
-            \end{itemize}
-        \end{multicols}
-
-
-    \subsection{Models Training}
-        % The Training process follows % TODO have a flow diagram
-
-        Model training should be independent of image classification. Training a model should not affect any ongoing classification.
-        The system could use multiple ways to achieve this, such as:
-
-        \begin{multicols}{2} % TODO think of more ways
-            \begin{itemize}
-                \item Separating the training to different machines.
-                \item Controlling the number of resources that the training machine can utilize.
-                \item Controlling the time when the shared training and inference machine can be used for training.
-                \item Allowing users to have their own ``Runners'' where the training tasks can happen.
-            \end{itemize}
-        \end{multicols}
-
-
-    \subsection{Conclusion}
-
-        This section introduced multiple possible design options that a service intending to achieve automated image classification can follow to implement a robust system.
-
-        The next section will discuss how the system was implemented and which of the possible design options were chosen when implementing the system.
-
-    \pagebreak
-
-
-
-
-
-
-
-
-
-
+    \include{start}
+    \include{intro}
+    \include{lit}
+    \include{sanr}
+    \include{design}

    \section{Service Implementation} \label{sec:si}
@@ -599,7 +44,7 @@

    \subsection{Web Application} \label{web-app-design}

-        The web application (WEB App) is the chosen GUI to control the service.
+        The web application (Web App) is the chosen user interface to control the service.

        This subsection discusses details of the user flows and implementation of the application.
@@ -607,9 +52,9 @@

        The Web App is a single-page application (SPA).
        The SPA architecture is one of the most prevalent architectures in use nowadays.
-        It allows for the fast transitions between pages without having a full reload of the browser happening.
+        It allows for fast transitions between pages without a full reload of the website happening.

-        Since this in this project the API and the Web APP are separated, it makes the use of server-side rendering more complex and less efficient.
+        Since this implementation separated the API and the Web App, it makes the use of server-side rendering more complex and less efficient.
        As the server would have to first request information from the API to build the web page, and then send it to the user's device.
        Therefore, the system will use client-side rendering only, allowing the user's device to request the API directly for more information.
@@ -621,12 +66,12 @@

        I will be using Svelte with the SvelteKit framework \cite{svelte-kit}, which greatly improves the developer experience.
-        SvelteKit allows for the early creating for SPA with a good default web router.
+        SvelteKit allows for the easy creation of SPAs with a good default web router.
        The static adapter will be used to generate static HTML and JavaScript files, which will be hosted by an NGINX proxy \cite{nginx}.

-        The web application uses the API to control the functionality of the service, this design is advantages.
-        It allows users of the application to do everything that the application does with the API, which is ideal in a SaaS project.
+        The web application uses the API to control the functionality of the service.
+        This implementation allows users of the application to do everything that the application does with the API, which is ideal in a SaaS project.

        \subsubsection*{Service authentication} \label{sec:impl:service-auth}
@@ -642,7 +87,6 @@

        The password is stored hashed using bcrypt \cite{bycrpt}.
        In the future, other methods of authentication might be provided, such as Google's OAuth.
-        Once logged In, the user will be able to use the application and manage tokens that were emitted for this application.
        This allows the user to manage what services have access to the account.
        % and the usage that those services have used.
@@ -671,27 +115,26 @@

        \textbf{TODO add image}

        The user is then shown the model page, which contains all the information about a model.
-        This page will contain some tabs, each page gives different inside about the model.
+        This page will contain some tabs; each tab gives a different insight into the model.
        The main page is designed to contain only actions relevant to the task it is trying to achieve.
        For example, if there are no images added to the model, the user will be prompted to add an image.
-        Or if the model has been trained and the user can submit webpages, then the user will have an option to submit and image.
+        Or, if the model has been trained and the user can submit images, then the user will have an option to submit an image.

        \textbf{TODO add image}

-        Currently, the system does not support resizing of images that are different from the one uploaded at this step during evaluation.
+        Currently, the system does not support resizing of images that are different from the one uploaded at the creation step.
        This was done to guarantee that the image that the user wants to classify is unmodified.
        This moves the responsibility of cropping and resizing to the user.
        In the future, systems could be implemented that allow the user to select how an image should be cropped.

        The second step is uploading the rest of the dataset.
        This can be done via the main page or via the dataset tab that becomes available when the data of the model is first uploaded.
-        In this tab, the user can add and remove images, as well as creating new classes for the model.
-        The page also shows some useful information, such as the distribution.
+        In this tab, the user can add and remove images, as well as create new classes for the model.
+        The page also shows some useful information, such as the distribution of the dataset.

-        This information can be useful to more useful users that might decide to gather more data to balance the dataset.
+        This information can be useful to more advanced users, who might decide to gather more data to balance the dataset.

-        The next step in the training process is for the user to upload the rest of the dataset.
-        The user uploads a zip file that contains a set of classes and images corresponding to that class.
+        To upload the rest of the dataset, the user can upload a zip file that contains a set of classes and the images corresponding to each class.
        That zip file is processed, and images and classes are created.
        This process was originally slow, as the system did not have the capability to parallelize the import of the images, but parallel importing was later implemented and the import process was improved.
@@ -702,24 +145,25 @@

        \textbf{TODO add image}

        After all the images that are required for training are uploaded, the user can go to the training step.
-        This step will appear both in the main tab of model page. Once the user instructs the system to start training, the model page will become the training page, and it will show progress of the training of the model.
+        This step will appear both in the main tab of the model page and in the dataset tab. Once the user instructs the system to start training, the model page will become the training page, and it will show the progress of the training of the model.
        During this step, the system automatically trains the model.
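+        To make this flow concrete, the sketch below drives the same training step programmatically.
+        The endpoint paths, JSON field names, and progress shape are illustrative assumptions rather than the service's confirmed API surface; only the ``token'' header is taken from the actual implementation.
+\begin{verbatim}
+// Hypothetical sketch: start training, then poll progress like the web page does.
+package main
+
+import (
+    "bytes"
+    "encoding/json"
+    "fmt"
+    "net/http"
+    "time"
+)
+
+const base = "https://service.example/api" // assumed base URL
+
+func call(method, url string, body []byte) (*http.Response, error) {
+    req, err := http.NewRequest(method, url, bytes.NewReader(body))
+    if err != nil {
+        return nil, err
+    }
+    req.Header.Set("token", "my-named-token") // a token from the settings page
+    req.Header.Set("Content-Type", "application/json")
+    return http.DefaultClient.Do(req)
+}
+
+func main() {
+    // Instruct the system to start training (hypothetical endpoint).
+    body, _ := json.Marshal(map[string]string{"model_id": "my-model"})
+    if _, err := call("POST", base+"/models/train", body); err != nil {
+        panic(err)
+    }
+
+    // Poll the training progress until the model is ready.
+    for {
+        resp, err := call("GET", base+"/models/my-model/progress", nil)
+        if err != nil {
+            panic(err)
+        }
+        var p struct {
+            Epoch    int     `json:"epoch"`
+            Accuracy float64 `json:"accuracy"`
+            Done     bool    `json:"done"`
+        }
+        json.NewDecoder(resp.Body).Decode(&p)
+        resp.Body.Close()
+        fmt.Printf("epoch %d, accuracy %.2f\n", p.Epoch, p.Accuracy)
+        if p.Done {
+            break
+        }
+        time.Sleep(5 * time.Second)
+    }
+}
+\end{verbatim}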
        After the system trains a model that meets the specifications set by the user, the service will make the model available for the user to use.
        When the model has finished training, the user can use it to run inference tasks on images.
        To achieve this, the user can either use the API to submit a classification task or use the tasks tab in the web platform.
-        In the tasks tab, the user can see current, previous tasks.
+        In the tasks tab, the user can see current and previous tasks.
        The users can see what tasks were performed and their results.
        The user can also inform the service whether the task that was performed returned the correct results.
        This information can be used to keep track of the real accuracy of the model, and to see if the model needs refinement.
-        The regiment would use the new data that was uploaded with the new data that was uploaded.
+        The system can add the images that were classified incorrectly back to the original dataset, to be used if the model is retrained.
+
        \textbf{TODO add image}

        \subsubsection*{Advanced Model Management}

-        \begin{figure}[h!]
+        \begin{figure}[H]
            \centering
            \includegraphics[width=\textwidth]{models_advanced_flow}
            \caption{Simplified Diagram of Advanced Model management}
@@ -732,7 +176,7 @@

        The user would follow all the steps that are required for normal model creation and training.
        At the end of the process, the user will be able to add new data to the model and retrain it.

-        To achieve that, the user would simply to the data tab and create a new class.
+        To achieve that, the user would simply go to the data tab and create a new class.
        Once a new class is added, the webpage will inform the user that the model can be retrained.
        The user might choose to retrain the model right away, or to add more new classes and retrain later.
@@ -757,10 +201,10 @@

    \subsection{API}

-        The API was implemented as multithreaded go \cite{go} server.
+        The API was implemented as a multithreaded Go \cite{go} server.
-        The application on launch loads a configuration file, connects to the database.
+        The application, on launch, loads a configuration file and connects to the database.
        After connecting to the database, the application performs pre-startup checks to make sure that no tasks that were interrupted by a server restart were left in an unrecoverable state.
-        Once the checks are done, the application creates workers (which will be covered in the next subsection), which when completed the API server is finally started up.
+        Once the checks are done, the application creates the workers, which will be explained in section \ref{impl:runner}; once that is complete, the API server is finally started up.

        Information about the API is shown around the web page, so that the user can see it right next to where they would normally perform the corresponding action, providing a good user experience.
@@ -784,23 +228,23 @@

        \subsubsection*{Authentication}

-        The API allows users to login, which emits a token, and manually create tokens.
+        The API allows users to log in, which emits a token, and, once logged in, to manually create tokens.
        While using the web application, this is done transparently, but it can also be done manually via the respective API calls.
        During the login process, the service checks to see if the user is registered and if the password provided during the login matches the stored hash.
        Upon verifying the user, a token is emitted.
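+        Below is a minimal sketch of how the API could verify the emitted token on each request; the token store interface is an assumption made for illustration, while the header name matches the one the API actually uses.
+\begin{verbatim}
+package api
+
+import "net/http"
+
+// TokenStore is an assumed lookup interface over the emitted tokens.
+type TokenStore interface {
+    // UserFor returns the user ID for a valid, unexpired token.
+    UserFor(token string) (userID string, ok bool)
+}
+
+// RequireToken wraps a handler and rejects requests whose "token"
+// header does not resolve to a known user.
+func RequireToken(store TokenStore, next http.Handler) http.Handler {
+    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+        if _, ok := store.UserFor(r.Header.Get("token")); !ok {
+            http.Error(w, "invalid or expired token", http.StatusUnauthorized)
+            return
+        }
+        next.ServeHTTP(w, r)
+    })
+}
+\end{verbatim}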
-        Once a user is logged in they can then create mode tokens as seen in \ref{sec:impl:service-auth}.
+        Once a user is logged in, they can then create more tokens, as seen in section \ref{sec:impl:service-auth}.
        While using the API, the user should only use tokens created in the settings page, as those tokens are named and have controllable expiration dates.
-        This is advantageous from a security perspective as the user can manage who has access to the API.
-        If the token gets leaked, the user can then process to delete the named token, to guarantee the safety of his access.
+        This is advantageous from a security perspective, as the user can manage who has access to the API.
+        If a token gets leaked, the user can then delete the named token, to guarantee the safety of their access.

        The token can then be used in the ``token'' header as proof to the API that the user is authenticated.

    \subsection{Generation and Training of Models}

-        Models generation happens in the API server, the API server analyses what the image that provided and generates several model candidates accordingly.
+        Model generation happens on the API server: the server analyses the images that were provided and generates several model candidates accordingly.
        The number of model candidates is user defined.
        The model generation subsystem decides the structure of the model candidates based on the image size: it prioritizes smaller models for smaller images and convolutional networks for bigger images.
        The depth is controlled both by the image size and the number of outputs; model candidates that need to be expandable are generated with bigger values to account for possible new classes.
@@ -810,18 +254,18 @@

        Model training happens in a runner; more information about runners is given in section \ref{impl:runner}.
        % TODO explore this a bit more
-        Model training was implemented using TensorFlow \cite{tensorflow}.
+        Model training was implemented using TensorFlow. % \cite{tensorflow}.
-        Normally when using go with machine learning only the prediction is run in go and the training happens in python.
+        Normally, when using Go with machine learning, only the prediction is run in Go and the training happens in Python.
        The training system was implemented that way.
-        The runner when it needs to perform training it generates a python script tailored to the model candidate that needs to be trained then runs the that python script, and monitors the result of the python script.
+        When the runner needs to perform training, it generates a Python script tailored to the model candidate that needs to be trained, then runs that script and monitors its results.
        While the Python script is running, it uses the API to inform the runner of epoch and accuracy changes.

        During training, the runner takes a round-robin approach.
        It trains every model candidate for a few epochs, then compares the different model candidates.
-        If there is too much operation from the best model to the worst model, then the system might decide not to continue training a certain candidate and focus the training resources on candidates that are performing better.
+        If there is too much difference in accuracy between the best model and the worst model, then the system might decide not to continue training a certain candidate and to focus the training resources on candidates that are performing better.

        Once one candidate achieves the target accuracy, which is user defined, the training system stops training the model candidates.
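+        As an illustration of the tailored-script approach described above, the sketch below renders a Python script from a template, runs it, and watches the process; the template body and type names are simplified stand-ins for the real generation logic.
+\begin{verbatim}
+package runner
+
+import (
+    "os"
+    "os/exec"
+    "text/template"
+)
+
+// A drastically simplified stand-in for the real tailored script.
+var trainScript = template.Must(template.New("train").Parse(`
+import tensorflow as tf
+EPOCHS = {{.Epochs}}
+TARGET_ACCURACY = {{.Target}}
+# ... build and train this specific model candidate ...
+`))
+
+type Candidate struct {
+    Epochs int
+    Target float64
+}
+
+// trainCandidate renders the script for one candidate, runs it, and
+// reports whether the process finished successfully.
+func trainCandidate(c Candidate) error {
+    f, err := os.CreateTemp("", "train-*.py")
+    if err != nil {
+        return err
+    }
+    defer os.Remove(f.Name())
+
+    if err := trainScript.Execute(f, c); err != nil {
+        return err
+    }
+    f.Close()
+
+    cmd := exec.Command("python3", f.Name())
+    cmd.Stdout = os.Stdout // the real runner receives progress via the API
+    cmd.Stderr = os.Stderr
+    return cmd.Run() // a non-zero exit marks this candidate as failed
+}
+\end{verbatim}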
-        The model candidate that achieved the target accuracy is then promoted to the model, the other candidates are removed.
+        The model candidate that achieved the target accuracy is then promoted to the model, and the other candidates are removed.
        The model can now be used to predict labels for any image that the user decides to upload.

        \subsubsection*{Expandable Models}
@@ -831,7 +275,7 @@

        The models are then trained using the same technique.
        At the end, after the model candidate has been promoted to the full model, the system starts another Python process that loads the newly generated model and splits it into a base model and a head model.
-        With this two separate models, the system is now ready to start classifying new images.
+        With these two separate models, the system is now ready to start classifying new images.

        \subsubsection*{Expanding Expandable Models}
@@ -844,24 +288,63 @@

    \subsection{Model Inference}

-        \subsubsection*{Implementation Details}
+        Model inference also runs inside a runner.
+        However, inference runs natively in Go, instead of using tailored Python scripts as training does.

-        The model definitions are generated in the go API and then stored in the database.
-        The runner then loads the definition from the API and creates a model based on that.
+        Once a classification request is received by the API, the uploaded image is checked to see if the model will accept it.
+        If the model is capable of accepting the image, the image is temporarily saved to disk and a classification task is created.

-        \subsubsection*{Inferencing images}
+        Eventually, a runner will pick up the classification task.
+        Upon pickup, the runner will load the model and run the image through it; the results are then matched against the stored classes in the database.
+        The system then stores the class with the highest probability of matching the image, according to the model, in the results of the task.

-        TODO
+        The user can then use the API to obtain the results of the task.
+
+        \subsubsection*{Expandable Models}
+
+        For expandable models, the inference step is very similar to that of normal models.
+        The runner first loads the base model and runs the image through it; the resulting features are then stored.
+        Once the features are obtained, the system runs those features through the various possible heads; the results are matched against the stored classes, and the class with the highest probability is selected.

    \subsection{Runner} \label{impl:runner}
+
+        ``Runner'' is the name given to the software that can perform CPU- or GPU-intensive tasks without halting the main API server.
+
+        Architecturally, runners were implemented as a controller and worker pattern.
+        When the main application starts, the system creates an orchestrator: a piece of software that decides what work each runner is doing.
+        The orchestrator runs on a goroutine created at startup.
+        During startup, the orchestrator configures itself by obtaining values from the configuration file; those values define the number of local runners.
+
+        These runners, which are started up by the orchestrator, act as local runners: runners that run on the same machine as the main server.
+        Local runners are useful when running the main server on a machine that also has GPU power available to it, and during testing.
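+        A minimal sketch of this controller and worker pattern is shown below, with simplified task and event types standing in for the real ones.
+\begin{verbatim}
+package orchestrator
+
+type Task struct{ ID int }
+
+type Event struct {
+    Runner int
+    Err    error // nil on success
+}
+
+// startLocalRunner launches one local runner goroutine that receives
+// tasks and reports back over the shared event channel.
+func startLocalRunner(id int, tasks <-chan Task, events chan<- Event) {
+    go func() {
+        for t := range tasks {
+            events <- Event{Runner: id, Err: run(t)}
+        }
+    }()
+}
+
+func run(t Task) error { return nil } // placeholder for the real work
+
+// Orchestrate owns the runner pool: it hands out tasks on timer ticks
+// and returns runners to the available list when they report success.
+func Orchestrate(numLocal int, pending <-chan Task, tick <-chan struct{}) {
+    events := make(chan Event)
+    runners := make([]chan Task, numLocal)
+    available := []int{}
+    for i := range runners {
+        runners[i] = make(chan Task)
+        startLocalRunner(i, runners[i], events)
+        available = append(available, i)
+    }
+
+    for {
+        select {
+        case <-tick: // timer-based event: look for work
+            if len(available) == 0 {
+                continue
+            }
+            select {
+            case t := <-pending:
+                id := available[len(available)-1]
+                available = available[:len(available)-1]
+                runners[id] <- t
+            default: // no task ready
+            }
+        case ev := <-events: // runner-based event
+            if ev.Err == nil {
+                available = append(available, ev.Runner)
+            }
+            // on failure, a local runner would be restarted here
+        }
+    }
+}
+\end{verbatim}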
+
+        Local runners run inside goroutines; this allows the runners and the orchestrator to communicate using Go channels, which are the easiest way to communicate between two goroutines.
+        The orchestrator is constantly waiting to receive either a timer-based event or a runner-based event.
+
+        Timer-based events happen when the orchestrator's internal clock informs it that it needs to check whether tasks are available to run.
+        The time at which this clock ticks is configured in the settings of the app.
+        Upon receiving a timer-based event, the orchestrator checks if there is a new task available for it to run and if there are any runners available for the task to run on.
+        If there are tasks available, then the orchestrator instructs a runner to pick up the task and run it.
+
+        Runner-based events happen when a runner finishes running a task or crashes while trying to do it.
+        Upon receiving a runner event, the orchestrator checks if it is a success or a failure message.
+
+        If it is a failure message and the runner is a local runner, then the orchestrator just restarts the runner.
+        Upon restart, it adds the runner to the list of available runners.
+        If the runner is a remote runner, the orchestrator marks the runner as failed and stops sending messages to it until the runner informs the service again that it is available.
+
+        If the message is a success message, then the orchestrator just adds the runner back to the list of available runners, independently of whether the runner is remote or not.
+
+        % TODO talk more about remote runners
+        % TODO talk about how the runner loads images
+
    \subsection{Conclusion}

-        This section discussed the design and implementation specifications for the system.
-
-        While there were some areas where the requirements were not met completely, due to scope problems, the implementation allows for the missing designed sections to be implemented at a later time.
-
-        The implementation follows the requirements with the adjusted scope.
-        The results of the implementation will be tested in a future section.
+        This section went into the details of how the design was implemented.
+        The design was envisioned to be the best possible version of this service, but the scope was restricted to the necessities of the system while it was being developed.
+        Features that would bring the implemented application closer to the ideal design could have been added had there been a greater need for them during the development timeline.
+        This will be discussed further in the critical review section.

    \pagebreak




-    \section{Legal, Societal, and Ethical Considerations} \label{sec:lsec}
-        This section will address possible legal, societal, ethical issues that might arise from the deployment of the software being designed.
+    \section{Legal, Societal, Ethical and Professional Considerations} \label{sec:lsec}
+        This section will address possible legal, societal, ethical and professional issues that might arise from the deployment of the software being designed.

        The Self-Assessment for Governance and Ethics (SAGE) form has been addressed, and it is submitted along with the report.
@@ -887,7 +370,7 @@

        Legal issues can occur due to the data being stored by the service.

-        The service collect, the least amount of sensitive information, from the users who directly use the service.
+        The service collects the least amount of sensitive information possible from the users who directly use the service.
        The data that is collected, while sensitive, is required to authenticate the user: name, email, and password.
        To safeguard that information, the system uses industry standards to guarantee the security of that data.
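+        As a concrete example of such a standard, the implementation hashes passwords with bcrypt before they are stored; below is a minimal sketch using Go's golang.org/x/crypto/bcrypt package (the function names here are my own, not the service's actual ones).
+\begin{verbatim}
+package auth
+
+import "golang.org/x/crypto/bcrypt"
+
+// HashPassword hashes a plaintext password with bcrypt's default cost,
+// so only the hash ever reaches the database.
+func HashPassword(password string) (string, error) {
+    hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
+    return string(hash), err
+}
+
+// CheckPassword compares a stored bcrypt hash against a login attempt.
+func CheckPassword(hash, password string) bool {
+    return bcrypt.CompareHashAndPassword([]byte(hash), []byte(password)) == nil
+}
+\end{verbatim}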
@@ -917,6 +400,25 @@

        For example, if the service gets acquired by a company that also wants to use the data provided to the system for other reasons.

+        \subsection{Professional Issues}
+            As a member of the British Computer Society (BCS), it is important to follow the Code of Conduct. The Code of Conduct contains four key principles.
+
+            \subsubsection*{Public interest}
+                This project tries to consider the public health, privacy, and security of third parties, and therefore follows the principle of public interest.
+
+            \subsubsection*{Professional Competence and Integrity}
+                This project has been an enormous undertaking that pushed the limits of my capabilities.
+                I am glad that I was able to use this opportunity to learn about distributed systems, image classification, Go, and Svelte.
+                During this project, I also followed the best practices of software development, such as using source control software and keeping an audit trail of tasks and issues.
+
+            \subsubsection*{Duty to Relevant Authority}
+                For the duration of the project, all the guidelines provided by the University of Surrey were followed.
+
+            \subsubsection*{Duty to the Profession}
+                During the research, design, implementation, and report stages, all interactions with the supervisor of the project have been professional, respectful, and honest.
+                To the best of my ability, I tried to design a system that would contribute to the field.
+
+
    \pagebreak




@@ -930,7 +432,7 @@

-    \section{Evaluating the Service}
+    \section{Service Evaluation} \label{sec:se}
        This section will discuss how the service can be evaluated from a technical standpoint, and its results.
        Given the goals of the project, there are two kinds of tests that need to be accounted for.
@@ -959,12 +461,20 @@

        The MNIST \cite{mnist} dataset was selected due to its size.
        It is a small dataset on which models can be trained quickly, and it can be used to verify other internal systems of the service.

+        \textbf{TODO add image}
+
+        \textbf{TODO add more datasets}
+

    \subsubsection{Results}
+
+        \textbf{TODO add more data}
+
+
    \begin{longtable}{ | c | c | c | c | c | c |}
        \hline
        Dataset & Import Time & Train Time & Classification Time & Extend Time & Accuracy \\ \hline
-        MNIST & 0ms & 0ms & 0ms & 0ms & $98\%$ \\ \hline
+        MNIST   & 0ms         & 0ms        & 0ms                 & 0ms         & $98\%$   \\ \hline
        \caption{Evaluation Results}
        \label{tab:eval-results}
    \end{longtable}
@@ -988,21 +498,113 @@

    \section{Critical Review of Project Outcomes} \label{sec:crpo}

-        As it was stated during the introduction, this project has multiple objectives.
+        This section will go into detail to see if the project was able to achieve the goals set forth in the introduction.

-        \subsection{Platform where users can manage their models}
+        The section will analyse whether the goals of the project were met; then, shortcomings and improvements of the implementation will be discussed, followed by possible future work that could build on the project.
+        The section will end with a general statement about the state of the project.

-        This goal was achieved there a web-based platform was created to manage and control the models.
+        \subsection{Project Objectives}

-        \subsection{A system to automatically train and create models}
+        In the introduction of this report, some objectives were set for this project.

-        This goal was achieved, there is currently a system to automatically create and train models.
+        By the end of the project, the developed solution was able to achieve the goals that were set forth.

-        The system that trains models needs some improvement, as it still is partially inefficient when managing the system loads while training.
+        \subsubsection*{A system to upload images that will be assigned to a model}

-        \subsection{An API that users can interact programmatically}
+        This goal was achieved.
+        Both the API and the webpage allow users to upload images to the service.
+        This means that a system was created that allows users to upload images that will be linked with a model.

-        This goal was achieved and there is currently a working API that users can use to control the models and do inference tasks.
+        \subsubsection*{A system to automatically train and create models}
+
+        This goal was achieved.
+        The designed server can create models using only the data provided by the user, without any human interaction.
+        The model creation system is not as efficient as it could be; this inefficiency will be discussed in a later subsection, but the system can still achieve the desired goal.
+
+        \subsubsection*{Platform where users can manage their models}
+
+        This goal was achieved.
+        A web-based platform was developed where users can manage all the data related to the machine learning models that were created.
+        The platform that was implemented allows users to create models, upload images related to the model, and then manage the submitted classification tasks.
+
+        The platform allows users to easily manage any models they create within it, meaning that the developed solution can achieve the first goal of the project.
+
+        \subsubsection*{A system to automatically expand models without fully retraining the models}
+
+        This goal was achieved.
+        A system was created that allows users to add more images and classes to models that were previously created.
+        This is done without having to fully retrain the model.
+
+        \subsubsection*{An API that users can interact with programmatically}
+
+        This goal was achieved.
+        The API implementation allows users to programmatically access the system.
+        The efficacy of the API is proven by its use in the front-end application, which uses the API to fully control the service.
+        Everything that can be done in the front end can be done via the API, which means that the API can satisfy every need that a possible user might have; therefore, this goal was accomplished.
+
+        \subsection{Project Shortcomings and Improvements}
+
+        Although the project was able to achieve the desired goals, it has some shortcomings that can be improved upon in future iterations.
+        This section will analyse some of those shortcomings and ways to improve the service.
+
+        \subsubsection*{Model Generation}
+        The model generation system is complex, and due to all the moving parts that make it work, it requires a large amount of work to maintain.
+        It is also very inefficient, due to having to generate custom-tailored Python scripts, which cause the data to be reloaded every time a new round-robin round needs to happen.
+
+        A more efficient approach would be to perform all the training directly in the Go server, as sketched below.
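+        As an illustration only, and deliberately not tied to any specific library, an in-process trainer could keep the dataset resident across round-robin rounds along these lines:
+\begin{verbatim}
+package training
+
+// Batch stands in for a loaded chunk of image tensors and labels.
+type Batch struct{}
+
+// Trainer is one model candidate that can be advanced a few epochs at a time.
+type Trainer interface {
+    TrainEpochs(data []Batch, n int) (accuracy float64, err error)
+    Close() error
+}
+
+// roundRobin advances every candidate n epochs per round until one reaches
+// the target accuracy; the data is loaded exactly once and stays resident.
+// The real system would also drop weak candidates and bound the rounds.
+func roundRobin(candidates []Trainer, data []Batch, target float64, n int) (int, error) {
+    for {
+        for i, c := range candidates {
+            acc, err := c.TrainEpochs(data, n)
+            if err != nil {
+                return -1, err
+            }
+            if acc >= target {
+                return i, nil // promote this candidate to the model
+            }
+        }
+    }
+}
+\end{verbatim}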
+        Running the training directly in Go would allow the service to keep track of memory and GPU usage, move data between the GPU and the CPU effortlessly between runs, and remove uncertainty from the training system.
+
+        The model generation was originally implemented with TensorFlow; this ended up limiting the generation of the models in Go, as the Go bindings for TensorFlow lacked the tools needed to train the model.
+        Using the LibTorch libraries would allow more control over the data, and allow that control to be exercised from Go, which would improve both the control and the speed of the process.
+        Unfortunately, when a version of the service was implemented using LibTorch, the system was too unstable.
+        Problems were encountered with the Go bindings for the LibTorch library, or the LibTorch library itself was causing inconsistent behaviour between runs.
+        That, compounded with time limitations, made it impossible for a LibTorch implementation to come to fruition.
+
+        Having a full Go implementation would make the system more maintainable and faster.
+
+
+        \subsubsection*{Image storage}
+
+        The image storage is all local; this does not currently affect how remote runners work.
+        %TODO improve this
+        It is less problematic when the runner is on the same network as the main server, but if a user were to provide their own runners, a lot of bandwidth would be required to transfer the images over the network every time the model needs to run.
+
+        A better solution for image storage would allow user-provided runners to store images locally.
+        At upload time, the API, instead of storing the images locally, would instruct the user's runner to store them; the runner could then perform any training tasks with local data instead of remote data.
+
+        This would also not require modification of the current system.
+        The system was designed and implemented to be expanded.
+        The dataset system was designed to be able to handle different kinds of storage methods in the future, such as remote storage and object buckets like Amazon S3.
+
+        \subsection{Future Work}
+        This section will consider possible future work that could be built upon this project.
+
+        \subsubsection*{Image Processing Pipelines}
+        The current system does not allow images of different sizes to be uploaded; an interesting project would be to create a new subsystem that allows the user to create image processing pipelines.
+
+        This new system would allow users to create a set of instructions that images would go through before being added to the system; for example, automatically cropping, scaling, or padding the image.
+
+        A system like this would add versatility to the service and remove more work from its users, as they would not have to worry about handling the image processing on their side.
+
+        \subsubsection*{Different Kinds of Models}
+        The runner system could be used to train and manage different kinds of models, not just image classification models.
+
+        If the system was modified to support different kinds of models, it would allow users to run other kinds of models, such as natural language processing models or multi-modal models.
+        This would increase the versatility of the service, and it would allow users to automate more tasks.
+
+
+        \subsection{Conclusion}
+
+        With the recent increase in automation, having a system that allows users to quickly build classification models for their tasks would be incredibly useful.
+        This project provides exactly that: a simple-to-use system that allows the user to create models with ease.
+
+        There are more features that could be added to the service that would improve the quality of the project.
+        Nevertheless, the service is in a state where it would be possible to run it in a production environment, making this project successful.

    \pagebreak
diff --git a/report/sanr.tex b/report/sanr.tex
new file mode 100644
index 0000000..f51b587
--- /dev/null
+++ b/report/sanr.tex
@@ -0,0 +1,156 @@
+\section{Service Analysis and Requirements} \label{sec:sanr}
+    Understanding the project that is being built is a critical step in the software development process.
+    This section will discuss the requirements that a service needs to implement for the project to be considered a success.
+
+    As a software as a service project, there are some required parts that the project needs to have:
+    \begin{itemize}
+        \item A way for the user to interact with the system
+        \item A way for programs to interact with the system
+        \item Management of images
+        \item Management of models
+        \item Management of compute resources
+    \end{itemize}
+
+    \subsection{Service Structure}
+        The service has to be structured so that users can interact with it in two ways.
+
+        The first way is for the user to directly interface with the system using a user interface.
+        This interface does not have any strict form requirements; it could be a desktop application, a web application, or even a command line application.
+        The main objective of this interface is for the user to quickly understand what the system is doing with their data, and whether they can use the model they created to evaluate images.
+
+        The second way for the user to interface with the system needs to be an API.
+        This is required, as it would not make sense for users to be able to quickly generate image classification models if they still had to evaluate all the images manually.
+        Therefore, there needs to be a way for the user's product to connect with the system; the API provides exactly that.
+
+        The system should also be structured in a way that allows easy scalability, so that it can handle many requests at the same time.
+        Scaling could be achieved in many ways.
+        One way is by allowing the service to act as a cluster, where the same application is running multiple times and a load balancer balances the load between the instances.
+        Another way is for the service to behave as a distributed system, where the service is split into smaller modules and those modules can be replicated.
+        Independently of how the system scales, it requires the ability to handle the fact that the data the system uses might not be available everywhere.
+
+        As a machine learning solution, the service requires the necessary computational power to handle the training of the models.
+        This means that the system needs to be structured in a way that decouples the training process from the main process.
+        This guarantees that the compute requirements for training the model do not affect the running of the main server.
+        Ideally, the service should be able to divide its tasks between those that require the GPU and those that require the CPU, as in the sketch below.
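+        A sketch of what such a split could look like, with assumed task types and queue sizes:
+\begin{verbatim}
+package scheduler
+
+// TaskKind separates work by the resource it needs.
+type TaskKind int
+
+const (
+    CPUTask TaskKind = iota // e.g. API-adjacent work such as zip imports
+    GPUTask                 // e.g. training and image evaluation
+)
+
+type Task struct {
+    Kind TaskKind
+    Run  func()
+}
+
+// Dispatcher routes tasks to dedicated queues so GPU work cannot
+// starve the CPU-bound request handling, and vice versa.
+type Dispatcher struct {
+    cpu chan Task
+    gpu chan Task
+}
+
+func NewDispatcher() *Dispatcher {
+    d := &Dispatcher{cpu: make(chan Task, 64), gpu: make(chan Task, 64)}
+    go d.worker(d.cpu) // CPU workers could be scaled to the core count
+    go d.worker(d.gpu) // one worker per available GPU
+    return d
+}
+
+func (d *Dispatcher) worker(queue chan Task) {
+    for t := range queue {
+        t.Run()
+    }
+}
+
+func (d *Dispatcher) Submit(t Task) {
+    if t.Kind == GPUTask {
+        d.gpu <- t
+        return
+    }
+    d.cpu <- t
+}
+\end{verbatim}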
+
+    \subsection{Resources}
+        As a machine learning image classification service, the service has to manage various types of resources.
+
+        \subsubsection{Compute Resources}
+            As mentioned before, the service needs to be able to manage its compute resources.
+            This is required because, for example, if the system started training a new model and that training used all the GPU resources, it would impact the ability of the service to evaluate images for other users.
+            As this example demonstrates, the system needs to keep track of the amount of GPU power available, so it can manage the actions it has to take accordingly.
+
+            Therefore, for optimal functionality, the service requires the management of various compute resources.
+
+            There should be a separation of the different kinds of compute power.
+            The two types of compute power are the CPU and the GPU.
+            The CPU is needed to handle the multiple requests that the API might answer at the same time, and the GPU resources are required to train models and evaluate images.
+
+            As a result, the service needs a system to distribute these compute tasks.
+            The tasks have to be distributed between the application that is running the API and the various other places where that compute can happen.
+
+            An ideal system would distribute the tasks intelligently, to allow the maximization of resources.
+            An example of this would be running image classification, on the same model, in the same place twice; this would allow the model to stay in memory and not need to be reloaded from disk.
+            These kinds of optimizations would help the system to be more efficient and less wasteful.
+
+            Another way to reduce the load that the system goes through is to allow users to add their own compute power to the system.
+            That compute power would only use images and models that are owned by the user.
+            While allowing that compute power to run any image or model in the system would make for an even more scalable system, it would be an incredible violation of privacy and security, as it would allow outsiders access to possibly sensitive information.
+            This makes the idea of a completely distributed network of user-provided compute power not viable.
+
+        \subsubsection{Storage}
+            Another resource that the service has to handle is storage.
+            As the service accepts user-uploaded images, the service has to monitor how much storage those images take.
+            The service will need systems to handle the case where the user-uploaded images take too much space.
+            There are many ways of handling this, such as allowing the user to store their own images, compacting the images, deleting images that the system might no longer need, or using dynamic storage services such as object buckets.
+
+            If there is not enough space to store all the images from all the models and the service needs to delete images, there should be a system that removes the images in the manner that causes the least harm.
+            An example of this would be deleting images in a way that keeps the dataset balanced; a sketch of such a policy is given at the end of this subsection.
+
+            If there is a discrepancy between where compute and storage happen, the system needs to be able to handle that.
+            This can be accomplished with various methods.
+            The most aggressive one is to not allow compute resources to access data that is far away; the less aggressive and smarter way is to allow the system to move data to the optimal place.
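+            As an illustration, the balanced-deletion policy mentioned above could always remove images from the class that currently has the most; a sketch with assumed types:
+\begin{verbatim}
+package storage
+
+// Image is an assumed type standing in for a stored dataset image.
+type Image struct {
+    ID    string
+    Class string
+}
+
+// pickDeletionCandidates suggests images to delete while keeping the
+// dataset balanced: it always removes from the largest remaining class.
+func pickDeletionCandidates(images []Image, toDelete int) []Image {
+    byClass := map[string][]Image{}
+    for _, img := range images {
+        byClass[img.Class] = append(byClass[img.Class], img)
+    }
+
+    var out []Image
+    for len(out) < toDelete {
+        // Find the class with the most remaining images.
+        biggest := ""
+        for class, imgs := range byClass {
+            if biggest == "" || len(imgs) > len(byClass[biggest]) {
+                biggest = class
+            }
+        }
+        if biggest == "" || len(byClass[biggest]) == 0 {
+            break // nothing left to delete
+        }
+        imgs := byClass[biggest]
+        out = append(out, imgs[len(imgs)-1])
+        byClass[biggest] = imgs[:len(imgs)-1]
+    }
+    return out
+}
+\end{verbatim}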
+
+    \subsection{User interface}
+        A service such as this requires a way for the users to quickly get an understanding of how their data is being used and how they can perform the actions they want.
+
+        As previously mentioned, this application can take multiple forms, from web apps to command line applications.
+        As long as the application is easy to use, it only needs to allow the user to perform the required tasks:
+        \begin{itemize}
+            \item{Configure model}
+            \item{Upload images}
+            \item{Manage images}
+            \item{Request model training}
+            \item{Request image evaluation}
+            \item{Configure access}
+            % \item{See API usage}
+            %TODO write more
+        \end{itemize}
+
+        The application should communicate with the service via the API.
+        If there was a requirement to physically access the computer that the service is running on, it would defeat the purpose of this project; therefore, being able to control the service via the API is the most reasonable approach.
+        A second system could be developed that allows the application to control the service, but that would be terribly inefficient.
+        Allowing the application to control the system via the API also improves the API, as the API gains more features.
+
+        The application should also allow administrators of the service to control the resources that are available to the system, to see if there is any requirement to add more resources.
+
+    \subsection{API} \label{sec:anal-api}
+        As a software as a service platform, most of the requests made to the service would be made via the API, not the user interface.
+        This is the case because the users who would need this service would set up the model using the web interface and then make the image classification requests via the API.
+
+        While there are no hard requirements for the user interface, that is not the case for the API.
+        The API must be implemented as an HTTPS REST API, because most of the APIs that currently exist online are HTTPS REST APIs \cite{json-api-usage-stats}.
+        If the service wants to be easy to use, it needs to be implemented in a way that gives it the lowest barrier to entry.
+        Making the type of the API a requirement guarantees that the application will be as compatible as possible with other systems that already exist.
+
+        The API needs to be able to do all the tasks that the application can do.
+
+        The API also requires authentication.
+        This is needed to prevent users from:
+        \begin{itemize}
+            \item{Modifying system settings}
+            \item{Accessing other users' data}
+        \end{itemize}
+        The API must implement authentication methods to prevent those kinds of actions from happening.
+
+    \subsection{Data Management}
+        The service will store a large amount of user data.
+        This includes user information, user images, and user models.
+
+        \subsubsection*{User Information}
+            There are no hard requirements on how the user information needs to be stored, as long as it is done securely.
+            User information includes personally identifiable information, such as username and email, and secret information, such as passwords and access tokens.
+
+            Future versions of the service could possibly also store more sensitive information about the user, such as payment information and addresses.
+            Such information is required if the user needs to be charged, but payment for the services provided is outside the scope of this project.
+
+        \subsubsection*{User Images}
+
+            Images are another kind of information that has to be stored.
+            As mentioned before, the system has to keep track of the images and the space they use.
+            The system should also guarantee that there is some level of security in accessing the images that were uploaded to the service.
+
+        \subsubsection*{Models}
+            The last kind of data that the service has to keep track of is model data.
+            Once a model is trained, it has to be saved on disk.
+            The service should implement a system that manages where the models are stored.
+            This is similar to the image situation: the model should be as close as possible to the compute resource that is going to utilize it, even if this requires copying the model.
+
+    \subsection{Conclusion}
+        This section shows that there are requirements that need to be met for the system to work as intended. These requirements range from usability requirements and implementation details to system-level resource management requirements.
+
+        The most important requirement is for the system to be easy to use, as if it is difficult to use, then the service already fails in one of its objectives.
+
+        The other requirements are significant as well, as without them, the quality of the service would be heavily degraded.
+        Even if the service were effortless to use, it would be just as bad as a difficult one if it could not process the images in a reasonable amount of time.
+
+        The next section will describe a design that matches a subset of these requirements.
+\pagebreak
diff --git a/report/settings.tex b/report/settings.tex
new file mode 100644
index 0000000..f648325
--- /dev/null
+++ b/report/settings.tex
@@ -0,0 +1,70 @@
+\usepackage[english]{babel} % English language/hyphenation
+\usepackage{url}
+\usepackage{tabularx}
+\usepackage{pdfpages}
+\usepackage{float}
+\usepackage{longtable}
+\usepackage{multicol}
+
+\usepackage{graphicx}
+\usepackage{svg}
+\graphicspath{ {../images for report/} }
+\usepackage[margin=2cm]{geometry}
+
+\usepackage{datetime}
+\newdateformat{monthyeardate}{\monthname[\THEMONTH], \THEYEAR}
+
+\usepackage{hyperref}
+\hypersetup{
+    colorlinks,
+    citecolor=black,
+    filecolor=black,
+    linkcolor=black,
+    urlcolor=black
+}
+
+\usepackage{cleveref}
+
+%%% Custom headers/footers (fancyhdr package)
+\usepackage{fancyhdr}
+\pagestyle{fancyplain}
+
+% \fancyhead{}
+
+\fancypagestyle{my_empty}{%
+    \fancyhf{}
+    \renewcommand{\headrulewidth}{0pt}
+    \renewcommand{\footrulewidth}{0pt}
+}
+
+\fancypagestyle{simple}{%
+    \fancyhf{}
+    \renewcommand{\headrulewidth}{0pt}
+    \renewcommand{\footrulewidth}{0pt}
+    \fancyfoot[L]{} % Empty
+    \fancyfoot[C]{\thepage} % Page numbering
+    \fancyfoot[R]{} % Empty
+}
+
+\fancypagestyle{full}{%
+    \fancyhf{}
+    \renewcommand{\headrulewidth}{0.5pt}
+    \renewcommand{\footrulewidth}{0pt}
+    \fancyfoot[L]{} % Empty
+    \fancyfoot[C]{\thepage} % Page numbering
+    \fancyfoot[R]{} % Empty
+
+    \fancyhead[RO,LE]{Andre Henriques}
+}
+
+\renewcommand{\headrulewidth}{0pt} % Remove header underlines
+\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
+\setlength{\headheight}{13.6pt}
+
+\newcommand*\NewPage{\newpage\null\thispagestyle{empty}\newpage}
+
+% numeric
+\usepackage[bibstyle=ieee, citestyle=numeric, sorting=none,backend=biber]{biblatex}
+\addbibresource{../main.bib}
+
+\raggedbottom
diff --git a/report/start.tex b/report/start.tex
new file mode 100644
index 0000000..033854b
--- /dev/null
+++ b/report/start.tex
@@ -0,0 +1,60 @@
+\pagenumbering{gobble}
+
+\maketitle
+\pagestyle{my_empty}
+
+\begin{center}
+    \includegraphics[height=0.5\textheight]{uni_surrey}
+\end{center}
+
+\begin{center}
+    \monthyeardate\today
+\end{center}
+
+\NewPage
+\pagenumbering{arabic}
+
+\pagestyle{simple}
+
+\begin{center}
+    \vspace*{\fill}
+    \section*{Declaration of Originality}
+    I confirm that the submitted work is my own work and that I have clearly identified and fully
+    acknowledged all material that is entitled to be attributed to others (whether published or
+    unpublished) using the referencing system set out in the programme handbook. I agree that the
+    University may submit my work to means of checking this, such as the plagiarism detection service
+    Turnitin® UK. I confirm that I understand that assessed work that has been shown to have been
+    plagiarised will be penalised.
+    \vspace*{\fill}
+\end{center}
+\NewPage
+
+\begin{center}
+    \vspace*{\fill}
+    \section*{Acknowledgements}
+    I would like to take this opportunity to thank my supervisor, Rizwan Asghar, who helped me with this project from the start until the end.
+    His help with the report was incredibly useful.
+
+    I would like to thank my family and friends for their support and encouragement from the beginning.
+    \vspace*{\fill}
+\end{center}
+\NewPage
+
+\begin{center}
+    \vspace*{\fill}
+    \section*{Abstract}
+    Currently, a lot of man-hours are spent performing tasks that could be done by automated systems.
+    If a user without any knowledge of image classification could create an image classification model with ease, it would allow those man-hours to be used for more productive work.
+
+    This project aims to develop a classification platform where users can create image classification models with as few clicks as possible.
+    The project will create multiple systems that allow model creation, model training, and model inference.
+
+    This report will guide the reader through the ideas and designs that were implemented.
+    \vspace*{\fill}
+\end{center}
+\NewPage
+
+\tableofcontents
+\newpage
+
+\pagestyle{full}