wrote more on efficientnet
All checks were successful
continuous-integration/drone/push Build is passing
This commit is contained in:
parent e0d93ea366
commit 16aafe65b8
main.bib: 11 lines changed
@@ -259,3 +259,14 @@ year = 1998
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

@INPROCEEDINGS{inverted-bottleneck-mobilenet,
  author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  year={2018},
  volume={},
  number={},
  pages={4510-4520},
  keywords={Manifolds;Neural networks;Computer architecture;Standards;Computational modeling;Task analysis},
  doi={10.1109/CVPR.2018.00474}
}
@@ -115,6 +115,8 @@

As the service might need to handle a large number of requests, the models it serves should be as cheap to run as possible. Smaller models are easier to run, but they also tend to be less accurate, so the system requires a balance between model size and accuracy.

% TODO talk about storage

\subsection{Method of Image Classification Models}

There are multiple ways of achieving image classification. The system is required to return the class that an image belongs to, which means that supervised classification methods will be used, as these are the ones that meet this requirement.
@@ -147,8 +149,11 @@

% EfficientNet
EfficientNet \cite{efficient-net} is a deep convolutional neural network that was able to achieve $84.3\%$ top-1 accuracy while being ``$8.4$x smaller and $6.1$x faster on inference than the best existing ConvNet''. EfficientNets\footnote{The family of models that use the techniques described in \cite{efficient-net}.} are models where, instead of increasing only the depth or only the width of the network, depth, width and input resolution are all scaled at the same time by constant factors. By not scaling only the depth, EfficientNets are able to acquire more information from the images, especially since the image resolution is also taken into account.
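In the notation of \cite{efficient-net}, this compound scaling can be summarised roughly as
\[
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\]
subject to $\alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2$ and $\alpha, \beta, \gamma \ge 1$, where $d$, $w$ and $r$ are the depth, width and resolution multipliers, $\phi$ is the compound coefficient chosen according to the available resources, and $\alpha$, $\beta$ and $\gamma$ are constants found by a small grid search.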
To test their results, the EfficientNet team created a baseline model that used the mobile inverted bottleneck, MBConv \cite{inverted-bottleneck-mobilenet}, as its building block. The baseline model was then scaled using the compound method, which resulted in better top-1 and top-5 accuracy.
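For illustration, the inverted bottleneck idea (a narrow input expanded, filtered depthwise, then linearly projected back down) can be sketched as below. This is a minimal PyTorch sketch of an MBConv-style block, not the exact implementation used by MobileNetV2 or EfficientNet (EfficientNet's version, for example, also adds squeeze-and-excitation).
\begin{verbatim}
import torch
from torch import nn


class MBConv(nn.Module):
    """Illustrative inverted-bottleneck (MBConv-style) block."""

    def __init__(self, in_ch, out_ch, expansion=6, stride=1):
        super().__init__()
        mid_ch = in_ch * expansion
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion: widen the channel dimension
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution: cheap per-channel spatial filtering
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1,
                      groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back down, no activation ("linear bottleneck")
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        # Residual connection only when input and output shapes match
        return x + out if self.use_residual else out


block = MBConv(in_ch=32, out_ch=32)
y = block(torch.randn(1, 32, 56, 56))  # shape preserved: (1, 32, 56, 56)
\end{verbatim}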
While EfficientNets are smaller than their non-EfficientNet counterparts, they are more computationally intensive: a ResNet-50 scaled using the EfficientNet compound scaling method is $3\%$ more computationally intensive than a ResNet-50 scaled using only depth, while only improving the top-1 accuracy by $0.7\%$. As the models will be trained and run multiple times, decreasing the computational cost might be a better overall target for sustainability than being able to offer higher accuracies.
Even though scaling an arbitrary model with the EfficientNet compound method might not yield the best results, using one of the EfficientNets that were already optimized by the team would be a good choice; for example, EfficientNet-B1 is both small and efficient while still obtaining $79.1\%$ top-1 accuracy on ImageNet, and realistically the datasets that this system will process will be smaller and more scope-specific than ImageNet.
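As a sketch of how this could be used in practice, and assuming a PyTorch/torchvision based pipeline (the weight-enum and attribute names below are torchvision's and may differ between versions), a pretrained EfficientNet-B1 could be adapted to one of these smaller datasets by replacing its classification head:
\begin{verbatim}
from torch import nn
from torchvision import models

# Hypothetical number of classes in the smaller, scope-specific dataset.
num_classes = 10

# Start from an ImageNet-pretrained EfficientNet-B1.
weights = models.EfficientNet_B1_Weights.IMAGENET1K_V1
model = models.efficientnet_b1(weights=weights)

# Replace the final linear layer of the classification head.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, num_classes)

# Optionally freeze the pretrained feature extractor and train only the new head.
for param in model.features.parameters():
    param.requires_grad = False
\end{verbatim}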
|
||||
|
||||
% \subsection{Efficiency of transfer learning}