# Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]

### [Nofix]

- `ctype long` caused a compile error on macOS as noted in #44. Not working on a Linux box.
## [0.6.0]

- Added subpackage `pickle`. A pretrained Python PyTorch model can now be loaded directly, without any Python script conversion.
- Added `gotch.CachePath()` and `gotch.ModelUrls`.
- Removed Travis CI for now.
- Fixed `tensor.OfSlice()` throwing an error due to "Unsupported Go type" (e.g. `[]float32`).
- Added `nn.Path.Paths()` method.
- Added `nn.VarStore.Summary()` method.
- Fixed incorrect tensor method name: `ts.Meshgrid` -> `Meshgrid`.
- Added new API `ConstantPadNdWithVal` (`ato_constant_pad_nd` with a padding value).
- Fixed "nn/rnn NewLSTM() clashed weight names".
- Fixed some old APIs at `vision/aug/function.go`.
- Fixed `tensor.OfSlice()` not supporting the `[]int` data type.
- Fixed `tensor.ValueGo()` returning `[]int` instead of `[]int32`.
- Added more building-block modules: Dropout, MaxPool2D, Parameter, Identity.
- Added `nn.BatchNorm.Forward()` with default `training=true`.
- Exposed `tensor.Ctensor()`.
- Added API `tensor.FromCtensor()`.
- [#67]: Fixed incorrect type casting at `atc_cuda_count`.
## [0.5.0]

- Upgraded to libtorch 1.10.
- #58: Fixed incorrect conversion of IValue from CIValue, case 1 (Tensor).
## [0.4.5]

- Added `Conv3DConfig` and `Conv3DConfig` options.
- Added missing Tensor method APIs that return multiple tensors (e.g. `tensor.Svd`).
## [0.4.4]

- Dropped libtch `dummy_cuda_dependency()` and `fake_cuda_dependency()`, as libtorch ldd linking is okay now.
## [0.4.3]

- Exported nn/scheduler `DefaultSchedulerOptions()`.

## [0.4.2]

- Added nn/scheduler `NewLRScheduler()`.
- Added nn/conv config options.

## [0.4.1]

- Fixed CUDA error `undefined reference to 'at::cuda::warp_size()'`.
## [0.4.0]

- Updated libtorch to 1.9. Generated 1716 APIs. There are API naming changes, e.g. `Name1` changed to `NameDim` or `NameTensor`.
## [0.3.14]

- Fixed (temporary fix) the huge number of learning-rate groups returned from C at `libtch/tensor.go AtoGetLearningRates`.
- Fixed incorrect `nn.AdamWConfig` and some documentation.
- Reworked `vision.ResNet` and `vision.DenseNet` to fix incorrect layers and a memory leak.
- Changed `dutil.DataLoader.Reset()` to reshuffle when resetting the DataLoader if the flag is true.
- Changed `dutil.DataLoader.Next()`: deleted the batch size == 1 special case so that items are always returned in a slice (`[]element dtype`) for consistency, even with batch size = 1.
- Added `nn.CrossEntropyLoss` and `nn.BCELoss`.
- Fixed `tensor.ForwardIs` to return `Tuple` and `TensorList` instead of always returning `TensorList`.
- Changed exported augment options and made `ColorJitter` forward output dtype `uint8` for chaining with other augment options.
- #45: Fixed `init/RandInt` incorrect initialization.
- #48: Fixed `init/RandInit` when initialized with mean = 0.0.
## [0.3.13]

- Fixed multiple memory leaks at `vision/image.go`.
- Fixed a memory leak at `dutil/dataloader.go`.
- Fixed multiple memory leaks at `efficientnet.go`.
- Added `dataloader.Len()` method.
- Fixed deleting the input tensor inside functions at `tensor/other.go` (`tensor.CrossEntropyForLogits` and `tensor.AccuracyForLogits`).
- Added a warning to `varstore.LoadPartial` on mismatched tensor shapes between source and varstore.
- Fixed an incorrect mismatched-tensor-shape message at `nn.Varstore.Load`.
- Fixed incorrect y -> x at `vision/aug/affine.go` `getParam` func.
- Fixed a double-free tensor at `vision/aug/function.go` `Equalize` func.
- Changed `vision/aug`: all input images should be `uint8` (Byte) dtype, and the transformed output has the same dtype (`uint8`), so that `Compose()` can compose any transformer options.
- Fixed a wrong result of `aug.RandomAdjustSharpness`.
- Fixed a memory leak at `aug/function.getAffineGrid`.
- Changed `vision/aug` and corrected `ColorJitter`.
- Changed `vision/aug` and corrected `Resize`.
- Changed `dutil/sampler` to accept batch sizes from 1.
- Fixed a double free in `vision/image.go/resizePreserveAspectRatio`.
## [0.3.12]

Skip this tag.

## [0.3.11]

Same as [0.3.10].

## [0.3.10]

- Updated installation instructions in README.md.
- [#38]: Fixed JIT model.
- Added optimizer learning-rate schedulers.
- Added AdamW optimizer.
## [0.3.9]

- #24, #26: Fixed memory leaks.
- #30: Fixed `varstore.Save()` randomly panicking with a segfault.
- #32: Fixed `nn.Seq` Forward returning a nil tensor if the number of layers = 1.
- [#36]: Resolved image augmentation.
## [0.3.8]

### Fixed

- #20: Fixed `IValue.Value()` method returning `[]interface{}` instead of `[]Tensor`.
## [0.3.7]

### Added

- Added trainable JIT Module APIs and example/jit-train. Now a Python PyTorch model (`.pt`) can be loaded and then continue training/fine-tuning in Go.
## [0.3.6]

### Added

- Added `dutil` sub-package that serves the PyTorch `DataSet` and `DataLoader` concepts.
## [0.3.5]

### Added

- Added function `gotch.CudaIfAvailable()`. NOTE: `device := gotch.NewCuda().CudaIfAvailable()` will throw an error if CUDA is not available.

### Changed

- Switched back to installing libtorch inside the gotch library, as the Go `init()` function is triggered after cgo is called.
## [0.3.4]

### Added

- #4: Automatically download and install libtorch and set up environment variables.
## [0.3.2]

### Added

- #6: Go-native tensor printing using the `fmt.Formatter` interface. Now a tensor can be printed like `fmt.Printf("%.3f", tensor)` (for float types).
## [0.3.3]

### Fixed

- nn/sequential: fixed a missing case for number of layers = 1 that caused a panic.
- nn/varstore: fixed a nil pointer at `LoadPartial` due to not breaking out of a loop.
## [0.3.1]

### Changed

- Changed to use `map[string]*Tensor` at `nn/varstore.go`.
- Changed to use a `*Path` argument for the `NewLayerNorm` method at `nn/layer-norm.go`.
- Lots of clean-up of return variables, i.e. `retVal`, `err`.
## [0.3.0]

### Changed

- Updated to PyTorch C++ APIs v1.7.0.
- Switched back to `lib.AtoAddParametersOld`, as `ato_add_parameters` has not been implemented correctly. Using the updated API causes the optimizer to stop working.
## [0.2.0]

### Changed

- Converted all APIs to use pointer receivers.

### Added

- Added drawing of image labels in the `example/yolo` example.
- Added some example images and README files for `example/yolo` and `example/neural-style-transfer`.
## [0.1.10]

### Added

- Added `tensor.SaveMultiNew`.
## [0.1.9]

### Changed

- Reverted the changes from #10 back to the original.
## [0.1.8]

### Changed

- #10: `ts.Drop()` and `ts.MustDrop()` can now be called multiple times without panicking.
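A common way to make a release call like `Drop()` safe to repeat is to record whether the underlying resource has already been freed. The sketch below shows that pattern in plain Go; `tensorHandle` is hypothetical and not libtch's actual implementation:

```go
package main

import "fmt"

// tensorHandle sketches the idempotent-free pattern: Drop records
// that the underlying buffer is already released, so calling it
// again is a no-op instead of a double free. Hypothetical sketch.
type tensorHandle struct{ freed bool }

// Drop releases the underlying resource at most once.
func (t *tensorHandle) Drop() {
	if t.freed {
		return // already released; safe to call again
	}
	// ...release the C-side tensor here...
	t.freed = true
}

func main() {
	t := &tensorHandle{}
	t.Drop()
	t.Drop() // no panic on the second call
	fmt.Println(t.freed) // prints true
}
```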