Added
New analog devices:
A new abstract device (MixedPrecisionCompound) implementing an SGD
optimizer that computes the rank update in digital (assuming
high-precision digital storage) and then transfers the matrix
sequentially to the analog device, instead of using the default fully
parallel pulsed update; see the sketch after this list. (#159)
A new device model class PowStepDevice that implements a power-exponent
type of non-linearity based on the Fusi & Abbott synapse model. (#192)
New parameterization of the SoftBoundsDevice, called SoftBoundsPmaxDevice. (#191)
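As a usage illustration, here is a minimal sketch of selecting the new devices through a tile configuration; the layer sizes are arbitrary, and the import layout is assumed to match this release rather than taken from these notes:

```python
from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import DigitalRankUpdateRPUConfig, SingleRPUConfig
from aihwkit.simulator.configs.devices import (
    MixedPrecisionCompound,
    PowStepDevice,
    SoftBoundsDevice,
)

# Single-device tile using the new power-exponent device model.
layer = AnalogLinear(256, 128, rpu_config=SingleRPUConfig(device=PowStepDevice()))

# Mixed-precision compound: the rank update is computed in digital and
# then transferred sequentially to the analog device.
rpu_config = DigitalRankUpdateRPUConfig(
    device=MixedPrecisionCompound(device=SoftBoundsDevice()),
)
layer_mp = AnalogLinear(256, 128, rpu_config=rpu_config)
```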
Analog devices and tiles improvements:
Option to choose deterministic pulse trains for the rank-1 update of
analog devices during training; see the configuration sketch after this
list. (#99)
More noise types for hardware-aware training for inference
(polynomial). (#99)
Additional bound management schemes (worst case, average max, shift).
(#99)
Cycle-to-cycle output referred analog multiply-and-accumulate weight
noise that resembles the conductance dependent PCM read noise
statistics. (#99)
C++ backend improvements (slice backward/forward/update, direct
update). (#99)
Option to exclude the bias row from hardware-aware training noise. (#99)
Option to automatically scale the digital weights into the full range of
the simulated crossbar by applying a fixed global output factor in
digital. (#129)
Optional power-law drift during analog training. (#158)
Cleaner setting of dw_min using device granularity. (#200)
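A hedged sketch of enabling two of these options through the tile configuration follows; the enum members used here (DETERMINISTIC_IMPLICIT, PCM_READ) correspond to the features described above, but their names and the noise magnitude are assumptions to be checked against the installed version:

```python
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice
from aihwkit.simulator.configs.utils import PulseType, WeightNoiseType

rpu_config = SingleRPUConfig(device=ConstantStepDevice())

# Deterministic pulse trains for the rank-1 update during training.
rpu_config.update.pulse_type = PulseType.DETERMINISTIC_IMPLICIT

# Output-referred analog MAC weight noise resembling PCM read noise.
rpu_config.forward.w_noise_type = WeightNoiseType.PCM_READ
rpu_config.forward.w_noise = 0.02  # illustrative noise magnitude
```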
PyTorch interface improvements:
Two new convolution layers have been added: AnalogConv1d and AnalogConv3d, mimicking their digital counterparts; see the sketch after this list. (#102, #103)
The .to() method can now be used on AnalogSequential, along with the .cpu() method on analog layers (although moving from GPU back to CPU is still not possible). (#142, #149)
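For instance, a minimal sketch of the new layers together with device movement (channel counts and shapes are illustrative):

```python
import torch
from aihwkit.nn import AnalogConv1d, AnalogSequential

# AnalogConv1d mirrors torch.nn.Conv1d (AnalogConv3d mirrors Conv3d).
model = AnalogSequential(
    AnalogConv1d(in_channels=2, out_channels=4, kernel_size=3, padding=1),
)

# .to() now works on AnalogSequential (CPU to GPU; the reverse direction
# is not yet supported).
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

x = torch.rand(8, 2, 16).to(device)
y = model(x)
```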
New modules added:
A library of device presets that are calibrated to real hardware data,
namely ReRamESPresetDevice, ReRamSBPresetDevice, ECRamPresetDevice, CapacitorPresetDevice, and device presets that are based on models in the
literature, e.g. GokmenVlasovPresetDevice and IdealizedPresetDevice.
They can be used by setting the device field in the RPUConfig. (#144)
A library of config presets, such as ReRamESPreset, Capacitor2Preset, TikiTakaReRamESPreset, and many more. These can be used for tile
configuration (rpu_config) and specify a particular device and optimizer
choice; see the sketch after this item. (#144)
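A sketch of both preset flavors, using names from the lists above and assuming both are exported from aihwkit.simulator.presets:

```python
from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.presets import GokmenVlasovPresetDevice, TikiTakaReRamESPreset

# A full tile config preset: device model plus analog optimizer choice.
layer = AnalogLinear(256, 128, rpu_config=TikiTakaReRamESPreset())

# A device preset only, plugged into the device field of an RPUConfig.
rpu_config = SingleRPUConfig(device=GokmenVlasovPresetDevice())
layer2 = AnalogLinear(256, 128, rpu_config=rpu_config)
```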
Utilities for visualizing the pulse response properties of a given
device configuration; see the sketch below. (#146)
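A hedged sketch of these utilities; plot_device is assumed to live in aihwkit.utils.visualization and to accept a device configuration, but the exact call is not spelled out in these notes:

```python
import matplotlib.pyplot as plt
from aihwkit.simulator.configs.devices import SoftBoundsDevice
from aihwkit.utils.visualization import plot_device

# Plot the pulse response (up/down conductance steps) of a device model.
plot_device(SoftBoundsDevice())
plt.show()
```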
A new aihwkit.experiments module has been added that allows creating and
running specific high-level use cases (for example, neural network training)
conveniently. (#171, #172)
A CloudRunner class has been added that allows executing experiments in
the cloud; see the sketch below. (#184)
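A hedged end-to-end sketch of both additions; the BasicTraining fields, the model architecture, and the CloudRunner arguments are assumptions based on these notes rather than a verified API:

```python
from torch.nn import LogSoftmax, Sigmoid
from torchvision.datasets import FashionMNIST

from aihwkit.nn import AnalogLinear, AnalogSequential
from aihwkit.experiments import BasicTraining
from aihwkit.experiments.runners import LocalRunner

# Describe a high-level training use case (network, dataset, schedule).
experiment = BasicTraining(
    dataset=FashionMNIST,
    model=AnalogSequential(
        AnalogLinear(784, 256), Sigmoid(),
        AnalogLinear(256, 10), LogSoftmax(dim=1),
    ),
    epochs=3,
    batch_size=64,
)

# Run locally...
result = LocalRunner().run(experiment)

# ...or in the cloud (assumed to require an API token):
# from aihwkit.experiments.runners import CloudRunner
# jobs = CloudRunner(api_token='...').run(experiment)
```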
Changed
The minimum required PyTorch version has been bumped to 1.7+. Please
recompile the library and update the dependencies accordingly. (#176)
The default behavior of TransferCompound for transfer_every=0 has
changed. (#174)
Fixed
Fixed an issue with the number of loop estimations for realistic
reads. (#192)
Fixed small issues that resulted in warnings during Windows
compilation. (#99)
Removed a faulty backward noise management error message for perfect
backward passes with CUDA. (#99)
Fixed a segfault when using diffusion or reset with vector unit cells on
CUDA. (#129)
Fixed a random-state mismatch in IoManager that could cause crashes on
CUDA when the network size and batch size were the same, in particular
for TransferCompound. (#132)
Fixed a wrong update for TransferCompound when transfer_every is smaller
than the batch size. (#132, #174)
Fixed a floating-point exception caused by the period in the modulus of
TransferCompound becoming zero. (#174)
Use ceil instead of round for very small transfers in TransferCompound,
to avoid zero transfers in extreme settings. (#174)
Removed
The legacy NumpyAnalogTile and NumpyFloatingPointTile tiles have finally
been removed. The regular, tensor-powered aihwkit.simulator.tiles tiles
contain all of their functionality and numerous additions. (#122)