Building NUT for in-place upgrades or non-disruptive tests
NOTE: Since PR https://github.com/networkupstools/nut/pull/1845, a variant of this article is tracked in the https://github.com/networkupstools/nut/blob/master/INSTALL.nut document. That copy is expected to reflect the procedures for the accompanying NUT source code revision more closely than this version-agnostic Wiki article.
Since late 2022/early 2023, the NUT codebase supports "in-place" builds which try their best to discover the configuration of an earlier build (configuration and run-time paths, and the OS accounts involved; possibly even the exact configuration, if one is stored in the deployed binaries).
This optional mode is primarily intended for several use-cases:
- Test recent GitHub "master" branch or proposed PR to see if it solves a practical problem for a particular user;
- Replace an existing deployment, e.g. if OS-provided packages deliver obsolete code, to use newer NUT locally in "production mode". (In such cases, ideally get your distribution, NAS vendor, etc. to provide current NUT, and benefit from a better integrated and tested product.)
Note that "just testing" often involves building the codebase and new drivers or tools in question, and running them right from the build workspace (without installing into the system and so risking an unpredictable-stability state). In case of testing new drivers, note you would need to stop the normally running instances to free up the communications resources (USB/serial ports, etc.), run the new driver in data-dump mode, and restart the normal systems operations. Such tests still benefit from matching the build configuration to what is already deployed, in order to request same configuration files and system access permissions (e.g. to device nodes for physical-media ports involved, and to read the production configuration files).
The https://github.com/networkupstools/nut/blob/master/docs/config-prereqs.txt document (also available as a rendered page on the NUT website) details the tools and dependencies that were added on NUT CI build environments, which now cover many operating systems. This should provide a decent starting point for the build on yours (PRs to update the document are welcome!).
Note that unlike distribution tarballs, Git sources do not include a `configure` script and some other generated files -- these should be created by running `autogen.sh` (or `ci_build.sh` which calls it).
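A minimal sketch of those first steps after a fresh checkout (both appear again in the fuller examples later in this article; `./configure --help` lists the many available build options):

```
:; ./autogen.sh
:; ./configure --help | less
```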
NOTE: Builds on MacOS have been explored with Homebrew so far. For `ci_build.sh` to use it properly, please export a `HOMEBREW_PREFIX` environment variable to let the script fit that build system. For example, put this into your shell profile:

```
eval "$(brew shellenv)"
```
For platforms already supported by NUT CI builds, it can be helpful to read the corresponding recipes in the NUT source revision you are trying to build, to see how those workers are configured from scratch to provide everything their job requires.
To build the current tip of development iterations (usually after PR merges that passed CI, reviews and/or other tests), just clone the NUT repository -- the "master" branch should get checked out by default (you can also request it explicitly, per the examples below).
If you want to quickly test a particular pull request, see the link on top of the PR page that says "...wants to merge ... from ..." and copy the proposed-source URL of that "from" part. For example, it may say "jimklimov:issue-1234" and link to "https://github.com/jimklimov/nut/tree/issue-1234". For git-cloning, just paste that URL into the shell and replace the `/tree/` path component with the `-b` CLI option, like this:
```
:; cd /tmp
### Checkout https://github.com/jimklimov/nut/tree/issue-1234
:; git clone https://github.com/jimklimov/nut -b issue-1234
```
NOTE: the "in-place" build example below uses the `ci_build.sh` script to arrange some rituals and settings, in this case primarily to default the choice of drivers to auto-detection of what can be built, and to skip building documentation. Also note that this script supports many other scenarios for CI and developers, managed by the `BUILD_TYPE` and other environment variables, which are not explored here.
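As one illustration, NUT's own CI recipes invoke the script along these lines (the `default-all-errors` value is taken from those recipes; check the script revision you have checked out for the full list of supported values):

```
:; BUILD_TYPE=default-all-errors ./ci_build.sh
```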
Keep in mind that if your system already has NUT installed and running, you would have to stop its original drivers first. Check the nut-driver-enumerator page if your installed NUT is v2.8.0 or newer on Linux or Solaris/illumos -- it may have automatically managed `nut-driver` service instances.
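On systemd-based Linux distributions, a quick way to see and stop such instances might look like this (the `myups` instance name is a hypothetical `ups.conf` section name used for illustration):

```
:; systemctl list-units 'nut-driver@*'
:; sudo systemctl stop nut-driver@myups.service
```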
An "in-place" testing build would probably go along the lines of:
```
:; cd /tmp
:; git clone -b master https://github.com/networkupstools/nut
:; cd nut
:; ./ci_build.sh inplace

### Temporarily stop your original drivers
:; ./drivers/nutdrv_qx -a DEVNAME_FROM_UPS_CONF -d1 -DDDDDD # -x override...=... -x subdriver=...
### Can start back your original drivers
### Analyze and/or post back the data-dump
```
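To make the dump easier to analyze or share, you can separate it from the debug log -- the data dump typically goes to stdout while debug messages go to stderr. A sketch, with hypothetical file names:

```
:; ./drivers/nutdrv_qx -a DEVNAME_FROM_UPS_CONF -d1 -DDDDDD \
    > /tmp/nut-new.dump 2> /tmp/nut-new.log
### Optionally compare against a dump collected the same way
### from your originally installed driver binary:
:; diff -u /tmp/nut-old.dump /tmp/nut-new.dump
```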
NOTE: to probe a device for which you do not have an `ups.conf` section yet, you must specify `-s name` and all config options (including `port`) on the command line with `-x` arguments, e.g.:
```
:; ./drivers/nutdrv_qx -s tempups \
    -d1 -DDDDDD -x port=auto \
    -x vendorid=... -x productid=... \
    -x subdriver=...
```
While `ci_build.sh inplace` can be a viable option for preparing local builds, you may want more precise control over the `configure` options (e.g. the choice of required drivers, or enabled documentation). A sound starting point would be to track down the packaging recipes used by your distribution (e.g. RPM spec or DEB rules files, etc.), both to detail the same paths if you intend to replace those files, and to copy the parameters for the `configure` script from there -- especially if your system is not currently running NUT v2.8.1 or newer (which embeds this information to facilitate in-place upgrade rebuilds).
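For example, on Debian/Ubuntu-like systems one way to inspect the distribution's build recipe could be the following (assuming `deb-src` entries are enabled in your APT sources); the `configure` arguments would be visible in the `debian/rules` file:

```
:; apt-get source nut
:; less nut-*/debian/rules
```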
Note that the primary focus of the in-place automated-configuration mode is on critical run-time options, such as OS user accounts, configuration location and state/PID paths, so it alone might not replace your driver binaries if the package had put them into an obscure location like `/lib/nut`. It would, however, install init-scripts or systemd units referring to the new locations specified by the current build, so such old binaries would just consume disk space but not run.
This goes similarly to the usual build and install from Git:
```
:; cd /tmp
:; git clone https://github.com/networkupstools/nut
:; cd nut
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --maybe-some-other-options
:; make -j 4 all && make -j 4 check && sudo make install
```
Note that `make install` does not currently handle all the nuances that packaging installation scripts would, such as customizing filesystem object ownership, daemon restarts, etc., or even creating locations like `/var/state/ups` and `/var/run/nut` as part of the `make` target (but e.g. the delivered `systemd-tmpfiles` configuration can handle that for a large part of the audience) => issue #1298
At this point you should revise the locations for PID files (e.g. `/var/run/nut`) and pipe files (e.g. `/var/state/ups`): check that they exist and that their permissions remain suitable for the NUT run-time user selected by your configuration. Then, typically, stop your original NUT drivers, data server (`upsd`) and `upsmon`, and restart them using the new binaries.
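If the directories are missing, a minimal sketch for creating them could look like this -- assuming the run-time user and group are both `nut`; the exact names, ownership and modes depend on your configuration and distribution defaults, so check the values actually selected by your `configure` run:

```
:; sudo mkdir -p /var/state/ups /var/run/nut
:; sudo chown nut:nut /var/state/ups /var/run/nut
:; sudo chmod 0770 /var/state/ups /var/run/nut
```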
Although mentioned in the config-prereqs document, remember to `apt install libsystemd-dev` (on Debian/Ubuntu-like systems) for the `--with-libsystemd` flag to work (notifications and better service-readiness integration). With the prerequisites in place, the feature should be automatically detected and enabled by default.
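A quick check that the development files are visible (via `pkg-config`, which such detection typically relies on) might be:

```
:; pkg-config --modversion libsystemd \
   || echo "libsystemd development files not found"
```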
For modern Linux distributions with systemd, this could go like below, to re-enable the services (creating proper symlinks) and get them started:
```
:; cd /tmp
:; git clone https://github.com/networkupstools/nut
:; cd nut
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --with-libsystemd --maybe-some-other-options
:; make -j 4 all && make -j 4 check && \
   { sudo systemctl stop nut-monitor nut-server || true ; } && \
   { sudo systemctl stop nut-driver.service || true ; } && \
   { sudo systemctl stop nut-driver.target || true ; } && \
   { sudo systemctl stop nut.target || true ; } && \
   sudo make install && \
   sudo systemctl daemon-reload && \
   sudo systemd-tmpfiles --create && \
   sudo systemctl disable nut.target nut-driver.target nut-monitor nut-server nut-driver-enumerator.path nut-driver-enumerator.service && \
   sudo systemctl enable nut.target nut-driver.target nut-monitor nut-server nut-driver-enumerator.path nut-driver-enumerator.service && \
   { sudo systemctl restart udev || true ; } && \
   sudo systemctl restart nut-driver-enumerator.service nut-monitor nut-server
```
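After the restart, a quick sanity check could look like this (`myups` being a placeholder for a section name from your `ups.conf`):

```
:; systemctl list-units 'nut*'
:; upsc myups@localhost
```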
Note the several attempts to stop old service units -- their naming changed from the 2.7.4 and older releases, through 2.8.0, and up to the current codebase.
You may also have to restart (or reload, if supported) some system services if your updates impact them, like `udev` for updated USB support (note also PR #1342 regarding the change from a `udev.rules` to a `udev.hwdb` file with NUT v2.8.0 or later -- you may have to remove the older file manually).
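A hedged sketch for locating (and, after review, removing) such leftovers, and for reloading `udev` afterwards -- the exact file names and locations vary by distribution and NUT revision, so inspect before deleting anything:

```
:; find /etc/udev /lib/udev /usr/lib/udev -name '*nut*' 2>/dev/null
### After removing the obsolete rules file(s):
:; sudo udevadm control --reload
:; sudo udevadm trigger
```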
Alternately, if you just want to test a newly built driver -- especially if you added support for new USB `VID:PID` pairs -- make sure it starts as `root` (`sudo DRIVERNAME -u root ...` on the command line, or `RUN_AS_USER` in `ups.conf`), so that it does not care much about `devfs` permissions.
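For example, probing as `root` with the in-place built driver from the earlier examples (device-specific values elided, as above):

```
:; sudo ./drivers/nutdrv_qx -u root -s tempups \
    -d1 -DDDDDD -x port=auto \
    -x vendorid=... -x productid=... \
    -x subdriver=...
```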
If you are iterating NUT builds from GitHub, or local development branches, you may get away with shorter constructs that just restart the services (if you know there were no changes to the unit file definitions), e.g.:
```
:; cd /tmp
:; git clone -b master https://github.com/networkupstools/nut
:; cd nut
:; git checkout -b issue-1234 ### your PR branch name, arbitrary
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --maybe-some-other-options

### Iterate your code changes (e.g. PR draft), build and install with:
:; make -j 4 all && make -j 4 check && \
   sudo make install && \
   sudo systemctl daemon-reload && \
   sudo systemd-tmpfiles --create && \
   sudo systemctl restart nut-driver-enumerator.service nut-monitor nut-server
```
As discussed in issue #2559, it may be desirable both to keep NUT installed nominally via packaging (for asset tracking, user-account audits, etc.) and to have the actually installed binaries and other files come from a custom build.
Some of the ways to go about it include:
- Investigating packaging recipes used by the distribution, and tweaking them to build a package with your chosen custom source of the NUT codebase
- (Eventually) Using reference packaging recipes provided by the NUT codebase itself.
- Telling the packaging system to not update NUT from the distribution's repositories, as shown in the sketch after this list. Don't forget to make a note for your future self to eventually re-allow this, when the version of NUT officially shipped by your OS distribution becomes newer and better than the custom build.
  - For Debian/Ubuntu-like systems this may involve `apt-mark hold` on the `nut`, `nut-client`, `nut-monitor` and `nut-server` packages.
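A minimal sketch of that hold/unhold cycle (package names as commonly used in Debian/Ubuntu repositories):

```
:; sudo apt-mark hold nut nut-client nut-monitor nut-server
### ...later, when the distribution version catches up:
:; sudo apt-mark unhold nut nut-client nut-monitor nut-server
```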
Hope this helps,
Jim Klimov