Update libcoral for Python 3.11 and modern versions of Tensorflow #36

Open. Wants to merge 40 commits into base: master.
- 057f505: Synced with latest commit of libedgetpu (feranick, Feb 29, 2024)
- 767c4bc: Remove outdated deb package for python 2 in docker (feranick, Feb 29, 2024)
- a4e6408: Update version of bazel to 6.1.0 in docker, needed for TF 2.15.0 (feranick, Feb 29, 2024)
- a858b8d: Update WORKSPACE to allow compilation of current version of TF (feranick, Feb 29, 2024)
- 28226de: Fix FTFS due to change in Eigen3 API (feranick, Feb 29, 2024)
- d88f35e: Replace signature_def_names with signature_keys (feranick, Feb 29, 2024)
- 5797c5e: Fix ambiguous call leading to FTFS by overriding TfLiteFloatArrayCopy (feranick, Feb 29, 2024)
- 3a45b84: Update Dockerfile.windows (feranick, Feb 29, 2024)
- f36e4f5: Updated README.md (feranick, Feb 29, 2024)
- accbdbc: Fix additional ambiguous call leading to FTFS by overriding TfLiteFl… (feranick, Feb 29, 2024)
- 1d5222d: Fix additional FTFS due to change in Eigen3 API (feranick, Feb 29, 2024)
- 6f46ccf: Use std=c++17, not c++14 (feranick, Mar 1, 2024)
- 743e804: Fix position of comment on TF version in WORKSPACE (feranick, Mar 1, 2024)
- aae943b: Updated rules-python to v0.31.0 and simplified WORKSPACE (feranick, Mar 1, 2024)
- 5d59e6f: Temporarily use fork of libedgetpu with initial support for TF 2.16 (feranick, Mar 1, 2024)
- 9638480: Sync with latest feranick/libedgetpu (feranick, Mar 1, 2024)
- a676b35: Add support for TF 2.16.0-rc0 (feranick, Mar 1, 2024)
- 5d42cc8: Sync with latest libedgetpu commit (feranick, Mar 1, 2024)
- 87812af: Revert commit aae943b5eca (feranick, Mar 1, 2024)
- 437af95: Sync with libedgetpu (feranick, Mar 1, 2024)
- 1c87cc1: Use more modern version of debian in docker.mk (feranick, Mar 3, 2024)
- f778cb8: Update distributions in build script (feranick, Mar 3, 2024)
- cd9d98f: Sync with latest libedgetpu (feranick, Mar 4, 2024)
- e5331a7: Build against TF 2.17.0-dev to resolve visibility issue in SCHEMA (feranick, Mar 5, 2024)
- 139442b: Add arm-specific compiler flags to fix compilation due to undefined… (feranick, Mar 5, 2024)
- 7c32c20: Revert previous commit for aarch64 (feranick, Mar 5, 2024)
- 98a9f2b: Revert commit 139442b also for armhf (feranick, Mar 5, 2024)
- d92c702: Use bazel 6.5.0 for TF 2.17.0 (feranick, Mar 6, 2024)
- 08d1c66: Sync with libedgetpu (feranick, Mar 6, 2024)
- e49f8f1: Reinstate revised version of commit 139442bf1 (feranick, Mar 6, 2024)
- bc7ee28: Resync with libedgetpu (feranick, Mar 8, 2024)
- 16f6bfa: Update stable TF version required to 2.16.1 (feranick, Mar 8, 2024)
- 7b421d0: Sync with libedgetpu for TF 2.17.0-rc0 (feranick, Jun 21, 2024)
- 501030f: Updated dockerfile.windows with latest dependencies (feranick, Jun 21, 2024)
- 3bd6aa0: Added support for tensorflow 2.17.0-rc1 (feranick, Jul 7, 2024)
- 576612c: Add support for TF 2.17.0 stable (feranick, Jul 12, 2024)
- 4e096ec: Added support for Ubuntu 24.04 (feranick, Aug 8, 2024)
- 139a39c: Change su user to root to allow compilation in Ubuntu 24.04 (feranick, Aug 9, 2024)
- b74ede2: Pass correct TF_PYTHON_VERSION during build (feranick, Aug 10, 2024)
- d7054de: Updated third party libraries in Docker.windows (feranick, Aug 11, 2024)
3 changes: 2 additions & 1 deletion .bazelrc
@@ -7,8 +7,9 @@ build --enable_platform_specific_config

build:linux --crosstool_top=@crosstool//:toolchains
build:linux --compiler=gcc
build:linux --cxxopt=-std=c++17

build:macos --cxxopt=-std=c++14
build:macos --cxxopt=-std=c++17

build:windows --incompatible_restrict_string_escapes=false
build:windows --cxxopt=/std:c++latest
2 changes: 1 addition & 1 deletion .gitmodules
@@ -1,6 +1,6 @@
[submodule "libedgetpu"]
path = libedgetpu
url = https://github.com/google-coral/libedgetpu
url = https://github.com/feranick/libedgetpu
[submodule "test_data"]
path = test_data
url = https://github.com/google-coral/test_data
2 changes: 1 addition & 1 deletion Makefile
@@ -43,7 +43,7 @@ BAZEL_BUILD_FLAGS := --compilation_mode=$(COMPILATION_MODE) \
ifeq ($(CPU),aarch64)
BAZEL_BUILD_FLAGS += --copt=-ffp-contract=off
else ifeq ($(CPU),armv7a)
BAZEL_BUILD_FLAGS += --copt=-ffp-contract=off
BAZEL_BUILD_FLAGS += --copt=-ffp-contract=off --copt=-mfp16-format=ieee
endif

# $(1): pattern, $(2) destination directory
6 changes: 3 additions & 3 deletions README.md
@@ -76,7 +76,7 @@ You need to install the following software:
1. Bazel for macOS from https://github.com/bazelbuild/bazel/releases
1. MacPorts from https://www.macports.org/install.php
1. Ports of `python` interpreter and `numpy` library: `sudo port install
python35 python36 python37 py35-numpy py36-numpy py37-numpy`
python38 python39 python310 python311 py38-numpy py39-numpy py310-numpy py311-numpy`
1. Port of `libusb` library: `sudo port install libusb`

Right after that all normal `make` commands should work as usual. You can run
@@ -88,6 +88,6 @@ Docker allows you to avoid complicated environment setup and build binaries for
Linux on other operating systems, e.g.,

```
make DOCKER_IMAGE=debian:buster DOCKER_CPUS="k8 armv7a aarch64" DOCKER_TARGETS=tests docker-build
make DOCKER_IMAGE=ubuntu:18.04 DOCKER_CPUS="k8 armv7a aarch64" DOCKER_TARGETS=tests docker-build
make DOCKER_IMAGE=debian:bookworm DOCKER_CPUS="k8 armv7a aarch64" DOCKER_TARGETS=tests docker-build
make DOCKER_IMAGE=ubuntu:22.04 DOCKER_CPUS="k8 armv7a aarch64" DOCKER_TARGETS=tests docker-build
```
88 changes: 84 additions & 4 deletions WORKSPACE
@@ -21,6 +21,86 @@ local_repository(
path = "libedgetpu",
)

# ==================================================================

# Add definition of tensorflow version 2.17.0 stable.
#http_archive(
# name = "org_tensorflow",
# urls = [
# "https://github.com/tensorflow/tensorflow/archive/ad6d8cc177d0c868982e39e0823d0efbfb95f04c.tar.gz",
# ],
# sha256 = "75b8dc9b33afff6f2e2d2e2dacc26dd0973bdcee94eec2af290828c1bc574bdc",
# strip_prefix = "tensorflow-" + "ad6d8cc177d0c868982e39e0823d0efbfb95f04c",
# )

http_archive(
name = "bazel_skylib",
sha256 = "74d544d96f4a5bb630d465ca8bbcfe231e3594e5aae57e1edbf17a6eb3ca2506",
urls = [
"https://storage.googleapis.com/mirror.tensorflow.org/github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
"https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
],
)

http_archive(
name = "rules_python",
sha256 = "9d04041ac92a0985e344235f5d946f71ac543f1b1565f2cdbc9a2aaee8adf55b",
strip_prefix = "rules_python-0.26.0",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.26.0/rules_python-0.26.0.tar.gz",
)

load("@rules_python//python:repositories.bzl", "py_repositories")

py_repositories()

load("@rules_python//python:repositories.bzl", "python_register_toolchains")
load(
"@org_tensorflow//tensorflow/tools/toolchains/python:python_repo.bzl",
"python_repository",
)

python_repository(name = "python_version_repo")

load("@python_version_repo//:py_version.bzl", "HERMETIC_PYTHON_VERSION")

python_register_toolchains(
name = "python",
ignore_root_user_error = True,
python_version = HERMETIC_PYTHON_VERSION,
)

load("@python//:defs.bzl", "interpreter")
load("@rules_python//python:pip.bzl", "package_annotation", "pip_parse")

NUMPY_ANNOTATIONS = {
"numpy": package_annotation(
additive_build_content = """\
filegroup(
name = "includes",
srcs = glob(["site-packages/numpy/core/include/**/*.h"]),
)
cc_library(
name = "numpy_headers",
hdrs = [":includes"],
strip_include_prefix="site-packages/numpy/core/include/",
)
""",
),
}

pip_parse(
name = "pypi",
annotations = NUMPY_ANNOTATIONS,
python_interpreter_target = interpreter,
requirements = "@org_tensorflow//:requirements_lock_" + HERMETIC_PYTHON_VERSION.replace(".", "_") + ".txt",
)

load("@pypi//:requirements.bzl", "install_deps")

install_deps()

# ==================================================================

load("@libedgetpu//:workspace.bzl", "libedgetpu_dependencies")
libedgetpu_dependencies()

@@ -37,7 +117,7 @@ load("@org_tensorflow//tensorflow:workspace0.bzl", "tf_workspace0")
tf_workspace0()

load("@coral_crosstool//:configure.bzl", "cc_crosstool")
cc_crosstool(name = "crosstool", cpp_version = "c++14")
cc_crosstool(name = "crosstool", cpp_version = "c++17")

# External Dependencies
http_archive(
@@ -57,10 +137,10 @@ glog_library(with_gflags=0)

http_archive(
name = "com_github_google_benchmark",
sha256 = "6e40ccab16a91a7beff4b5b640b84846867e125ebce6ac0fe3a70c5bae39675f",
strip_prefix = "benchmark-16703ff83c1ae6d53e5155df3bb3ab0bc96083be",
sha256 = "8e7b955f04bc6984e4f14074d0d191474f76a6c8e849e04a9dced49bc975f2d4",
strip_prefix = "benchmark-344117638c8ff7e239044fd0fa7085839fc03021",
urls = [
"https://github.com/google/benchmark/archive/16703ff83c1ae6d53e5155df3bb3ab0bc96083be.tar.gz"
"https://github.com/google/benchmark/archive/344117638c8ff7e239044fd0fa7085839fc03021.tar.gz"
],
)

8 changes: 4 additions & 4 deletions coral/detection/adapter.cc
@@ -78,11 +78,11 @@ std::vector<Object> GetDetectionResults(const tflite::Interpreter& interpreter,
// If a model has signature, we use the signature output tensor names to parse
// the results. Otherwise, we parse the results based on some assumption of
// the output tensor order and size.
if (!interpreter.signature_def_names().empty()) {
CHECK_EQ(interpreter.signature_def_names().size(), 1);
VLOG(1) << "Signature name: " << *interpreter.signature_def_names()[0];
if (!interpreter.signature_keys().empty()) {
CHECK_EQ(interpreter.signature_keys().size(), 1);
VLOG(1) << "Signature name: " << *interpreter.signature_keys()[0];
const auto& signature_output_map = interpreter.signature_outputs(
interpreter.signature_def_names()[0]->c_str());
interpreter.signature_keys()[0]->c_str());
CHECK_EQ(signature_output_map.size(), 4);
count = TensorData<float>(
*interpreter.tensor(signature_output_map.at("output_0")));
2 changes: 1 addition & 1 deletion coral/learn/backprop/layers.cc
@@ -40,7 +40,7 @@ MatrixXf CrossEntropyGradient(const MatrixXf& c, const MatrixXf& p) {
MatrixXf FullyConnected(const MatrixXf& mat_x, const MatrixXf& mat_w,
const MatrixXf& mat_b) {
MatrixXf mat_y = mat_x * mat_w;
mat_y.array().rowwise() += mat_b.array()(0, Eigen::all);
mat_y.array().rowwise() += mat_b.array()(0, Eigen::indexing::all);
return mat_y;
}

2 changes: 1 addition & 1 deletion coral/learn/backprop/softmax_regression_model.cc
@@ -129,7 +129,7 @@ void SoftmaxRegressionModel::Train(const TrainingData& data,
const auto& batch_indices =
GetBatchIndices(data.training_data, train_config.batch_size);
MatrixXf train_batch, labels_batch;
train_batch = data.training_data(batch_indices, Eigen::all);
train_batch = data.training_data(batch_indices, Eigen::indexing::all);

// Create one-hot label vectors
labels_batch = MatrixXf::Zero(train_config.batch_size, num_classes_);
10 changes: 5 additions & 5 deletions coral/learn/backprop/test_utils.cc
@@ -41,7 +41,7 @@ TrainingData ShuffleAndSplitData(const MatrixXf& data_matrix,
std::mt19937(rd()));
MatrixXf shuffled_data =
MatrixXf::Zero(data_matrix.rows(), data_matrix.cols());
shuffled_data = data_matrix(shuffled_indices, Eigen::all);
shuffled_data = data_matrix(shuffled_indices, Eigen::indexing::all);
std::vector<int> shuffled_labels(total_rows, -1);
for (int i = 0; i < total_rows; ++i) {
shuffled_labels[i] = labels_vector[shuffled_indices[i]];
@@ -50,9 +50,9 @@
// Eigen::seq boundaries are inclusive on both sides.
TrainingData fake_data;
fake_data.training_data =
shuffled_data(Eigen::seq(0, num_train - 1), Eigen::all);
shuffled_data(Eigen::seq(0, num_train - 1), Eigen::indexing::all);
fake_data.validation_data =
shuffled_data(Eigen::seq(num_train, Eigen::last), Eigen::all);
shuffled_data(Eigen::seq(num_train, Eigen::placeholders::last), Eigen::indexing::all);

fake_data.training_labels.assign(shuffled_labels.begin(),
shuffled_labels.begin() + num_train);
@@ -105,7 +105,7 @@ TrainingData GenerateMvnRandomData(const std::vector<int>& class_sizes,
MultiVariateNormalDistribution dist(means[i], cov_mats[i]);
MatrixXf samples = dist.Sample(n);
// Eigen::seq boundaries are inclusive on both sides.
data_matrix(Eigen::seq(start_index, start_index + n - 1), Eigen::all) =
data_matrix(Eigen::seq(start_index, start_index + n - 1), Eigen::indexing::all) =
samples.transpose();
labels_vector.insert(labels_vector.end(), n, i);
start_index += n;
@@ -127,7 +127,7 @@ TrainingData GenerateUniformRandomData(const std::vector<int>& class_sizes,
int n = class_sizes[i];
MatrixXf samples = MatrixXf::Random(total_cols, n);
// Eigen::seq boundaries are inclusive on both sides.
data_matrix(Eigen::seq(start_index, start_index + n - 1), Eigen::all) =
data_matrix(Eigen::seq(start_index, start_index + n - 1), Eigen::indexing::all) =
samples.transpose();
labels_vector.insert(labels_vector.end(), n, i);
start_index += n;
2 changes: 1 addition & 1 deletion coral/pipeline/internal/segment_runner.cc
@@ -82,7 +82,7 @@ absl::Status SegmentRunner::SetExternalTensorBuffer(const char* buffer,
// its memory here.
auto* quant_params_clone = reinterpret_cast<TfLiteAffineQuantization*>(
malloc(sizeof(TfLiteAffineQuantization)));
quant_params_clone->scale = TfLiteFloatArrayCopy(quant_params->scale);
quant_params_clone->scale = coral::internal::TfLiteFloatArrayCopy(quant_params->scale);
CHECK(quant_params_clone->scale);
quant_params_clone->zero_point =
TfLiteIntArrayCopy(quant_params->zero_point);
2 changes: 1 addition & 1 deletion coral/tflite_utils.cc
@@ -46,7 +46,7 @@ TfLiteAffineQuantization* TfLiteAffineQuantizationCopy(
auto* copy = static_cast<TfLiteAffineQuantization*>(
malloc(sizeof(TfLiteAffineQuantization)));
CHECK(copy);
copy->scale = TfLiteFloatArrayCopy(src->scale);
copy->scale = coral::TfLiteFloatArrayCopy(src->scale);
copy->zero_point = TfLiteIntArrayCopy(src->zero_point);
copy->quantized_dimension = src->quantized_dimension;
return copy;
21 changes: 15 additions & 6 deletions docker/Dockerfile
@@ -15,7 +15,6 @@ RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
sudo \
debhelper \
python \
python3-all \
python3-numpy \
python3-setuptools \
@@ -52,10 +51,20 @@ RUN if grep 'Bionic Beaver' /etc/os-release > /dev/null; then \

# On older Ubuntu these packages can't be installed in a multi-arch fashion.
# Instead we download the debs and extract them for build time linking.
RUN mkdir /debs && chmod a=rwx /debs && cd /debs && apt-get update && apt-get download \
libglib2.0-0 \
libglib2.0-0:armhf \
libglib2.0-0:arm64 \

RUN if grep 'Noble Numbat' /etc/os-release > /dev/null; then \
mkdir /debs && chmod a=rwx /debs && cd /debs && apt-get update && apt-get download \
libglib2.0-0t64 \
libglib2.0-0t64:armhf \
libglib2.0-0t64:arm64; \
else \
mkdir /debs && chmod a=rwx /debs && cd /debs && apt-get update && apt-get download \
libglib2.0-0 \
libglib2.0-0:armhf \
libglib2.0-0:arm64; \
fi

RUN cd /debs && apt-get update && apt-get download --ignore-missing \
libglib2.0-dev \
libglib2.0-dev:armhf \
libglib2.0-dev:arm64 \
@@ -78,7 +87,7 @@ RUN git clone https://github.com/raspberrypi/tools.git && \
cd tools && \
git reset --hard 4a335520900ce55e251ac4f420f52bf0b2ab6b1f

ARG BAZEL_VERSION=4.0.0
ARG BAZEL_VERSION=6.5.0
RUN wget -O /bazel https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh && \
bash /bazel && \
rm -f /bazel
83 changes: 83 additions & 0 deletions docker/Dockerfile.orig
@@ -0,0 +1,83 @@
ARG IMAGE
FROM ${IMAGE}

COPY update_sources.sh /
RUN /update_sources.sh

RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture arm64
RUN echo 'APT::Immediate-Configure false;' >> /etc/apt/apt.conf

RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
libc6-dev:arm64 \
libc6-dev:armhf \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
sudo \
debhelper \
python3-all \
python3-numpy \
python3-setuptools \
python3-six \
python3-wheel \
libpython3-dev \
libpython3-dev:armhf \
libpython3-dev:arm64 \
build-essential \
crossbuild-essential-armhf \
crossbuild-essential-arm64 \
libusb-1.0-0-dev \
libusb-1.0-0-dev:arm64 \
libusb-1.0-0-dev:armhf \
zlib1g-dev \
zlib1g-dev:armhf \
zlib1g-dev:arm64 \
pkg-config \
p7zip-full \
zip \
unzip \
curl \
wget \
git \
vim \
mc \
software-properties-common

# Bionic Beaver == Ubuntu 18.04
RUN if grep 'Bionic Beaver' /etc/os-release > /dev/null; then \
add-apt-repository ppa:ubuntu-toolchain-r/test \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y gcc-9 g++-9; \
fi

# On older Ubuntu these packages can't be installed in a multi-arch fashion.
# Instead we download the debs and extract them for build time linking.
RUN mkdir /debs && chmod a=rwx /debs && cd /debs && apt-get update && apt-get download \
libglib2.0-0 \
libglib2.0-0:armhf \
libglib2.0-0:arm64 \
libglib2.0-dev \
libglib2.0-dev:armhf \
libglib2.0-dev:arm64 \
libgstreamer1.0-0 \
libgstreamer1.0-0:armhf \
libgstreamer1.0-0:arm64 \
libgstreamer1.0-dev \
libgstreamer1.0-dev:armhf \
libgstreamer1.0-dev:arm64 \
libgstreamer-plugins-base1.0-0 \
libgstreamer-plugins-base1.0-0:armhf \
libgstreamer-plugins-base1.0-0:arm64 \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-base1.0-dev:armhf \
libgstreamer-plugins-base1.0-dev:arm64

RUN for d in /debs/*.deb; do dpkg -x $d /usr/system_libs; done

RUN git clone https://github.com/raspberrypi/tools.git && \
cd tools && \
git reset --hard 4a335520900ce55e251ac4f420f52bf0b2ab6b1f

ARG BAZEL_VERSION=6.5.0
RUN wget -O /bazel https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh && \
bash /bazel && \
rm -f /bazel