As discussed with @oleg-nenashev after his What's new in LibreCores CI? talk at ORConf2019, I believe there is room for collaboration between ghdl/docker and librecores. I did not bring this up before because of librecores/ci.librecores.org#5. Now that the lack of contribution guidelines is acknowledged, and since I could gain some first-hand insight about what to expect from them, I think we can discuss technical details.
Context
ghdl is an open-source analyzer, compiler and simulator for VHDL. It also has experimental support for synthesis (which generates a VHDL netlist). Moreover, tgingold/ghdlsynth-beta allows GHDL to be used as a frontend for YosysHQ/yosys. Along with YosysHQ/SymbiYosys, formal verification with VHDL is possible. In fact, Open Source Formal Verification in VHDL was a talk by @pepijndevos at ORConf2019.
ghdl/docker is the repo where all the ghdl/* docker images are defined, built and published. On the one hand, GHDL is tested on Debian, Fedora, Ubuntu, Windows and macOS. On the other hand, GHDL provides multiple optional but very useful features, such as a language server with plugins for vscode/emacs, or the already mentioned ghdlsynth-beta plugin. As a result, we currently maintain ~100 images.
We are not completely happy with maintaining a subset (~6) of those images, which do not contain any dependencies specific to GHDL. Those exist only because upstream projects at YosysHQ do not provide official docker images. We tried to contribute to those projects, but maintainers do not seem to be interested in providing/maintaining docker images. See YosysHQ/yosys#1152, YosysHQ/yosys#1285, YosysHQ/yosys#1287, YosysHQ/sby#58, YosysHQ/icestorm#77, etc.
The list of images that I'd like to migrate from GHDL to librecores is the following:
ghdl/cache:formal: contains a tarball with YosysHQ/SymbiYosys (master) and Z3Prover/z3 (master) prebuilt for images based on Debian Buster.
ghdl/cache:gtkwave: contains a tarball with GtkWave (gtkwave3-gtk3) prebuilt for images based on Debian Buster.
ghdl/synth:yosys: includes YosysHQ/yosys (master).
ghdl/synth:symbiyosys: includes the tarball from ghdl/cache:formal and Python3 on top of ghdl/synth:yosys.
ghdl/synth:nextpnr: includes YosysHQ/nextpnr (master).
ghdl/synth:icestorm: includes cliffordwolf/icestorm (master).
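As a usage sketch for the images listed above (not taken from the repos), running formal verification with the symbiyosys image would look roughly like this; the bind mount, working directory and project file name are illustrative assumptions:

```sh
# Run SymbiYosys from the ghdl/synth:symbiyosys image on a project in the
# current directory; 'counter.sby' is a hypothetical SymbiYosys project file.
docker run --rm -t \
  -v "$(pwd)":/src -w /src \
  ghdl/synth:symbiyosys \
  sby -f counter.sby
```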
Base image
At ghdl/docker, we use Debian Buster (debian:buster-slim) as the base image for all the ghdl/synth:* images. Here, image librecores-ci is based on ubuntu:16.04 (https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile#L23). Since Ubuntu is based on Debian, it is possible to use the same dockerfiles with one or more --build-arg to build multiple images with the same features/tools but with different bases. However, the maintenance effort is slightly increased.
IMHO, keeping latest Ubuntu LTS (Bionic 18.04) and Debian stable (Buster 10) is worth it. This is because Debian Buster is used as a robust base in many companies, and because available SDCard images for boards (such as PYNQ) are based on Ubuntu LTS.
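A minimal sketch of the --build-arg approach (file, image and package names are illustrative, not the actual ghdl/docker dockerfiles): the base image is passed as a build argument, so the same dockerfile can be built on top of several bases.

```sh
# Write a dockerfile that takes its base image as a build argument
# (illustrative; the real ghdl/docker dockerfiles are more elaborate).
cat > Dockerfile.yosys <<'EOF'
ARG IMAGE=debian:buster-slim
FROM $IMAGE
RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends yosys \
 && rm -rf /var/lib/apt/lists/*
EOF

# Build the same features on two different bases:
docker build -f Dockerfile.yosys --build-arg IMAGE=debian:buster-slim -t synth/yosys:buster .
docker build -f Dockerfile.yosys --build-arg IMAGE=ubuntu:bionic      -t synth/yosys:bionic .
```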
monoimage vs per tool image
The current approach in this repo is to install all the tools in a single image (see https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile#L23). This makes images easier to distribute/use, since all users need to follow exactly the same instructions. However, on the one hand, the size of the image is larger than required for users looking for a single tool/feature. This is especially relevant in CI environments, where images need to be constantly pulled. On the other hand, it is difficult to put a limit on which tools should/shouldn't be included.
To be precise, librecores-ci includes fusesoc, iverilog, verilator, yosys and cocotb, but none of gtkwave, symbiyosys, nextpnr, icestorm, GHDL or VUnit. A single image containing all of them would be too large. That's why I suggest a modular approach: there should be an image for each tool, with just the minimum dependencies for it to work. Of course, some images can be based on others. For example, symbiyosys requires yosys.
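A minimal sketch of one tool image layered on another (the package list is an assumption; as noted in the list above, the real image copies the prebuilt tarball from ghdl/cache:formal rather than building from scratch):

```sh
# SymbiYosys image on top of the yosys image: only the extra runtime
# dependency (Python 3) is added; the dockerfile is passed on stdin.
docker build -t synth/symbiyosys - <<'EOF'
FROM ghdl/synth:yosys
RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*
# In the real images, the prebuilt SymbiYosys/Z3 tarball from ghdl/cache:formal
# would be brought in here, e.g. with COPY --from=ghdl/cache:formal.
EOF
```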
The mentioned images from ghdl/docker are an example of this approach. There is a snippet in https://github.com/tgingold/ghdlsynth-beta#docker, which shows how to use the beta, nextpnr and icestorm images to synthesize and program an icestick.
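A rough sketch of such a flow with per-tool containers (the 'ghdl/synth:beta' tag, the yosys plugin invocation, the device/package flags and all file names are assumptions; see the linked snippet for the actual commands):

```sh
# Synthesize with GHDL+Yosys, place & route with nextpnr, then pack and program
# an icestick, each step running in its own tool container sharing the workdir.
docker run --rm -v "$(pwd)":/src -w /src ghdl/synth:beta \
  yosys -m ghdl -p 'ghdl blink.vhd -e blink; synth_ice40 -json blink.json'
docker run --rm -v "$(pwd)":/src -w /src ghdl/synth:nextpnr \
  nextpnr-ice40 --hx1k --package tq144 --pcf blink.pcf --json blink.json --asc blink.asc
docker run --rm -v "$(pwd)":/src -w /src ghdl/synth:icestorm \
  icepack blink.asc blink.bin
# USB passthrough shown simplistically; host permissions/udev rules may be needed.
docker run --rm --device /dev/bus/usb -v "$(pwd)":/src -w /src ghdl/synth:icestorm \
  iceprog blink.bin
```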
multi-stage builds
Docker's multi-stage builds allow images to be slimmed down by keeping build dependencies explicitly separated from runtime dependencies. Currently, no clean-up is performed in https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile (ref #5). Conversely, in ghdl/docker multi-stage builds are used intensively. For example: https://github.com/ghdl/docker/blob/master/dockerfiles/cache_yosys
Moreover, by enabling DOCKER_BUILDKIT when images are built, intermediate stages that are not required for the target are skipped. On the one hand, this speeds up the build while allowing unrelated tools/steps to be defined in the same file. On the other hand, it is useful for sharing a single dockerfile across multiple architectures/OSes.
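A minimal sketch of the pattern (package names and the trivial build step are illustrative): a build stage carries the heavy toolchain, a runtime stage only copies the artifacts, and with DOCKER_BUILDKIT=1 only the stages needed for the requested target are built.

```sh
# Multi-stage dockerfile: 'build' compiles an artifact, 'runtime' only carries it.
cat > Dockerfile.tool <<'EOF'
FROM debian:buster-slim AS build
RUN apt-get update -qq && apt-get install -y --no-install-recommends build-essential
RUN mkdir -p /opt/tool/bin \
 && printf 'int main(void){return 0;}\n' > hello.c \
 && gcc -o /opt/tool/bin/hello hello.c

FROM debian:buster-slim AS runtime
COPY --from=build /opt/tool /opt/tool
ENV PATH=/opt/tool/bin:$PATH
EOF

# With BuildKit enabled, building the 'runtime' target skips unrelated stages.
DOCKER_BUILDKIT=1 docker build -f Dockerfile.tool --target runtime -t tool:latest .
```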
multiarch images and manifests
Combining Docker and QEMU, it is possible to build docker images for foreign architectures (e.g. arm32v7 or arm64v8). Project dbhi/qus provides a lightweight ready-to-use image that allows the kernel's binfmt handlers to be configured on Docker Desktop, Travis CI, GitHub Actions, etc. dbhi/docker is another project that partially overlaps with ghdl/docker, as it provides multiarch images (amd64, arm32v7 and arm64v8) based on ubuntu:bionic, including GHDL, GtkWave, Python, etc.
Multiple tools (GHDL, icestorm, yosys, verilator, etc.) are supported on amd64, armv7/aarch32 and aarch64. Therefore, it is desirable to provide images for those architectures too.
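A sketch of the mechanism (shown here with the widely known multiarch/qemu-user-static registration image for brevity; dbhi/qus provides a lighter equivalent): once QEMU binfmt handlers are registered, images for foreign architectures can be built and run on an amd64 host.

```sh
# Register QEMU binfmt_misc handlers in the host kernel (one-off, privileged).
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Afterwards, arm32v7 images run transparently on an amd64 host:
docker run --rm arm32v7/ubuntu:bionic uname -m   # reports armv7l
```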
On the one hand, images for arm/arm64 hosts allow open source EDA tools to be used not only on devices such as Raspberry Pi or ROCK960, but also on ZYNQ/PYNQ/MicroZED/ZEDboard/Ultra96. Precisely, images from dbhi/docker are used on RPi, PYNQ and ROCK960 boards. This is useful to build low-cost Jenkins farms, and for software-hardware co-execution on SoCs.
On the other hand, QEMU and Docker can be used on amd64 workstations/servers to avoid cross-compilation and/or for CI testing of apps for foreign architectures. Precisely, binaries built in an arm32v7/ubuntu:bionic image on an amd64 workstation can be copied and successfully executed on a Xilinx board with a PYNQ SD image (versions v2.3 or v2.4). I.e., the same build scripts can be used, without any cross-compilation toolchain.
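Regarding manifests, per-architecture tags can then be grouped under a single multiarch tag (image names are illustrative):

```sh
# Push per-architecture images and group them under one multiarch tag.
# (docker manifest requires the experimental CLI features to be enabled.)
docker push librecores/ci:yosys-amd64
docker push librecores/ci:yosys-arm32v7
docker push librecores/ci:yosys-arm64v8

docker manifest create librecores/ci:yosys \
  librecores/ci:yosys-amd64 \
  librecores/ci:yosys-arm32v7 \
  librecores/ci:yosys-arm64v8
docker manifest push librecores/ci:yosys
```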
GitHub Actions
Although GitHub Actions is still in beta, it was announced that it will be available to all users in November. Independently of having any other (external) CI service, I think it'd be desirable to use this feature, since it provides tighter integration with the repo and the timeout is set to 6h. Furthermore, using GitHub's registry instead of, or in addition to, Docker Hub might be discussed.
Both ghdl/docker and dbhi/docker include examples of YAML workflows to build and publish docker images. However, versioning/tagging is not implemented in either of them. This is because the build scripts are written in bash and they are already hard enough to maintain.
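As a sketch of what versioning/tagging could look like inside such a workflow (the tag scheme is an assumption, not something implemented in those repos):

```sh
# Derive a tag from git metadata and push it alongside the moving 'latest' tag.
VERSION="$(git describe --tags --always)"
docker build -t librecores/ci:latest .
docker tag librecores/ci:latest "librecores/ci:${VERSION}"
docker push librecores/ci:latest
docker push "librecores/ci:${VERSION}"
```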
Build toolkit
A relevant issue that I have not solved yet is that the scheme of all the images that are built in librecores, ghdl/docker, dbhi/docker, vunit/docker, etc. is a DAG (directed acyclic graph). For example:
```
+-------------+ +------------------+ +--------------------+
|ubuntu:bionic| |debian:buster-slim| |other base images...|
+-----+-------+ +--------+---------+ +--------------------+
| |
+-----+----+-----------------------+--------+
| | | | |
| +-----------+-----------+------+--------+ |
v v | | v v | |
build gtkwave | | build yosys | |
+ v v + v v
| runtime gtkwave | runtime yosys
| + | +
v v v v
+-+--------+--------+ +-+---------+-------+
|librecores/gui:base| |librecores/ci:yosys|
+--------+----------+ +---------+---------+
| |
| |
| +--------------------+ |
+----->+librecores/gui:yosys+<---+
+--------------------+
```
Currently:
Steps/tasks build gtkwave, runtime gtkwave, build yosys and runtime yosys need to be executed once for each base image.
It is not possible to create a new image (librecores/gui:yosys, in the example diagram above) by merging two existing images (librecores/gui:base and librecores/ci:yosys), even though both are based on the same image:
```
  B
 / \
A   D
 \ /
  C
```
Therefore, the steps to build librecores/gui:yosys (D) on top of librecores/ci:yosys (B) are exactly the same as those required to build librecores/gui:base (C) on top of the base image (A).
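The duplication is easy to see in dockerfile terms (a contrived sketch; the GUI packages are assumptions): the only difference between building D on top of B and building C on top of A is the FROM line, while the remaining steps are copied verbatim.

```sh
# librecores/gui:base (C): GUI bits on top of the plain base image (A).
docker build -t librecores/gui:base - <<'EOF'
FROM debian:buster-slim
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
    xvfb x11vnc && rm -rf /var/lib/apt/lists/*
EOF

# librecores/gui:yosys (D): exactly the same steps, just on top of ci:yosys (B).
docker build -t librecores/gui:yosys - <<'EOF'
FROM librecores/ci:yosys
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
    xvfb x11vnc && rm -rf /var/lib/apt/lists/*
EOF
```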
There are multiple approaches to handle this complexity:
I think that second-generation OCI tools, such as buildkit or buildah, might partially support features such as merging layers. Unfortunately, I have not had time to investigate yet.
A custom dockerfile composition tool can be used. Such a tool would take small snippets/recipes to either install build dependencies and actually build a tool, or install runtime dependencies and copy the artifacts from a previous stage/image. Then, provided a base image and a list of EDA tools, the builder would generate/build a dockerfile by picking the recipes from the DAG. As a matter of fact, this is very similar to how nixery works. However, nixery is designed for a single host/base image and to install packages from nix, instead of building tools from sources (master).
Such a tool would be useful not only for internal usage, but also to allow users to generate and build their own dockerfiles. This is because some companies need to have full control over the source of the docker images they use.
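A very rough sketch of what such a composition tool could look like (entirely hypothetical; the snippets/ layout, file names and tag scheme are made up): given a base image and a list of tools, it concatenates per-tool dockerfile snippets into a single dockerfile and builds it.

```sh
#!/usr/bin/env bash
# compose.sh <base-image> <tool>... : generate and build a dockerfile from
# per-tool snippets stored in snippets/<tool>.dockerfile (hypothetical layout).
# usage: ./compose.sh debian:buster-slim yosys nextpnr
set -e
base="$1"; shift
{
  echo "FROM ${base}"
  for tool in "$@"; do
    cat "snippets/${tool}.dockerfile"
  done
} > Dockerfile.generated
docker build -f Dockerfile.generated -t "eda/$(IFS=-; echo "$*"):${base##*:}" .
```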
/cc @oleg-nenashev @Nancy-Chauhan @wallento @olofk