Feedback from someone who is used to managing a large (>1500 components) software stack in C / C++ / Fortran / Python / Rust / etc.:
- (1) Provide a way to compile without internet access and to specify the associated dependency paths manually. This is absolutely critical.
Most 'serious' multi-language package managers and integration systems build in a sandbox without internet access, for security and reproducibility reasons.
If your build system does not allow building offline with manually specified dependencies, you will make the lives of integrators and package maintainers miserable and they will avoid your project.
- (2) Never ever build with '-O3 -march=native' by default. This is always a red flag and a sign of immaturity. People expect code to be portable and shippable.
Good default options are the CMake equivalent of "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG).
-O3 can be argued. -march=native is always, always a mistake.
- (3) Allow your build tool to be built by another build tool (e.g. CMake).
Anybody caring about reproducibility will want to start from sources, not from a pre-compiled binary. This also matters for cross-compilation.
- (4) Please offer compatibility with pkg-config (https://en.wikipedia.org/wiki/Pkg-config) and, if possible, CPS (https://cps-org.github.io/cps/overview.html), for both consumption and generation.
These are what will allow interoperability between your system and other build systems (see the sketch after this list).
- (5) Last but not least: seriously consider the cross-compilation use case.
It is common in the world of embedded systems to cross compile. Any build system that does not support cross-compilation will be de facto banned from the embedded domain.
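To make point (4) concrete: a hypothetical libfoo.pc (every name in it is illustrative) is all another build system needs to consume your library without hardcoded paths:

    # libfoo.pc, installed to e.g. /usr/local/lib/pkgconfig/
    prefix=/usr/local
    includedir=${prefix}/include
    libdir=${prefix}/lib

    Name: libfoo
    Description: Hypothetical example library
    Version: 1.2.0
    Cflags: -I${includedir}
    Libs: -L${libdir} -lfoo

Consumers then need nothing more than:

    cc main.c $(pkg-config --cflags --libs libfoo)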
As someone who has also spent two decades wrangling C/C++ codebases, I wholeheartedly agree with every statement here.
I have an even stronger sentiment regarding cross compilation though: in any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
Always design build systems assuming cross compilation. It hurts nothing if it just so happens that your host and target platform/architecture end up being the same, and saves you everything down the line if you need to also build binaries for something else.
> In any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
This is one of the huge wins of Zig. Any Zig host compiler can produce output for any supported target. Cross compiling becomes straightforward.
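For example, cross-compiling a C file is just a flag away (a sketch; any target triple Zig bundles a libc for works the same way):

    zig cc -target aarch64-linux-musl -O2 hello.c -o hello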
> Never ever build with '-O3 -march=native' by default. This is always a red flag and a sign of immaturity.
Perhaps you can see how there are some assumptions baked into that statement.
What assumptions would those be?
Shipping anything built with -march=native is a horrible idea. Even on homogeneous targets like one of the clouds, you never know if they'll e.g. switch CPU vendors.
The correct thing to do is use microarch levels (e.g. x86-64-v2) or build fully generic if the target architecture doesn't have MA levels.
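For instance, with GCC or Clang (illustrative flags):

    g++ -O2 -march=x86-64-v2 -o app main.cpp   # portable baseline: any x86-64 CPU with SSE4.2/POPCNT
    g++ -O2 -o app main.cpp                    # fully generic x86-64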
I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
I am willing to hear arguments for other approaches.
Not the OP, but: -march says the compiler can assume that the features of that particular CPU architecture family, which is broken out by generation, can be relied upon. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or from different vendors.
-mtune says "generate code that is optimised for this architecture" but it doesn't trigger arch specific features.
Whether these are right or not depends on what you are doing. If you are building gentoo on your laptop you should absolutely -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.
If you are shipping code for a wide variety of architectures, and crucially the method of shipping is binary form, then you want to think more about what you might want to support.

You could do either: if you're shipping standard software, pick a reasonable baseline (check what your distribution uses in its cflags). If, however, you're shipping compute-intensive software, perhaps you load a shared object per CPU family, or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output, and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
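One fair way to get the 'one copy per CPU family in a single binary' effect is GCC function multiversioning; a minimal sketch (my illustration of the idea, not what the Intel compiler did; assumes GCC on x86-64 Linux):

    /* dot.c -- build with: gcc -O2 dot.c -o dot */
    #include <stdio.h>

    /* GCC emits one clone per listed target; the dynamic loader picks
       the best supported clone at startup via an ifunc resolver. */
    __attribute__((target_clones("default", "sse4.2", "avx2")))
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
        printf("%f\n", dot(a, b, 4));
        return 0;
    }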
What?! seriously?!
I’ve never heard of anyone doing that.
If you use a cloud provider and a remote development environment (VSCode Remote/JetBrains Gateway) then you're wrong: cloud providers swap out the CPUs without telling you, and can sell newer CPUs at older prices if there's less demand for the newer CPUs; you can't rely on that.
To take an old naming convention, even an E3-Xeon CPU is not equivalent to an E5 of the same generation. I’m willing to bet it mostly works but your claim “I build on the exact hardware I ship on” is much more strict.
The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs— but when deployed it will be a Xeon scalable datacenter CPU or an Epyc.
Hell, I work in gamedev and we cross compile basically everything for consoles.
… not everyone uses the cloud?
Some people, gasp, run physical hardware, that they bought.
I'm willing to hear arguments for your approach?
it certainly has scale issues when you need to support larger deployments.
[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]
The only time I used -march=native was for a university assignment which was built and evaluated on the same server, and it allowed juicing an extra bit of performance. Using it basically means locking the program to the current CPU only.
However I'm not sure about -O3. I know it can make the binary larger, not sure about other downsides.
> The only time I used -march=native
It is completely fine to use -march=native, just do not make it the default for someone building your project.
That should always be something to opt-in.
The main reason is that software is a composite of (many) components. It quickly becomes a maintainability pain in the ass if any tiny library somewhere tries to sneak in '-march=native', which will make the final binary randomly crash with an illegal-instruction error when executed on any CPU that is not exactly the same as the host.
When you design a build system configuration, think of the others first (the users of your software), and of yourself after.
-O3 also makes build times longer (sometimes significantly), and occasionally the resulting program is actually slightly slower than -O2.
IME -O3 should only be used if you have benchmarks that show -O3 actually produces a speedup for your specific codebase.
Not assumptions, experience.
I fully concur with that whole post as someone who also maintained a C++ codebase used in production.
> -march=native is always, always a mistake
Gentoo user: hold my beer.
Gentoo binaries aren't shipped that way
Gentoo..... distributes binaries?
Yes
https://wiki.gentoo.org/wiki/Gentoo_Binary_Host_Quickstart
It's also an option on NixOS but I haven't managed to get it working unlike Gentoo.
>15000
15000 what?
1500 C/C++ individual software components.
The 15000 was a typo on my side. Fixed.
I see, thanks. I didn't mind the number, it just wasn't clear what it was about.
Besides Cargo, you might want to take a look at Python's pyproject.toml standard. https://packaging.python.org/en/latest/guides/writing-pyproj...
It's similar, but designed for an existing ecosystem. Cargo is designed for `cargo`, obviously.
But `pyproject.toml` is designed for the existing tools to all eventually adopt. (As well as new tools, of course.)
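A minimal sketch of that format (the name, version, and build backend are illustrative):

    [project]
    name = "example-pkg"
    version = "0.1.0"
    dependencies = ["requests>=2.31"]

    [build-system]
    requires = ["hatchling"]
    build-backend = "hatchling.build"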
The least painful C/C++ build tool I've used is xmake
https://github.com/xmake-io/xmake
The reason why I like it (beyond ease-of-use) is that it can spit out CMakeLists.txt and compile_commands.json for IDE/LSP integration and also supports installing Conan/vcpkg libraries or even Git repos.
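Then you use it like this; a minimal xmake.lua sketch (the target name, file layout, and the fmt package are illustrative):

    add_rules("mode.debug", "mode.release")
    add_requires("fmt")            -- fetched from the xmake-repo package registry

    target("demo")
        set_kind("binary")
        add_files("src/*.cpp")
        add_packages("fmt")

and build with `xmake`, or `xmake f -m release && xmake` for a release build.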
I would happily switch to it in a heartbeat if it was a lot more well-documented and if it supported even half of what CMake does.
As an example of what I mean, say I want to link to the FMOD library (or any library I legally can't redistribute as an SDK). Or I want to enable automatic detection on Windows where I know the library/SDK is an installer package. My solution, in CMake, is to just ask the registry. In XMake I still can't figure out how to pull this off. I know that's pretty niche, but still.
The documentation gap is the biggest hurdle. A lot of the functions/ways of doing things are poorly documented, if they are documented at all. Including a CMake library that isn't in any of the package managers, for example. It also has some weird quirks: automatic/magic scoping (which is NOT a bonus) along with a hacky "import" function instead of using native require.
All of this said, it does work well when it does work. Especially with modules.
Similar to premake, I have never been a fan of global state for defining targets. Give me an object or some handle that I call functions on / pass to functions. CMake eventually got this somewhat right by moving to target-based definitions, and since I've really learned it I have been kinda happy with it.
Agreed, xmake seems very well-thought-out, and supports the most modern use-cases (C++20 named modules, header unit modules, and `import std`, which CMake still has a lot of ceremony around). I should switch to it.
actually looks very similar to Meson [https://mesonbuild.com/], which is getting a lot of traction in FOSS [https://mesonbuild.com/Users.html]
e.g. from their docs:
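Something like this minimal meson.build (a sketch, names illustrative):

    project('demo', 'cpp')
    executable('demo', 'src/main.cpp')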
Meson is a python layer over the ninja builder, like cmake can be. xmake is both a build tool and a package manager, fast like ninja, and has no DSL; the build file is just Lua. It's more like cargo than meson is.
I didn't claim it was a package manager, just that it looked similar. The root post said "build tool", and that's what Meson is as well.
Other than that, both "python layer" and "over the ninja builder" are technically wrong. "python layer" is off since there is now a second implementation, Muon [https://muon.build/], in C. "over the ninja builder" is off since it can also use Visual Studio's build capabilities on Windows.
Interestingly, I'm unaware of other build-related systems that have multiple implementations, except Make (which is in fact part of the POSIX.1 standard.) Curious to know if there are any others.
I've had some experience with this but it seems to be rather slow, very niche and tbh I can't see a reason to use it over CMake.
Anyone can make a tool that solves a tiny part of the problem. However, the reason no such tool has caught on is all the weird special cases you need to handle before it can be useful. Even if you limit your support to desktop (OS X and Windows), that problem will be hard; adding the various Linux flavors is even more difficult, not to mention BSD. Those are the common/mainstream choices; Haiku is going to be very different, and I've seen dozens of others over the years, some of them with a following in their niche.

Then there are people building for embedded (QNX, VxWorks, or even no OS, just bare metal), each adding weirdness, and implying cross compiling, which makes everything harder because your assumptions are always wrong.
I'm sorry I have to be a downer, but the fact is if you can use the word "I" your package manager is obviously not powerful enough for the real world.
There are so many reasons why C/C++ build systems struggle, but imo power is the last of them. "Powerful" and "scriptable" build systems are what has gotten us into the swamp!
* Standards committee is allergic to standardizing anything outside of the language itself: build tools, dependency management, even the concept of a "file" is controversial!
* Existing poor state of build systems is viral - any new build system is 10x as complex as a clean room design because you have to deal with all the legacy "power" of previous build tooling. Build system flaws propagate - the moment you need hacks in your build, you start imposing those hacks on downstream users of your library also.
Even CMake should be a much better experience than it is - but in the real world major projects don't maintain their CMake builds to the point you can cleanly depend on them. Things like using raw MY_LIB_DIR variables instead of targets, hacky/broken feature detection flags etc. Microsoft tried to solve this problem via vcpkg, ended up having to patch builds of 90% of the packages to get it to work, and it's still a poor experience where half the builds are broken.
My opinion is that a new C/C++ build/package system is actually a solvable problem now with AI, because you can point Opus 4.6 or whoever at the massive pile of open-source dependencies and tell it, for each one, "write a build config for this package using my new build system", which cuts the Gordian knot of the ecosystem problem.
I will categorize this as a pattern I've seen which leads to stagnation, or at least aims for it. Usually these are built on one or more assumptions which don't hold. The flow of this pattern:

- Problem exists
- Proposals of solutions (varying quality), or not
- "You can't just solve this. It's complicated! This problem must exist" (the post I'm replying to)
- Problem gets solved, hopefully.

Anecdotes I'm choosing based on proximity to this particular problem: uv and cargo. uv because people said the same thing about Python packaging, and cargo because it's adjacent to C and C++ in terms of being a low-level compiled language used for systems programming, embedded/bare-metal etc.

The world is rich in complexity, subtlety, and exceptions to categorization. I don't think this should block us from solving problems.
I didn't say the problem couldn't be solved. I said the problem can't be solved by one person. There is a difference. (maybe it can be solved by one person over a few decades)
This is true. There is no way I could solve a problem of this scale by myself. That is why this is an open source project and open to everyone to make changes on. There is still much more to improve, this is only day 1 of release to the public.
I mean -- if I'm going to join a team to solve the hard 20%, I'd like to see the idea validated against the easy 80% first.
If it's really bad, at least the easy 20%.
Thank you everyone for the feedback so far! I just wanted to say that I understand this is not a fully cohesive and functional project for every edge case. This is the first day of releasing it to the public and it is only the beginning of the journey. I do not expect to fully solve a problem of this scale on my own; Craft is open source and open to the community for development. I hope that as a community this can grow into a more advanced and widely adopted tool.
Nice. I have been thinking of making something similar. Now hopefully I don't have to!
Not sure how big your plans are.
My thoughts would be to start as a cmake generator but to eventually replace it. Maybe optionally.
And to integrate support for existing package managers like vcpkg.
At the same time, I'd want to remain modular enough that it's not all or nothing. I also don't like lock-in.
But right now package management and build system are decoupled completely. And they are not like that in other ecosystems.
For example, Cmake can use vcpkg to install a package but then I still have to write more cmake to actually find and use it.
> For example, Cmake can use vcpkg to install a package but then I still have to write more cmake to actually find and use it.
I have this solved at our company. We have a tool built on top of vcpkg, to manage internal + external dependencies. Our cmake linker logic leverages the port names and so all you really do is declare your manifest file (vcpkg.json) then declare which one of them you will export publicly.
Everything after that is automatic including the exported cmake config for your library.
The installation instructions being a `curl | sh` writing to the user's bashrc does not inspire confidence.
They did say it was inspired by cargo, which is often installed using rustup as such:
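    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh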
I don’t love this approach either (what a security nightmare…) - but it is easy to do for users and developers alike. Having to juggle a bunch of apt-like repositories for different distros is a huge time sink and adds a bunch of build complexity. Brew is annoying with its formulae vs tap vs cask vs cellar - and the associated ruby scripting… And then there’s windows - ugh.
I wish there was a dead simple installer TUI that had a common API specification so that you could host your installer spec on your.domain.com/install.json - point this TUI at it and it would understand the fine grained permissions required, handle required binary signature validation, manifest/sbom validation, give the user freedom to customize where/how things were installed, etc.
Given you're about to run a binary, it's no worse than that.
It is definitely worse. At least a binary is constant, on your system, and can be analyzed. curl|sh can give you different responses than just curling. Far, far worse.
Only if you download and analyse it. You're free to download the install script and analyze that too in the same way. The advantage the script has is that it's human-readable, unlike the binary you're about to execute blindly.
This is fitting for something simulating cargo, which is a huge supply chain risk itself.
Having to work around a massive C++ software project daily, I wish you luck. We use conan2, and while it can be very challenging to use, I've yet to find something better that can handle incorporating as dependencies ancient projects that still use autoconf or even custom build tooling. It's also very good at detecting and enforcing ABI compatibility, although there are still some gaps. This problem space is incredibly hard and improving it is a prime driver for the creation of many of the languages that came after C/C++
I find that conan2 is mostly painful with ABI. Binaries from GCC are all backwards compatible, as are C++ standard versions. The exception is the C++11 ABI break.
And yet it will insist on only giving you binaries that match exactly. Thankfully there are experimental extensions that allow it to automatically fall back.
Uses CMake? Sorry, not for me. Call me old but I prefer good old make or batch. Maybe it's because I can understand those tools. Debugging CMake build problems made me hate it. Also, I code for embedded CPUs, and most of the time CMake is just overkill and does not play well with the compiler/binutils provided. The platform independence is just not happening in those environments.
When you need a configuration step, cmake will actually save you a lot of time, especially if you work cross platform or even cross compile. I love to hate cmake as much as the next guy, and it would be hard to design a worse scripting language, but I'll take it any time over autoconf. Some of the newer tools may well be more convenient - I tried Bazel, and it sure wasn't (for me).
If you're happy to bake one config in a makefile, then cmake will do very little for you.
> most of the time CMake is just overkill and does not play well with the compiler/binutils provided
You need to define a CMake toolchain[1] and pass it to CMake with --toolchain /path/to/file on the command line, or via the `toolchainFile` key in a CMake preset. I've compiled for QNX and ARM32 boards with CMake with no issues, but this needs to be done.
[1]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains....
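A minimal toolchain-file sketch for a generic ARM32 Linux target (the compiler triple and sysroot path are illustrative):

    # arm32-linux.cmake -- pass via: cmake --toolchain arm32-linux.cmake -B build
    set(CMAKE_SYSTEM_NAME Linux)
    set(CMAKE_SYSTEM_PROCESSOR arm)
    set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
    set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
    set(CMAKE_FIND_ROOT_PATH /opt/arm-sysroot)
    # search headers/libraries only in the sysroot, programs only on the host
    set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
    set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
    set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)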
For toy projects good old Make is fine...but at some point a project gets large enough that you need something more powerful. If you need something that can deal with multiple layers of nested sub-repositories, third-party and first-party dependencies, remote and local projects, multiple build configurations, dealing with non-code assets like documentation, etc, etc, etc - Make just isn't enough.
For simple projects, sure; Make is easier for simple things, I will grant. However, when your project gets complex at all, make becomes a real pain and cmake becomes much easier.
Cmake has a lot of warts, but they have also put a lot of effort into finding and fixing all those weird special cases. If your project uses CMake odds are high it will build anywhere.
Odds are high the distro maintainer will lose hair trying to package it
Heh, looks like cmake-code-generators are all the rage these days ;)
Here's my feeble attempt using Deno as base (it's extremely opinionated though and mostly for personal use in my hobby projects):
https://github.com/floooh/fibs
One interesting chicken-and-egg problem I couldn't solve is how to figure out the C/C++ toolchain that's going to be used without first running cmake on a 'dummy project'. For some toolchain/IDE combos (most notably Xcode and VStudio), cmake's toolchain detection unfortunately takes a lot of time.
I'm intrigued by the idea of writing one's own custom build system in the same language as the target app/game; it's probably not super portable or general but cool and easy to maintain for smaller projects: https://mastodon.gamedev.place/@pjako/115782569754684469
Compared to Conan, what are the advantages?
Craft has project management and generates starter project structure. You can generate header and source files with boilerplate starter code. Craft manages the building of the project so you don’t need to write much CMake. You can also save project structures as templates and instantiate those templates in new projects ready to go.
How can you be better than CMake?
Seems to solve a problem very similar to Conan or vcpkg but without its own package archive or build scripts. In general, unlike Cargo/Rust, many C/C++ projects dynamically link libraries and often require complex Makefile/shell script etc magic to discover and optionally build their dependencies.
How does craft handle 'diamond' patterns, where two dependencies depend on different versions of the same library as transitive dependencies (whether for static or dynamic linking, or as header-only includes), without custom build scripts like the Conan approach?
The tough truth is that there already is a cargo for C/C++: Conan2. I know, python, ick. I know, conanfile.py, ick. But despite its warts, Conan fundamentally CAN handle every part of the general problem. Nobody else can. Profiles to manage host vs. target configuration? Check. Sufficiently detailed modeling of ABI to allow pre-compiled binary caching, local and remote? Check, check, check. Offline vs. Online work modes? Check. Building any relevant project via any relevant build system, including Meson, without changes to the project itself? Check. Support for pulling build-side requirements? Check. Version ranges? Check. Lockfiles? Check. Closed-source, binary-only dependencies? Check.
Once you appreciate the vastness of the problem, you will see that having a vibrant ecosystem of different competing package managers sucks. This is a problem where ONE standard that can handle every situation is incalculably better than many different solutions which solve only slices of the problem. I don't care how terse craft's toml file is - if it can't cross compile, it's useless to me. So my project can never use your tool, which implies other projects will have the same problem, which implies you're not the one package manager / build system, which means you're part of the problem, not the solution. The Right Thing is to adopt one unilateral standard for all projects. If you're remotely interested in working on package managers, the best way to help the human race is to fix all of the outstanding things about Conan that prevent it from being the One Thing. It's the closest to being the One Thing, and yet there are still many hanging chads:
- its terribly written documentation
- its incomplete support for editable packages
- its only nascent support for "workspaces"
- its lack of NVIDIA recipes
If you really can't stand to work on Conan (I wouldn't blame you), another effort that could help is the common package specification format (CPS). Making that a thing would also be a huge improvement. In fact, if it succeeds, then you'd be free to compete with conan's "frontend" ergonomics without having to compete with the ecosystem.
> The tough truth is that there already is a cargo for C/C++: Conan2
Is it though?
When I read the tutorial: https://docs.conan.io/2/tutorial/consuming_packages/build_si...
It says to hand write a `CMakeLists.txt` file. This is before it has me create a `conanfile.txt` even.
I have the same complaint about vcpkg.
It seems like it takes `(conan | vcpkg) + (cmake | autotools) + (ninja | make)` to do the basics of what cargo does.
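For comparison, the moving parts of a basic Conan 2 + CMake consumer look roughly like this (package name/version illustrative):

    # conanfile.txt
    [requires]
    fmt/10.2.1

    [generators]
    CMakeDeps
    CMakeToolchain

    # then, alongside a hand-written CMakeLists.txt:
    conan install . --output-folder=build --build=missing
    cmake -B build -DCMAKE_TOOLCHAIN_FILE=build/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
    cmake --build build

which rather underlines how many pieces are involved next to a single `cargo build`.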
This certainly seems less awful than the typical C building process.
What I've been doing to manage dependencies in a way that doesn't depress me much has been Nix flakes, which allows me a pretty straightforward `nix build` with the correct dependencies built in.
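A minimal flake sketch of that workflow (assumes a conventional Makefile-based project; the SDL2 dependency and names are illustrative):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          packages.x86_64-linux.default = pkgs.stdenv.mkDerivation {
            pname = "demo";
            version = "0.1";
            src = self;
            buildInputs = [ pkgs.SDL2 ];  # deps come from the pinned nixpkgs, not the host system
          };
        };
    }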
I'm just a bit curious though: a lot of C libraries are system-wide and usually require the system package manager (e.g. libsdl2-dev). Does this have an elegant way to handle those?
Yes, many libraries are system-wide, that is true. This is something I had on the list of features to add: system dependencies. Thank you for the feedback!
In the age of AI, tools like this are pointless. Especially new ones, given the existence of make, cmake, premake and a bunch of others.
A C++ build system, at the core, boils down to calling gcc foo.c -o foo.obj / link foo.obj foo.exe (please forgive me if I got the syntax wrong).
Sure, you have more .c files, and you pass some flags but that's the core.
I've recently started a new C++ program from scratch.
What build system did I write?
I didn't. I told Claude:
"Write a bun typescript script build.ts that compiles the .cpp files with cl and creates foo.exe. Create release and debug builds, trigger release build with -release cmd-line flag".
And it did it in minutes and it worked. And I can expand it with similar instructions. I can ask for release build with all the sanitize flags and claude will add it.
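The result was along these lines (a reconstruction, not the actual script; a minimal sketch assuming cl is on PATH and sources live in src/):

    // build.ts -- run with: bun build.ts [-release]
    import { Glob } from "bun";

    const release = Bun.argv.includes("-release");
    const flags = release ? ["/O2", "/DNDEBUG"] : ["/Zi", "/Od"];
    const sources = [...new Glob("src/*.cpp").scanSync(".")];

    const proc = Bun.spawnSync(
      ["cl", "/nologo", "/EHsc", ...flags, ...sources, "/Fe:foo.exe"],
      { stdout: "inherit", stderr: "inherit" },
    );
    process.exit(proc.exitCode ?? 1);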
The particulars don't matter. I could have asked for a makefile, or cmake file or ninja or a script written in python or in ruby or in Go or in rust. I just like using bun for scripting.
The point is that in the past I tried to learn cmake and, good lord, it's days spent learning something that I'll spend 1 hr using.
It just doesn't make sense to learn any of those tools given that claude can give me a working version of any build system in minutes.
It makes even less sense to create new build tools. Even if you create the most amazing tool, I would still choose spending a minute asking claude over spending days learning the arbitrary syntax of a new tool.
This is a fair and valid point. However, why leave your workflow to write a prompt to an AI when you can run simple commands in your workspace? Also, you are most likely paying to use the AI, while Craft is free and open source and will only continue to improve. I respect your feedback though, thank you!
You're missing finding library/include paths, build configuration (`-D` flags for conditional compilation), fetching these from remote repositories, and versioning.
As long as it's for C/C++ and not C or C++, I'm skeptical.
Why do you say this? I respect it, I'm just curious.
Project description is AI generated, even the HN post is AI generated, why should I spend any energy looking into your project when all you're doing is just slinging AI slop around and couldn't be bothered to put any effort in yourself?
What about cmkr?
https://cmkr.build/
“Show HN” has really become a Claude code showcase in the last 6 months, maybe it's time to sunset the format at this point …
Yup, I read "— think Cargo, but for C/C++." and closed the tab.
Yesterday I had to wrestle with CMake.
But how does this tool figure out where the header files and build instructions are for the libraries being included? Any expected layout or industry-wide consensus?
I believe it supports only projects having a working cmake setup, no extra magic
I suspect it depends on a specific directory structure, e.g. look at this generated cmake file:
https://github.com/randerson112/craft/blob/main/CMakeLists.t...
...and for custom requirements a manually created CMakeLists.extras.txt as escape hatch.
Unclear to me how more interesting scenarios like compiler- and platform-specific build options (enable/disable warnings, defines, etc...), cross-compilation via cmake toolchain files (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be handled. E.g. just trivial things like "when compiling for Emscripten, include these source files, but not those others".
CMake piles up various generations of idioms, so there are multiple ways of doing it, but personally I've learned to steer away from find_package() and other magical functions. Get all your dependencies as subdirectories (whichever way you prefer) and use add_subdirectory(). Use find_package() only in so-called "config" mode, where you explicitly instruct cmake where to find the config, and only for large precompiled dependencies.
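A sketch of that style (the dependency names are illustrative):

    # vendored dependency, built from source as part of the project
    add_subdirectory(third_party/fmt)

    # large precompiled dependency, config mode only, location given explicitly
    find_package(Qt6 CONFIG REQUIRED COMPONENTS Core
                 PATHS /opt/qt6 NO_DEFAULT_PATH)

    add_executable(app src/main.cpp)
    target_link_libraries(app PRIVATE fmt::fmt Qt6::Core)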
If you think cmake isn't very good, the solution isn't to add more layers of crap around cmake, but to replace it. Cmake itself exists because a lot of humans haven't bothered to read the gnu make manual, and added more cruft to manage this. Please don't add to this problem. It's a disease
As much of a dog as cmake is, "just use make!" does not solve many of the problems that cmake makes a go at. It's like saying go write assembler instead of C because C has so many footguns.
GNU Make has a debugger. This alone makes it far superior to every other build tool I've ever seen. The cmake debugging experience is "run a google search, and try random stuff recommended by other people that also have no idea how the thing works". This shouldn't be acceptable.
That hasn't been true for a few years at least. CLion has had CMake debugging since CMake 3.27 (https://www.jetbrains.com/help/clion/cmake-debug.html), ditto VSCode and probably other C IDEs I am not familiar with, and Gradle offers the same for Java. GNU Make is hardly exclusive.
This is very true. My thought process was that since the majority of projects already run on CMake, I would simply build off of that and take advantage of what CMake is good at, while making the more difficult operations easier. Thank you for your feedback!
I'm all for shitting on CMake, but Jesus, to suggest Make as a replacement/improvement is an unhinged take.
I'm suggesting that people creating build systems read the make manual. Surely this isn't controversial?
Please consider adding `cargo watch` - that would be a killer feature!
Yes! This is definitely on the list of features to add. Thank you for the feedback!
FWIW: there is something fundamentally wrong with a meta-meta build system. I don't think you should bother generating or wrapping CMake, you should be replacing it.
Cmake is doing a lot of underappreciated work under the hood that would be very hard to replicate in another tool, tons of accumulated workarounds for all the different host operating systems, compiler toolchains and IDEs, it's also one of few build tools which properly support Windows and Visual Studio.
Just alone reverse engineering the Xcode and Visual Studio project file formats for each IDE version isn't fun, but this "boring" grunt work is what makes cmake so valuable.
The core ideas of cmake are sound, it's only the scripting language that sucks.
Another fresh example of what you don't like: https://www.youtube.com/watch?v=ExSlx0vBMXo Building C++: It Doesn't Have to be Painful! - Nicole Mazzuca - Meeting C++ 2025
Build systems don't plan to converge in the future =)
My thoughts exactly. I thought this was going to be some new thing, but it's just yet another reason that I'll stick with Makefiles.
Do your Makefiles work across Linux, macOS and Windows (without WSL or MingW), GCC, Clang and MSVC, or allow loading the project into an IDE like Xcode or Visual Studio though? That's why meta-build-systems like cmake were created, not to be a better GNU Make.
There is something fundamentally wrong with Windows or Visual Studio that it requires ugly solutions.
Windows and Visual Studio solutions are perfectly fine. MSBuild is a declarative build syntax in XML, it's not very different from a makefile.
XML is already terrible. But the main problem seems to be that they created something similar but incompatible to make.
Ok, then just cl.exe instead of gcc or clang. Completely different set of command line options from gcc and clang, but that's fine. C/C++ build tooling needs to be able to deal with different toolchains. The diversity of C/C++ toolchains is a strength, not a weakness :)
One nice feature of MSVC is that you can describe the linker dependencies in the source files (via #pragma comment(lib, ...)), this enables building fairly complex single-file tools trivially without a build system like this:
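For instance, the whole source of a hypothetical mytool.c (the #pragma is the real MSVC mechanism; the tool itself is illustrative):

    /* mytool.c */
    #include <windows.h>
    #pragma comment(lib, "user32.lib")  /* linker dependency recorded in the object file */

    int main(void) {
        MessageBoxA(NULL, "Built with just: cl mytool.c", "mytool", MB_OK);
        return 0;
    }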
and built with nothing more than:

    cl mytool.c

...without having to specify system dependencies like kernel32 etc... on the cmdline.

Cmake is infamously not a build system. It is a build system generator.
This is now a build system generator generator. This is the wrong solution imho. The right solution is to just build a build system that doesn’t suck. Cmake sucks. Generating suck is the wrong angle imho.
Impression before actually trying this:
CMake is a combination of a warthog of a specification language and mechanisms for handling a zillion idiosyncrasies and corner cases of everything.
I doubt that <10,000 lines of C code can cover much of that.
I am also doubtful that developers are able to express the exact relations and semantic nuances they want to, as opposed to some default that may make sense for many projects, but not all.
Still - if it helps people get started on simpler or more straightforward projects - that's neat :-)
Just switch to bazel, copy my hermetic build config and just use it ... yes, you can hate me now.
Will take C only 51 years to adopt.