Developer experiences from the trenches
Mon 07 August 2017 by Michael Labbe
tags code
Between the cloud, VMs, Docker and cheap laptops, I run into more unconfigured shell environments than I ever did before. In the simple old days you used to get a computer and configure it. You reaped the productivity of that configuration for years. Nowadays environments are ridiculously disposable. The tyranny of the default has become incredibly powerful.
I decided to take the power back and create a self-installing bootstrap script which I could use to configure any new system with a Bash shell. This ended up being a one-day hack that has made my life a lot more sane. My requirements were:
It must self-install and self-configure without needing to type any commands.
It must be accessible everywhere from an easy-to-remember URL so I don’t need to copy/paste.
It must optionally let me choose how each system is configured.
It must run anywhere — inside stripped down Docker containers, etc.
To build this, I decided on Bash scripting. Perl, Python and Ruby are not always available. Bash, while not quite as ubiquitous as /bin/sh and Busybox, is close enough.
Makeself creates self-extracting archives. You download a single shell script that unpacks itself to a temp directory and runs a setup script with access to a hierarchy of bundled files. This is exactly what I needed.
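Building the archive itself is a one-liner. A rough sketch, assuming the files to install live in ./payload alongside a setup.sh entry point (both names are placeholders):
# bundle ./payload into a single self-extracting script; when someone runs
# bootstrap.sh, makeself unpacks it to a tempdir and executes ./setup.sh there
./makeself.sh ./payload bootstrap.sh "Frogtoss bootstrap" ./setup.sh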
In order to centrally host the bootstrap script, I used Amazon S3. S3 bucket URLs are notoriously long, but Amazon gives you the ability to use a CNAME on a subdomain that you own. This means I could use a subdomain like https://bootstrap.frogtoss.com that is backed by S3, guaranteeing the bootstrap is accessible virtually anywhere in the free world.
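The one catch, if I remember S3's rules correctly, is that the bucket must be named after the hostname; the DNS record then looks something like this:
bootstrap.frogtoss.com.  CNAME  bootstrap.frogtoss.com.s3.amazonaws.com.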
What remained was a long day of enjoyable hacking that produced a set of very personal dotfiles, Emacs tweaks and sed manipulations, converting a basic install into something as usable as my most tweaked workstation.
Now I have a chained command, similar to the following, that fully configures any Linux instance:
rm -f bootstrap.sh;
wget http://bootstrap.frogtoss.com/bootstrap.sh;
chmod +x bootstrap.sh; sudo ./bootstrap.sh
Tue 16 August 2016 by Michael Labbé
tags code
If you are distributing a library, you have a responsibility to make it as trouble-free as possible to build and use. To do otherwise litters the Internet with a project that frustrates and consumes the time of potentially hundreds of programmers. (And when have you met a programmer with extra time?)
This presents an interesting build challenge: how do you create the most compatible build system while simultaneously burdening the user as little as possible?
Evidently, a lot of people turn to GNU Make. GNU Make, however, calls out to bash on Windows and uses Unix path slashes. Good luck getting that to work seamlessly with Visual Studio. The standalone GNU Make executable that the FSF hosts is over ten years old and is riddled with bugs. It hangs on fork() in Windows 10. Do not tell your users to download this. Do not tell your users to download msys2 or Cygwin so they can build your code if you respect their time.
Another option is CMake. Sadly, its sole installer download is 55MB, bundles Qt and a full documentation set, and forces your users to parse syntax that they likely don't understand. CMake generates projects with hardcoded paths, which means the generated files can't be reused.
SCons requires Python 2 to be present. We live in a Python 3 world; asking users to install a second Python implementation side by side is just rude.
In light of these options, Premake is a breath of fresh air. It's not perfect, but you can download a single 1MB self-contained binary which consists of a small C-based engine and bundled Lua scripts. Premake generates build scripts with relative paths and gets you up and running in most popular environments very quickly.
But, wait! You can do even better. You can run Premake for your users, checking the generated projects into the repository. Nothing generated by Premake is specific to your computer’s setup. This removes two steps from building for your users: downloading Premake and running it. Awesome!
Premake has a nifty action feature, making it straightforward to script new actions, such as running itself multiple times to generate projects.
Doing so has a couple of subtleties. Firstly, Premake is limited to generating project files in the directory that the premake5.lua script resides in. One directory with multiple builds in it is a mess. One option is to create a subdirectory per build type, copy the premake5.lua script in there, run it, and then delete it. This sounds hacky, but it actually works without giving you any trouble.
Another thing to keep in mind when generating all of your Makefiles at once is that the Makefiles are operating system-specific. A Windows-based makefile will link different libraries, for instance. You need to generate build scripts per platform.
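Here is a rough sketch of such an action using Premake's newaction API. The build list and directory names are illustrative, not copied from any real script:
-- regenerate every build flavor in its own subdirectory by copying
-- premake5.lua in, running it there, and deleting the copy afterwards
newaction {
    trigger     = "dist",
    description = "Regenerate all checked-in build scripts",
    execute     = function()
        local builds = {
            { dir = "gmake_linux",   cmd = "--os=linux gmake"   },
            { dir = "gmake_macosx",  cmd = "--os=macosx gmake"  },
            { dir = "gmake_windows", cmd = "--os=windows gmake" },
            { dir = "vs2010",        cmd = "vs2010"             },
        }
        for _, build in ipairs(builds) do
            os.mkdir(build.dir)
            os.copyfile("premake5.lua", build.dir .. "/premake5.lua")
            -- run premake against the copied script from inside the subdir
            os.execute("cd " .. build.dir .. " && premake5 " .. build.cmd)
            os.remove(build.dir .. "/premake5.lua")
        end
    end
}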
In order to build from a subdirectory, you need to specify your paths relative to the generated subdirectory. This can be resolved succinctly at the top of your premake5.lua script:
-- assumes the premake5.lua is in a subdirectory called "build" under
-- the project root.
local root_dir = path.join(path.getdirectory(_SCRIPT),"../../")
local build_dir = path.join(root_dir,"build/")
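From there, every path in the script can be written relative to the project root instead of to whichever build subdirectory the copied script happens to run from. A sketch, with a made-up source layout:
-- all paths resolve through root_dir, so the same script works from any
-- generated build subdirectory (the src/ and include/ layout is hypothetical)
workspace "Example"
    configurations { "Debug", "Release" }

project "example"
    kind "ConsoleApp"
    language "C"
    files { path.join(root_dir, "src/*.c") }
    includedirs { path.join(root_dir, "include") }
    targetdir build_dir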
My library, Native File Dialog, has a Premake 5 script action called "dist" which does exactly this. If I need to update the build scripts, I type premake5 dist and they are all refreshed. My library users need only run the generated builds. Feel free to copy the code for yourself.
Zero dependency building for users and a one meg standalone dependency for the package maintainer! This is the smallest price to pay for cross platform building I’ve ever come across.
Sun 31 May 2015 by Michael Labbe
tags code
Git-svn is the bridge between Git and SVN. It is more dangerous than descending a ladder into a pitch-black, bottomless pit. With the ladder, you would use the tactile response of your foot hitting thin air as a prompt to stop descending. With git-svn, you just sort of slip into the pit, and short of being Batman, you're not getting back up and out.
There are plenty of pages on the Internet talking about how to use Git-svn, but not a lot of them explain when you should avoid it. Here are the major gotchas:
If you clone your Git-svn repo, say to another machine, know that it will be hopelessly out of sync once you run git svn dcommit. Running dcommit reorders all of the changes in the git repo, rewriting hashes in the process.
When pushing or pulling changes from the clone, Git will not be able to match hashes. This warning saves you from a late-stage manual re-commit of all your changes from the cloned machine.
Rebasing is systematic cherry-picking. All of your pending changes are reapplied to the head of the git repo. In real-world scenarios, this creates conflicts which must be resolved by manually merging. Any time there is a manual merge, the integrity of the codebase is subject to the accuracy of your merge. People make mistakes; conflict resolution can introduce bugs and force teams to re-test the build.
This might seem obvious, but think of it in the context of the previous admonishment. If developers are committing to SVN as you perform time-consuming rebases, you are racing to finish a rebase so you can commit before you are out of date.
Getting into a rebase, dcommit, fail, rebase loop is a risk. Don't hold on to too many changes; continuously rebasing forces you to manually re-merge.
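For reference, the loop in question is the standard git-svn round trip; the repository URL below is a placeholder:
git svn clone -s https://svn.example.com/project   # one-time import of the SVN repo
git commit -am "local work"                        # commit locally as usual
git svn rebase                                     # replay local commits onto the latest SVN head
git svn dcommit                                    # push each local commit to SVN, rewriting hashes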
Here are a handful of scenarios where git-svn comes in handy and sidesteps these problems:
Committing from a single, un-cloned repo and running dcommit back to SVN.
Thu 27 November 2014 by Michael Labbe
tags code
I open sourced Native File Dialog last night. It's a lean C library that pops up a native file dialog on Linux, Mac and Windows. It seems to have scratched an itch that a lot of people have, as it's been starred 50 times on GitHub overnight, and my announcement tweet has received 27 retweets and 52 favorites as of this morning.
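For the curious, calling it looks roughly like this. This is a from-memory sketch, so check nfd.h for the exact signatures and result constants:
#include <stdio.h>
#include <stdlib.h>
#include <nfd.h>

int main(void)
{
    nfdchar_t *out_path = NULL;

    /* filter list, default path, out parameter */
    nfdresult_t result = NFD_OpenDialog("png,jpg", NULL, &out_path);

    if (result == NFD_OKAY) {
        printf("picked %s\n", out_path);
        free(out_path);               /* caller frees the returned path */
    } else if (result == NFD_CANCEL) {
        puts("user cancelled");
    } else {
        printf("error: %s\n", NFD_GetError());
    }
    return 0;
}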
I did learn a few things writing and releasing this. Here they are in no particular order.
Linux users are reluctant to link against GTK+. Plenty of people don't want the overhead of GTK+ just to get a few paths back from the user. This makes sense if your app benefits from staying small. In order to get GTK+ up, you have to link against quite a few libraries: Cairo, GLib and GTK+ itself, to name three.
Interestingly, there must be similar overhead in Windows, but it isn’t as visible; most of what you need is given to you through COM object initialization, not linking libraries. Linux has no official toolkit, and as a result, the cost is much more visible to developers.
GTK+ has no uninitialization. Once you have paid the cost of initializing GTK+, it permanently allocates private memory in your process. This is a shame. At least I can release COM objects.
This has led people to suggest Zenity and pipes; actually launching the file dialog in a separate process! I have always been skeptical of this approach — what happens when the dialog opens in another workspace, or the main app crashes, or X remoting causes other idiosyncrasies?
At any rate, I would be happy to accept an nfd_zenity.c patch which allows people to trade off between these problems.
People really like to just add one file to their project. Native File Dialog builds with SCons, which, admittedly, is not something every developer has installed. STB has spoiled people; they really want to just drop a file into their current build! I am going to provide instructions that help people bypass SCons.
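Until then, a rough sketch of what compiling without SCons might look like on Linux. The file names, include path and GTK package are from memory, so check the repository before relying on them:
# compile the portable core and the GTK backend straight into your own build
cc -c nfd_common.c nfd_gtk.c $(pkg-config --cflags gtk+-3.0) -I./include
cc your_app.c nfd_common.o nfd_gtk.o $(pkg-config --libs gtk+-3.0) -o your_app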
Feedback is hard to come by when you work on a small team. If you open source something useful, devs will scrutinize it and provide feedback. It doesn’t have to be a large piece of code — actually, it’s probably better that it’s not! If NFD did ten things, the issues people raised would have been broader, and the criticisms received may not have been as beneficial to me.