After a year of not really touching this blog, I figured it was maybe time to start doing that. Over the past year (2020), there was a global pandemic, but more importantly, I got moved to a core components team at work. The work hasn't yet panned out to be what I want, but in the meantime, I took the liberty of scripting up most of my day-to-day operations outside of actually coding.
Some of the commands this script provides:

- `buildall`
- `newdb`

Before I get into the what, I have to describe some of the why.
In my day job, I work on an Angular/C# monorepo, specifically the core components/features part. The idea is this monorepo should allow people to easily reuse code that's been written for past projects, but we all know how that goes without serious planning. To make matters worse, most of the codebase is really just spaghetti no one wants to touch. New components become legacy in 4 - 6 weeks.
It's pretty gross to work in. Before the workscript, builds would take me 6 minutes, time to reviewable code sometimes took half a day, and a CRUD page would take an entire day for a "normal" dev to produce. There's really just a lot of weight that the Angular build system can't handle.
I've abstracted and automated a large amount of my repeated operations through this workscript, and I hope it might be of some use to someone else.
`buildall` is kind of like Make, except that it can store build state, and (re)run commands that may or may not output state. With a Makefile building a C program, you might see something like this:
```make
%.o: %.c
	$(CC) -c -o $@ $<
```
For the uninitiated, this is a rule to transform every C source (`.c`) file into an object (`.o`) file. In the recipe text, `$@` expands to the target on the left of the colon, and `$<` expands to the first prerequisite on the right. By virtue of how Make works, whenever a `.c` file is newer than its corresponding `.o` file, the rule gets executed again.
However, the Angular/C# system I work in doesn't compile every TypeScript file into a JavaScript file, and every C# file doesn't become its own DLL or object file. That just isn't how those toolchains work (as far as I know).
So, I came up with the next best thing. We have a file full of build commands with associated metadata strings. This is stored in `~/.builditems`. When a user executes `workscript buildall`, it iterates through `~/.builditems`, pulling out the command, a metadata string, and the directory from the root of the git repo. Armed with this information, we can `pushd` into that directory, run the command, and store the state string if we're successful, or bail out early if there are errors to fix.
We store the state string so the next time we run the command, it first checks to see if the state string is in `~/.buildstate`. If it's present, this command executed without issue, and we can continue to the next one. If not, we execute it.
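The loop is simple enough to sketch. This is a minimal version, not the real thing, and it assumes each line of `~/.builditems` holds a state string, a directory, and a command separated by tabs (the actual file format may well differ):

```shell
#!/usr/bin/env bash
# Minimal sketch of buildall. Assumes each ~/.builditems line is
# "state<TAB>dir<TAB>command"; the real format may differ.
buildall() {
  local builditems="${BUILDITEMS:-$HOME/.builditems}"
  local buildstate="${BUILDSTATE:-$HOME/.buildstate}"
  local root state dir cmd
  root="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
  while IFS=$'\t' read -r state dir cmd; do
    # Already recorded in ~/.buildstate: this command built cleanly before.
    if grep -qxF "$state" "$buildstate" 2>/dev/null; then
      continue
    fi
    pushd "$root/$dir" >/dev/null || return 1
    if eval "$cmd"; then
      echo "$state" >>"$buildstate"  # remember the success
      popd >/dev/null
    else
      popd >/dev/null
      return 1  # bail out early so the error can be fixed
    fi
  done <"$builditems"
}
```

Running it twice in a row shows the point: the second pass finds every state string already present and does nothing.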
This really comes in handy when there are lots of commands to run, and you need to manually intervene every time there's an error, like on a weird merge, or when editing shared code that might be perfect for one project and make another explode.
`newdb` is very similar to the previous command in that it has a list of containers and their correct versions stored in `~/.containers`. The trick here is that we've decided to run database containers on different ports instead of having multiple databases in a single container. So, my solution has a port remapping string that you might not need, should you implement this for yourself.
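A sketch of that loop, assuming each line of `~/.containers` is a tab-separated name, image:version, and host port. The file format, the SQL Server image, and the container-side port 1433 are all assumptions on my part:

```shell
# Minimal sketch of newdb. Assumes ~/.containers lines look like
# "name<TAB>image:version<TAB>host-port"; the real format may differ.
newdb() {
  local containers="${CONTAINERS:-$HOME/.containers}"
  local docker="${DOCKER:-docker}"  # override with "echo" for a dry run
  local name image port
  while IFS=$'\t' read -r name image port; do
    "$docker" rm -f "$name" 2>/dev/null || true  # drop any stale container
    # Remap the container's default database port (1433 for SQL Server) to a
    # per-project host port so several databases can run side by side.
    "$docker" run -d --name "$name" -p "$port:1433" "$image"
  done <"$containers"
}
```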
Oh my gosh. The number of times Entity Framework has ruined my day is far too high. That being said, there isn't much to wrap around it. Because it made more sense to me, I added `push` and `pop` commands to treat migrations like a stack. I added `update` because if I have a wrapper for adding and removing migrations, I might as well have one for running the migration.

Because I often need to review the migration (I don't trust Microsoft), I added another command, `mostrecent`, which brings up the most recent migration in my `$EDITOR`.
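The wrappers are thin. A sketch, where the `Migrations/` directory path is an assumption about the project layout (yours may live elsewhere):

```shell
# Thin wrappers around dotnet-ef, treating migrations like a stack.
push()   { dotnet ef migrations add "$1"; }
pop()    { dotnet ef migrations remove; }
update() { dotnet ef database update; }

# Open the newest migration file for review. The Migrations/ path is an
# assumption; override MIGRATIONS_DIR if yours lives elsewhere.
mostrecent() {
  local dir="${MIGRATIONS_DIR:-Migrations}"
  "${EDITOR:-vi}" "$(ls -t "$dir"/*.cs | head -n 1)"
}
```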
Not much to see here, but it does come in handy.
`reap` really only exists because Microsoft and Google don't know how to write programs. On Ubuntu 20.10, as of writing, every time a `dotnet` command is executed and "exits", it hangs around. They're basically just all zombie processes. So, if the computer ever feels sluggish, I just `reap` the processes and get back to work.
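The whole command is essentially one line. A sketch, with the process name parameterized (an embellishment of mine, not necessarily in the real script) so you can aim it at whatever misbehaves:

```shell
# Sketch of reap: kill any lingering processes by name, dotnet by default.
reap() {
  pkill "${1:-dotnet}" && echo "reaped" || echo "nothing to reap"
}
```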
This is the good one.
On a good day, CI will fail only five times due to OOM issues. The problem's gotten so bad that, completely out of desperation, I wrote this command to fetch my open PRs that failed CI and re-trigger them.
It uses GitHub's API for PRs and Statuses, and pipes the responses through `jq` to format them nicely for the Unix utilities. If it finds a PR that failed, it pushes another "Trigger Build" commit.
If someone from my company's reading this, the weight of the core repo is just too heavy for this manual CI machine setup: we really need an elastic build situation, and we needed it months ago.
My favorite part of the workscript is the `while (( "$#" )); do ... done` loop that handles the command line arguments. It's set up in such a way that you can effortlessly chain these commands together.
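Wrapped in a function for illustration (the real script inlines it at top level), the loop looks roughly like this:

```shell
# Sketch of the argument loop: each word on the command line dispatches to
# the matching function, so commands chain left to right.
dispatch() {
  while (( "$#" )); do
    case "$1" in
      buildall)   buildall ;;
      newdb)      newdb ;;
      push)       shift; push "$1" ;;  # push consumes the migration name
      pop)        pop ;;
      update)     update ;;
      mostrecent) mostrecent ;;
      reap)       reap ;;
      *)          echo "unknown command: $1" >&2; return 1 ;;
    esac
    shift
  done
}

dispatch "$@"
```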
For example, let's say I'm iterating on a migration:
```shell
workscript pop push TestingMigration mostrecent
```
That'll let the machine remove the last migration, try to add a new one, and show it to me.
Another common thing I do is this:
```shell
workscript azlogin newdb update
```
I didn't talk about `azlogin`, but it reads from environment variables and logs me into the Azure cloud. After that, it'll reset my containers, then do a database update. This is useful at the end of a sprint, or again, if I'm testing migrations and I just need to auto-reset everything.