07 September 2018

Reflections on DEFCON CTF 2018

Played DEFCON CTF for my alma mater's CTF team from the 10th to 12th of August.  Having returned home and mostly fixed my sleep cycle, some reflections.  For another perspective on the same events, see Down to the Wire.

Why / how:

We had some summer interns this year from the college CTF team, and they were talking about DEFCON a lot during lunches.  I caught the bug and decided to see if I could get a spot.  This was a source of some consternation for team leadership, since more people is more expensive (food and badges), and I hadn't actually played any CTF since...  Hack.Lu of 2014, probably, and certainly not DEFCON quals of 2018, which is usually the heuristic for finals attendance (in my defense, I was in a plane over the Pacific for much of quals).  I had also never played in DEFCON finals before.  The interns thought I'd be worthwhile to have on the team anyway, and went to bat for me.  I sweetened the deal by getting my employer to cover my lodging.  Still, I was left with the feeling that I was on thin ice, and consequently felt a strong desire to earn my keep.


The decision that I was going to finals was made in early June, leaving me about two months to get back into CTF shape.  I began playing through pwnable.kr after work, practicing with pwntools, and worked on reverse-engineering problems during Google CTF on June 22-23.  During Google CTF, I noticed a number of deficiencies in my play.  Lack of a decompiler put me at a big speed disadvantage relative to other reverse-engineers; fortunately, one of the challenges was self-modifying and not very amenable to decompilation, so I was able to contribute there, as well as on a cross-architecture reversing challenge.  I also noticed that I felt acutely stressed and under pressure during the event, which caused me to make mistakes that cost me time.  My sleep discipline leading up to Google CTF had been poor (about six hours a night for the preceding three weeks), and I had difficulty maintaining intensity of effort across a 14-hour day of reversing.  Generally I was happy with the state of my knowledge and my ability to assimilate new documentation, but my execution needed improvement.  Unfortunately, most of these issues would recur in some form during DEFCON finals.

One place where my knowledge was deficient compared to other reversers was in IDA Pro's more esoteric hotkeys.  I set up Mnemosyne flashcards for the ones that I didn't know by reflex but saw others use during Google CTF, and they proved useful - I was able to recall them quickly and accurately during finals under stress, despite never having used some of them in practice before.  I could see doing the same for opcodes, syscall numbers, and other structureless trivia.

A coworker who had played in finals before mentioned that I'd probably be useful for binary patching.  It's a fairly uncommon skillset that isn't exercised by most of the jeopardy-style CTFs that the team plays.  There's only one dedicated veteran patcher on the team, and he has to sleep occasionally.  I resolved to do patching during finals, and started playing pwnable.kr as "patchable.kr" - first pwn the challenge, and then generate a patched version that defeats your exploit.  This was helpful, and I soon had a list of attack-types and corresponding patches that I should be able to generate.

I lost about the first two weeks of July to videogames (nights and weekends / practice time).

When I returned to patching prep, I worked on tooling to make some common types of patches easy to apply.  I also took several approaches to training for speed under stress; the simplest was timing my pwnable/patchable.kr plays.  Per Randall Collins' work on meditation allowing people to overcome their emotional barriers to competent violence, I extrapolated that meditation might be useful for overcoming stress and aversions during CTF.  Unfortunately, I was undisciplined in my approach to meditation, and it is hard to tell if it bore any fruit.  If I were to do it again, I would screen-record myself playing pwnable/patchable, and use that to better reflect on and improve my execution.  The knowledge that you are being recorded also adds a layer of stress which I think would be useful.

Around 19 July, I began work on some patching tools to create huge amounts of space inside ELF binaries, which could then be used for very extensive patches, potentially up to relocating whole functions or pulling in new ones from compiled C (in recent years, the DEFCON CTF organizers have denied contestants the ability to use LD_PRELOAD defenses, but this approach would let us put equivalent defenses inside the binary itself).  This project of creating space occupied most of my prep effort for the remaining three weeks leading up to DEFCON.  I also assisted one of the interns in her preparations, building a repository of Docker images for testing patches and exploits.

My sleep in July was mostly good (7+ hours a night), but interrupted in several cases by taking visiting coworkers out on the town.  It began deteriorating toward six hours a night in August, as I started staying up late to put more time into preparing tooling.  This was a mistake.

I arrived in Las Vegas for finals on the evening of the 8th of August.  The hard drive of my laptop began acting up on the flight, but recovered on arrival - I suspect it was just intolerant of the vibrations of the engine.  Still, perhaps it is time to replace my spinning rust with an SSD.  The time-shift did my sleep cycle no favors.  I spent the 9th socializing and adding a few last-minute features to the tool for making space for patches.  Our main patcher thought it was a splendid tool - something to the effect of "I've always wanted this but didn't want to write it" - and began building tooling on top of it.


I woke up very early on the Friday of finals; my biological clock was still set to EST.  This year finals were run by the Order of the Overflow, an organization including parts of Shellphish, some academics at Arizona State University, and a smattering of others.  We weren't really sure what to expect; new organizers always change things up.  Finals began slightly delayed (not surprising), and opened with a new type of challenge, King of the Hill, where teams competed to maximize their score on a game of assembling and disassembling (instead of a more traditional attack-defense challenge, where teams try to exploit services that other teams are running).  I ran into difficulty with my networking setup - I was unable to connect to our team's VPN because my version of OpenSSL was too new - so I sat out the reversing King of the Hill while I compiled libraries, and eventually ended up building a virtual machine for VPN access.

In addition to introducing King of the Hill challenges, the rules also mentioned that patches would not be made available to other teams (unlike the previous two DEFCON CTFs), but that there would be limits on the number of bytes a patch could change in a service.  This rendered all of our patch-tooling preparations useless immediately.  While this was obviously disheartening at the time, on further reflection I think it was a very reasonable decision - since the Cyber Grand Challenge, some teams have had very strong tooling for patches, and compared to them our stuff was probably pretty weak.  So this change ultimately worked in our favor, I think.  It also meant that I got to spend my weekend doing old-school manual patches instead of writing and speed-debugging software (like my day job), and I had a lot of fun.  The manual patching practice I had done on pwnable.kr definitely paid off.

Other changes to the rules included a big delay on the release of network traffic to teams, which meant that pulling exploits off the wire was no longer a viable strategy.  This would have made testing our patches more difficult too, but the organizers provided automatic testing of patches on upload, and rejected patches which failed functionality instead of deploying them and then docking your score.  This allowed us to patch pretty aggressively and then roll back to more conservative patches if necessary, without much of a penalty, which was great.

The first attack-defense service, twoplustwo, came out around lunchtime on Friday.  Our veteran patcher was playing the King of the Hill, so one of the interns and I fielded patches as bugs were discovered.  We didn't first-blood the challenge, but we discovered the bug shortly after and had working patches before we had working exploits, so I felt good about that.

The second attack-defense challenge, pointless, opened in the afternoon.  It was a MIPS binary that did a bunch of crypto.  Our most senior exploiters went to town on it and had an exploit about two hours before the competition closed for the night, but were unable to throw it for about half an hour due to organizer error.  This caused much frustration.  Once the competition closed for the night around 8 PM, they came up and briefed us on the bugs.  Our main patcher handled most of the patching for it, as I was pretty bushed.  Another King of the Hill challenge, on polyglot shellcoding, was also released just before the end of the day.  I slept from about 0200 PST to 0730 PST; not bad for a CTF.

Saturday morning, we showed up bright and early with a bunch of new patches and exploits to deploy, only to find the game's start pushed back half an hour at a time for two hours or so, with a corresponding one-hour shift of the end of the game.  So we sat there on high alert for a couple of hours, because we knew we would probably have to revert and redeploy patches ASAP once things launched.  On the upside, this gave me some time to debug and fix a battery problem with my laptop (another TODO).  When things did start, our patches for pointless failed the functionality checks - the intended functionality depended on a directory traversal, so we had to leave it in.  We spent a while going back and forth on how to patch pointless properly; I think we got a patch we were happy with in the early evening.

The twoplustwo service was retired (maybe a little early IMO, as there were still teams unpatched), and another service named poool was released.  This was a Monero pool mining server (which I was really hoping would require Andersen-style optimization), and required pretty extensive reverse-engineering.  Here again I was at a disadvantage due to lack of familiarity and practice with decompilation.  Unlike previous services, the problem description for poool did not specify a limit on the number of bytes we could patch, but did mention that we could only submit five patched versions.  This made testing our patches our responsibility again, and a coworker of mine who was playing took it on himself to write a testing framework for poool.  That evening, DEFKOR started exploiting poool, while we were nowhere close to having an exploit or patch ready.  We figured that the bug was related to multiple-counting of submitted shares, and our main patcher was asleep, so I hammered out a patch for that using make_space, not worrying about the number of bytes I changed.  It took me a smidge over an hour, during which I was sweating and shaking with physiological stress and made some stupid assembler mistakes.  At the end it passed our tests beautifully, so we submitted it about an hour and a half after exploits had started hitting us.  It promptly failed the SLA check, because there was actually a limit in bytes: the organizers had changed the description of the problem without alerting anyone.  This also cost us one of our five shots at patching the binary.  We tried to get it working without make_space, but it was segfaulting and there was very little time left in the game for the day.  So that whole evening was a bit rough, but also somewhat satisfying, because I didn't choke under the stress - I did a totally reasonable thing given the information I had available to me, it worked, and it turned out the information I had was wrong.

That night, we discovered that there was a much easier bug in poool, which was probably the one DEFKOR had been exploiting.  It was a one-byte patch.  Our main patcher also took over and shrank the multiple-counting patch down further than I had managed, and we added the new variants of these bugs to the test suite.  A new service, vchat, had been released just before closing time, and most of our exploiters were working on that.  I think there was also a web challenge that came out that evening?  I didn't look at it.  I went to bed around 0200 PST and got up on Sunday around 0700 PST.

Sunday morning, the limit on the number of times we could patch poool had been removed, which made all the work on the testing framework sort of a waste.  We submitted our well-tested patch for poool and it passed SLA from the get-go.  Had some problems with the web patch.  Our main patcher was wiped out from being up all night, and he delegated the one bug that had been found overnight in vchat to me.  It was actually a very nice patch and I was proud of it, even though the bug turned out not to be exploitable (nobody ended up exploiting vchat at all).  Another service, reeducation, a Rust attack-defense binary, was released that morning.  I wasn't involved in patching it; I think one of the interns handled it.

Sunday around lunchtime, a new King of the Hill was released, this time focused on patching: minimizing the number of bytes you changed to modify the functionality of a binary that hashed itself, while still preserving the hash.  Honestly, my mind was kind of blown by the whole thing; I helped out with debugging our patch scripts for it but didn't make any tremendous structural contributions.


Overall, the types of bugs being exploited were different from the ones pwnable.kr had prepared me to patch: several signedness bugs, heap leaks, bytecode execution, and one-byte writes off the end of heap objects, rather than format strings and buffer overflows (except the MIPS challenge, which had a straightforward stack buffer overflow once you got through all the crypto).  Not really surprising, just not quite what I practiced for.

Though my tooling preparations were for naught and I certainly could've done better, I felt that I was useful, and I had fun.  I think choosing to operate as a dedicated patcher was a mistake in terms of team labor-allocation strategy; while one of our captains said it was "our best year ever for patching, though not by much", ultimately we needed more exploits, which would have required more eyes on reversing.  If I want to be more useful next year, bug discovery is probably a good thing for me to work on (though there were definitely subtleties of patching that could stand improvement too).  I think the reverse was also true - given that the stakes for an incorrect patch were much lower than in previous years (thanks to automatic SLA checks), there was little reason our reversers shouldn't also have been doing patching, especially given their greater familiarity with the services.  In a number of cases, they handed off one-byte patches to us ("just change this signed compare to an unsigned compare"), which felt a bit silly.  I have little doubt that they have the necessary background; there's just a sort of mental block around patching, I think (I know I once had such a block).  Likewise, our network-defense folks didn't really shift to something more useful when it became apparent that pcaps were not forthcoming.  Our labor allocation was too rigid; we didn't adapt enough to the changed environment, and our command hierarchy / senior players didn't push us to.

19 July 2018

Reading / Status, 19 July 18

Been writing a ptracer, playing pwnable.kr, and played Google CTF a few weekends back.

pwntools documentation
man pages for process_vm_writev, process_vm_readv, memfd_create, and ptrace

ptrace gotchas:

__WALL in wait - if you use PTRACE_O_TRACECLONE and you want to trace all the threads of your traced process and its children, you need to pass __WALL in the flags to wait/waitpid.  Otherwise your tracer won't be notified properly of events in non-main threads of your tracees.  This may no longer be necessary on newer kernels, but it sure is on older ones.

SIGSTOP on launch - immediately after your tracee calls ptrace with PTRACE_TRACEME, it should raise SIGSTOP.  Otherwise there is a race where your tracee can exec and do whatever it pleases for a while before your tracer gets around to inspecting it.

Don't trust PTRACE_GETEVENTMSG's exit status for the tracee during a PTRACE_EVENT_EXIT on 4.9 kernels - if your tracee is exiting due to a signal, geteventmsg reports the exit status as positive rather than negative (for example, during an exit due to a segmentation fault, the process will actually exit with status -11, but geteventmsg will say that it is exiting with status 11...  which makes it very difficult to tell if the tracee is actually dying with a signal, or just exiting with a weird return code).  It also returns 1 instead of -1 for SIGHUP on a 3.13 kernel.  Don't trust it.  Disregard, I had a bug with the way I was processing the output of PTRACE_GETEVENTMSG; it was generating exit codes, not exit statuses, so some bits were set differently than I expected based on whether dumping core files was permitted on the machine.

More readings:

Network Science, chapter 8 - this is really what I started this book for in the first place.  After setting it aside for six months, I determined to cut to the meat and then work my way backwards.  It was actually a very interesting chapter - some of his conditions for cascade failures sound a lot like the mechanism of action for neural networks (flow over a network, and each node has a local breakdown rule that determines whether it rebroadcasts the cascade).  The inherent tradeoffs between resistance to random failures and resistance to targeted attacks were also unexpected and worthwhile.

T.E. Lawrence's Evolution of a Revolt.  Also interesting in terms of successful adversarial thinking; enumerate enemy weaknesses in terms other than the obvious targets for attrition.

Brooks' No Silver Bullet.  Much of what he says on the essential difficulty of software still rings true; much of his pessimism on the popular emerging technologies of his time has been vindicated.  I'm a little skeptical today of "buy instead of build", because my experience so far has been that external dependencies are also full of bugs, and in the worst case a high-priority bug in an external dependency takes longer to fix than one in an internal dependency (though conversely, they do enable force multiplication in the average case, by increasing the effective total labor available - it's just less immediately allocable).  I think incremental development and "grow software instead of building it" is mostly correct, though a point the industry has somewhat missed.  My perception of the industry today is that the two primary metaphors for software are building software (more on the enterprise side) and cooking software (more on the devops side); the biological metaphor has been lost, and certainly hasn't won.  Definitely something to keep in mind in my own practice.  On great designers, I think he's mostly right that great software is the product of one or a few minds rather than of committees (conceptual integrity).  I do find it curious that he commits the common error of misinterpreting Peopleware's results on productivity, claiming that the gap between average and peak is 10x, whereas Peopleware argued that the gap between the very worst and the very best was 10x.  Perhaps he had other sources.  Finally, I am saddened that I have zero career-development plan, with nothing resembling apprenticeships or opportunity for advanced study outside of my daily duties.

Ran into some bugs related to SIGHUP recently, ended up reading The TTY Demystified and found it quite useful.

08 May 2018

Setting up LUKS on Xubuntu 18.04

Spent some time this weekend setting up LUKS on my work laptop in preparation for some international travel.  This document is as much notes for me so I can reproduce the process as it is for my three dear readers who are probably bots.  Followed these directions, and for the most part they worked alright.  I did run into one issue: when running the refresh grub script from step 6 of this page, a number of x86_64-efi files did not exist.  I was able to work around this issue by running
apt-get install grub-efi-amd64-bin
after which the script ran successfully.  I don't know why grub wasn't properly installed earlier in the setup procedure, but there you go.

I also made two small improvements to the process.  During paranoid setup, I used the AES noise fill from here:

openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > /dev/sdxy 
In the check and finalize procedure, there's a note that you now have to manually run the update grub script every time you update the kernel.  But I know I'm going to forget to do that, so I googled a bit and found this thread, which suggested adding the script to /etc/kernel/postinst.d/ .  So I did that, and we'll see if it comes back to bite me in the ass and render this machine unbootable in a year or so.

17 March 2018

Peopleware, Slack, and Design Patterns

I read Peopleware last week, and was surprised to find that it referenced Christopher Alexander's work on design patterns in architecture, as these relate to laying out working spaces for developers.  I was struck by the notion of an intimacy gradient, where a household has public areas (a dining room or sitting room), then shared private areas (a kitchen, perhaps), and then individually-private areas (bedrooms, studies).  Peopleware proposed that we lay out spaces for programming teams in a similar way; meeting areas, then common areas, then private offices.

This all seems very reasonable, but I find myself lately working on geographically distributed teams, reshuffled about quarterly, with most of our communication happening through Slack.  There is a persistent pattern where a new team is formed (administratively; hardly to the level of unit cohesion that Peopleware would ask of something called a team), and someone creates a Slack channel for that team and invites all the formal members.  Gradually, more and more people from outside the team end up in the channel; sometimes they're experts in a particular thing who are called in for a particular question, sometimes they're managers or executives who invite themselves, sometimes they're just curious engineers trying to maintain situational awareness (I have been guilty of this).  As the number of people increases, conversations tend to get sidetracked (or be off-topic from the start), bystander effects increase, the SNR drops, and the utility to the team deteriorates.

To use Peopleware and Alexander's language: a shared private space for a team becomes a public space over time.  While we have the technical means to kick people out of channels (or should I say rooms?), it's never been done; it's outside the norms of our microculture.  In short: we're wusses.

My solution, on realizing the nature of this problem, was to create a direct-message set with the other members of my team (so now we effectively have a public channel where management can ask us questions, and a private channel where we can figure out our collective answer before replying).  So far this has been very productive, and it is immune to gradual dilution.  It does produce a potential gap in organizational memory, though - if a new person is added to the team, they can't search that DM channel's history.

This realized parallel between slack and the architecture of working spaces raises some interesting questions about the social parts of the web generally.  I am sorely tempted to go hunt down a copy of Alexander's book, A Pattern Language, and see if there are more elements applicable to the design of digital spaces where people live and work, again by the "room" metaphor.

While investigating Alexander's work, though, I learned that it was the inspiration for the whole software design patterns set of memes.  I had heard of them, but had mostly seen them in code that I considered quite ugly, and so found them distasteful.  Reading more about them now, I have not found any evidence strong enough to shake that distaste.  I think at the very least, the method used to arrive at them is unsound.  When Alexander derived architectural design patterns, he did so from a thousand years of functional architecture, developed at global levels of parallelism and leaving behind concrete artifacts amenable to public study.  When the Gang of Four derived software design patterns, they did so from a bare half-century of object-oriented programming (which, as for example Norvig notes, is hardly the only reasonable idiom for programming).  Software engineering as a discipline does not have the corpus of high-quality, environment-adapted folk-work that architecture does.  Most of what we build is crap that evaporates in a decade or less, and the projects that survive do so not because of their great and highly-functional architecture but on the whims of the market.  It is still very early days to be inductively drawing software design patterns with anything like the authority that Alexander can achieve with architecture.

11 March 2018

Reading / Status, 11 March 18

The Google SRE book, up through chapter 26.  Chapters 25, on pipelines, and 26, on data integrity, were actually relevant to my work, so that was neat.  I was somewhat disappointed with chapter 22, on cascading failures - they're a topic of great interest to me, but the treatment here was very practical and specific to distributed computing environments, rather than treating them as a general phenomenon.  I was impressed by the degree to which Paxos is central to Google's production systems (having previously considered Paxos an academic / military-industrial curiosity with little commercial application).

Peopleware, 2nd Edition (because I'm cheap).  This is an excellent book, and agrees mightily with my experience as a software engineer and briefly as a team lead.  I'm considering springing for the 3rd edition.

Related to Peopleware's argument that most differences in programmer productivity are a product of the work environment, particularly distractions, Dan Luu on programmer moneyball and Abe Winter contra slack.

Paul Graham's Beating the Averages, and subsequently parts of ANSI Common LISP, particularly the macros chapter.  I was kind of unimpressed, which might mean that I didn't really understand it.

Parts of Thinking in C++, Second Edition, which stackoverflow recommended in answer to a query about "C++ for C programmers".

Parts of Cracking the Coding Interview.  Not an especially insightful book; I could see this having been very useful to me when I was an undergrad interviewing for internships, and it was a decent refresher on basic topics, but I guess the main thing I learned is that the bar for algorithmic knowledge might be lower than I thought and I shouldn't try to cram (say) optimal max-flow/min-cut and convex hull algorithms before interviews.

In sum: things which name and explain my dissatisfactions with my current employer, and resources for acquiring a new employer (maybe I should be trying to change things here instead of just bailing?).  Not so much stuff that I'm really interested in reading for its own sake.

15 February 2018

Reading / Status, 15 Feb 18

Been a while, still alive.  Got into a relationship right around the time of last post, ended in January, so now I'm back (to the extent that I was ever here).  Interesting things I've read (or done) recently:

Gwern on spaced repetition; started using Mnemosyne for data related to a couple of new hobbies / capabilities I want to bring online this year.  Related, and linked from Gwern, SuperMemo's guidelines for writing flashcards.

KB6NU's Tech and General ham radio exam study guides, followed by passing both of those licensing exams (spaced repetition helped).

Started supplementing vitamin D and fish oil (since I have very little sunshine or fish in my life).  Mood seems slightly improved generally, though I have had difficulty focusing (really need to start meditating regularly again).  A multi-month skin condition cleared up in 48 hours with the vitamin D supplementation, which was neat.  Also began playing with caffeine pills twice a week, Rhodiola rosea once a week, and irregular Panax ginseng.  Stimulants really don't do me any good without focus, though.

Relatedly, SlateStarCodex on placebo - I'm willing to chalk up improved mood to regression to the mean following relationship termination, but the skin thing seemed pretty real.

Been having persistent fasciculation in my left eyelid, pretty annoying.  Have been getting ~7.5 hours of melatonin-assisted sleep most nights, though frequently interrupted in the wee hours of the morning.  Doesn't seem to be a magnesium deficiency; broccoli and blackstrap haven't done anything for it.  Might just be stress.

Most of the Google SRE Book.  Error budget seems like a great concept.  I like the line "every line of code is a liability."  Pleasantly surprised to find unattributed quotes from John Gall / General Systemantics.  Most of the book is not really applicable to my current work, though.

Evolving a Decompiler.  Very cool, I should really download and play with their code.

Several posts at Path Sensitive, along with the first half of this paper that he linked to (I intend to finish it).

Gwern's book review on a history of naval operations analysis.  I tend to roll pretty Chesterton, so this gave me some things to chew on.

Spolsky came up at work, so I read A Field Guide to Developers (I really need to get around to reading this copy of Peopleware on my shelf), The Guerrilla Guide to Interviewing, and Paul Graham's Beating the Averages.

Martin Jambon's Universal Career Advice.

GDB's set startup-with-shell off command, for when you're debugging programs that LD_PRELOAD libraries that break bash.  Relatedly, the ld.so man page.  Lots of grungy environment variables to play with.

04 September 2017

Reading / Links, 4 Sept 17

Stuff I've been reading in the last week or two:

Network Science, chapters 3 and 4.  Pretty funny; he throws some shade on Erdős and Strogatz.  The editing / proofreading continues to disappoint, but the material is decent.  The main thing I want out of this book is an understanding of cascade failures (which he claims to have a good model for in the introduction); a graph theory refresher doesn't hurt though.

The Mind Illuminated, Chapter 4, Interlude 4, beginning of Chapter 5.  Interlude 4 was very interesting - consciousness is quantized and cut up into frames, like network packets, and dullness is packets dropping.  I really wish he'd include footnote references for the science behind this stuff, given that he's a neuro guy...  Given chapters 3, 4, and 5, it seems like I'm somewhere in late phase 3 or early phase 4 (modulo the fact that my practice is still irregular).

The Systems Bible.  Has nothing to do with systems programming, except inasmuch as programmers build systems.  Describes ways in which complex systems evolve and dysfunction.  Not at all rigorous, to the point where it doesn't bother to define "system", but some parallels with Rao's Gervais Principle in the organizational context (organizations constructed with backdoors allowing actual work to get done, eventually collapse under their own entropy) and with some of Scott's criticisms of high modernism in Seeing Like a State (the designed system opposes its own intended function and scales in unpredictable ways).  Also seems sort of linked to The Dispossessed, with its point about the emergence of effectively-bureaucratic systems under anarchist conditions.

Introduction to the DWARF Debugging Format.  I'm looking for stupid dwarf tricks, and was excited to find that DWARF contains at least two sorts of bytecode for generating tables of debugging information.

Relatedly, Funky File Formats.

Documentation on ptrace, proc, ELF, bpf, more bpf...  There's all kinds of fun stuff in /proc that I didn't know about.