Security Engineering: Proceed with Caution

An expert’s critical review

Alex Gantman
Nov 27, 2023

TL;DR: An engrossing opinionated tour through the field of security, but harmful biases, unsubstantiated claims, and lack of actionable advice risk undermining readers’ future effectiveness.

I have recently finished reading the third edition of Ross Anderson’s Security Engineering cover-to-cover and decided to write down some reflections on this classic of our field.

First, the great parts. The book is an exceptional introduction to and survey of the field of security engineering. There is a good reason why Security Engineering has been so firmly implanted in our canon. It provides a broad-reaching overview of the field and the associated policy, economic, and ethical issues. The grounding in decades of personal experience makes the read that much more relatable and enjoyable. As there appears to be no shortage of praise for the book, I won’t dwell on it further and will move on to areas of disagreement.

As with any work, there are, of course, nits to pick. But that’s not my intent here. There are aspects of the book that I consider to be major, even harmful, faults. My criticism of the book falls into three major themes: dearth of actionable advice, derision towards developers and users, and strong claims, weakly supported. None of these things would necessarily be a problem in a memoir or a philosophical treatise. They might even add an enjoyable edge to such a piece. But for a technical reference, each one on its own is a serious concern. In combination, they may be harmful. Let me address these topics one by one.

Dearth of actionable advice

I am a big fan of learning from failure and wholeheartedly agree with the book’s premise that just as “civil engineers learn far more from the one bridge that falls down than from the hundred that stay up; exactly the same holds in security engineering.” Unfortunately, the lessons in Security Engineering seem to be confined to pointing a mocking finger at the mistake and advising the reader to avoid such stupidity in their work. Time and again the reader is advised to tackle problems by thinking hard, thinking carefully, thinking through, thinking from first principles [which are left undefined], paying serious attention, taking care, keeping a level head, etc. Such platitudes are not actionable or constructive. Thought and self-reflection are good, but they are not substitutes for technical skills. When we teach people calculus, we don’t just tell them to give serious thought to the area under the curve; we teach them specific mechanisms for computing it.

It’s not enough to point at the ruins of the Tacoma Narrows bridge and say “don’t do that.” From failures we must extract for the aspiring engineer specific techniques they can use to avoid repeating the same mistakes — what forces must be considered, how they can be calculated, what compensating controls to use, etc. Of course, not everything can be reduced to algorithmic processes. Critical thought always plays a role in engineering activities. But that does not mean that nothing can be taught. After three editions there is not a single exercise in the book to help solidify understanding and practice a taught skill. It is a book about engineering, but it is not an engineering textbook.

The reader is told to read a specialist book, consult an expert, use proper tools, and get someone knowledgeable to evaluate what they’ve done, without being given the relevant references. How does one identify the right specialist literature? Will any book on the subject do? Is the one with the most or the best reviews on Amazon the right one to consult? How does one identify the proper tools? Is it the one at the top of Google search results or the one with the most GitHub stars? How does one identify an expert without being one, given that security is a market for silver bullets? Above all, wasn’t the promise of Security Engineering to teach the reader these skills so that they become the expert?

Perhaps the fact that “think hard” remains the most we can offer in so many cases is a sign of the immaturity of our field. When we don’t know enough about a practice, we can’t extract the underlying principles into repeatable, teachable, proven methods and techniques. So we fall back on apprenticeship and advice for the student to observe the master and draw their own lessons. Is that really the state of our engineering art? I don’t believe it is. I believe there are security engineering techniques that we can teach. Then again, maybe I just don’t want to accept that as a field we have failed so utterly as to not have developed any proven security engineering techniques over the last half century.

So while the book is an amazingly engrossing and enriching guided tour through the field of security, it is not a technical engineering textbook, in the sense that it does not impart technical skills. Reading the book helps one become a security engineer to the same extent a visit to Fallingwater helps one become an architect. It inspires reflection and creativity, but it cannot serve as the core technical reference for the aspiring professional. Maybe that was not the goal, but the title and the cover text certainly frame it as such.

Derision towards others

Reading the book, I felt strained by the cognitive dissonance between the avowed recognition of the importance of psychology, economics, and usability on the one hand and the unabashed derision towards anyone whose decisions or actions appear at odds with the interest of a security engineer on the other. Somewhat ironically, given the stated appreciation for psychology and the Fundamental Attribution Error, the book rarely misses an opportunity to portray others (developers, users, companies, governments) as thoughtless, clueless, toxic, dreadful, lazy, careless, incompetent, unskilled, clumsy, so dumb, foolish, cynical, negligent, not interested, and beyond stupid. They can’t get their act together, don’t bother, don’t understand, wish to get away with something, don’t care or don’t take enough care, screw up, bully, arm-twist, wiggle and evade, stop their ears, blunder, don’t stop to think, ignore the basics, should have known better, make fools of themselves, misunderstand or ignore insights, lack concern, and lack system thinking. They produce shoddy, badly designed systems, make egregious errors, and are likened to pigs with their snouts in the trough.

This is in sharp contrast to the heroes of the story, the prudent, responsible, thoughtful security engineers who advise and recommend elegant, sophisticated solutions. Even their faults are thinly veiled compliments — they are not as good at deception and less likely to keep quiet about uncomfortable truths. They are miraculously not subject to perverse incentives. When they do make a mistake, it is “simplistic to think that great minds got it wrong”; the emergent behavior of the adversary was simply impossible to predict. Such black and white portrayals are textbook examples of the Fundamental Attribution Error — the mistakes of others are attributed to incompetence or personality flaws, while our own mistakes are unavoidable learning opportunities.

Security arrogance plagued our community in the 1990s and early 2000s, but we have largely grown out of it and developed the maturity to constructively partner with development and business teams. The outcomes are indisputable[1] — the security of today’s systems built by organizations with competent security teams is head and shoulders above anything the industry produced in the preceding decades. The level of effort required to discover a vulnerability and develop an exploit against an up-to-date system is orders of magnitude higher than it was in prior decades, even as our exploitation capabilities have grown. Progress is also reflected in the expansion of the threat model for commercial devices. Today, industry generally accepts state-backed attackers as firmly within scope to defend against. This was not the case 20 years ago, in part because protection against such adversaries was unthinkable in consumer products.

Beyond the purely human reasons for not calling people names and treating others with respect, there is also a logical mistake in attributing design flaws to lack of intelligence or foresight. First, there’s certainly no shame in not applying skills that we have no idea how to teach (see section above). Second, the emergent nature of the attacker/defender dynamic makes many predictions impossible, even though the outcomes often appear misleadingly inevitable in retrospect. We don’t blame vaccine creators for failing to foresee how a virus might evolve to bypass their defenses. And one does not have to look very hard in classic security papers to find very bad predictions from very smart people.

This attitude towards “others” is my biggest concern with the book — the one that warrants calling it dangerous. It instills in aspiring security engineers toxic views of their future colleagues. Such attitudes among security engineers can be harmful to their teams and limiting to their effectiveness.

Strong claims, weakly supported

My final criticism of the book is the questionable foundation for many of its claims. Throughout the book, numerous claims are made without any supporting reference at all. In some cases, having worked in the industry for a while, I know where the claim is coming from and can find the reference myself, but I am not sure the same will hold for someone 20 years younger, new to the field, reading it five years from now.

But the real issue is not lack of citations for claims that are true and easily verifiable. I am much more concerned by the claims that are questionable, specious, or outright wrong. There are many such claims missing supporting references and many others backed by only weak sources (such as opinion pieces, blog comments, and second- or third-hand accounts) that would not withstand critical scrutiny if not for confirmation bias. Others are bordering on conspiracy theories, supported by nothing more than innuendo and hearsay. Even as someone sympathetic to the conclusions supported by these claims, I frequently found myself wincing at the tenuous connections and weak supporting evidence. Perhaps it was the tendency to frame opinion as fact, or one possible interpretation as the only reasonable one. Some of the more sensational claims are:

  • That the NSA has a deal with CERT, allegedly forged at a secret meeting in “FBI offices in Quantico” attended by major tech companies, to get access to early vulnerability reports to give them a window to exploit bugs before they are patched. (pp. 297, 929). No reference is provided. There is also no explanation of why CERT has seemingly backstabbed the NSA by encouraging vendors to become CNAs (CVE Numbering Authorities), thus removing CERT from the pre-patch disclosure process.
  • That more Americans have died in car crashes by avoiding air travel after 9/11 than perished directly from the attacks on that date. (p. 937) The reference cited is questionable at best and has been widely contested.
  • That “safety usability failures are estimated to kill about as many people as road traffic accidents — a few tens of thousands in the USA, for example, and a few thousand in the UK.” (p. 9) No reference is provided.
  • That “online crime now makes up about half of all crime, by volume and by value.” (p. 18) No reference is provided.
  • That “in 2018, the Trump administration changed the doctrine to allow first use of nuclear weapons in response to [a cyber]attack.” (p. 542) No explicit reference is provided. However, from context it can be assumed that the source is the 2018 Nuclear Posture Review document. The change in question seems to be a clarification of the longstanding policy to “only consider the use of nuclear weapons in extreme circumstances to defend the vital interests of the United States or its allies and partners.” The 2018 NPR added that “Extreme circumstances could include significant non-nuclear strategic attacks. Significant non-nuclear strategic attacks include, but are not limited to, attacks on the U.S., allied, or partner civilian population or infrastructure, and attacks on U.S. or allied nuclear forces, their command and control, or warning and attack assessment capabilities.” The interpretation offered in the book is only one of several possible readings. It can equally be seen, as it was by many observers, as simply clarifying a policy that had existed all along.

There are many less audacious claims that appear to rest on similarly thin support, which I omit here. But there are also a number of claims that I believe to be wrong, and I have included a list of the ones that stood out for me below.

Conclusion

Bottom line: would I recommend the book to others? I would. In fact, I just recommended it to my son. But I would do so (and did do so) with a disclaimer that what they are picking up is an opinionated guided tour through a museum, not a technical reference or tutorial. I would help them contextualize the views and biases of the book and would point them to additional sources for the technical details of how to build secure systems.

Contested Claims

I have shared all of the below (and all of the above as well) with Ross, unfortunately without either of us changing our views.

  • p. 233: The description of stack buffer overflow is just wrong. The buffer on the stack does not overflow into executable code. It overflows into the saved return address, which, when overwritten, allows the attacker to hijack control flow when the function returns (a minimal code sketch of this mechanism follows this list).
  • p. 639: “Side channel attacks are everywhere, and 3–4 of them have caused multi-billion dollar losses”. The examples provided (such as Tempest and Spectre/Meltdown) are technically vulnerabilities, not attacks. There is no indication that exploitation of these vulnerabilities resulted in significant losses.
  • p. 682: “The main application [of TrustZone] has been mobile phones, whose vendors wanted mechanisms to protect the baseband against user tampering (for regulatory reasons) and to enable the phone itself to be locked (so that mobile network operators who subsidise phones could tie them to a contract).” As someone who was there at the time, I was surprised to read that TrustZone had anything to do with baseband. As far as I remember, DRM was the main driver.
  • p. 872: “In 2010, Karl Koscher and colleagues got the attention of academics by showing how to hack a late-model Ford.” It was a Chevy Impala, and according to the authors and GM employees, they got the attention of the industry too.
  • p. 933: “by 2019 it [EternalBlue] was being used in ransomware that shut down email and other services in the city of Baltimore.” These were early unsubstantiated rumors that have been subsequently debunked.
  • p. 956: “after a Washington newspaper published Judge Robert Bork’s video rental history, scuppering his nomination to the US Supreme Court” Bork’s video history was unremarkable by all accounts and unlikely to have played any part in his failed confirmation.
  • pp. 1033–1034: “certified websites were more likely to attempt to load malware on to your computer, rather than less.” That’s not really what the cited reference demonstrated. Technically it only showed that certified websites were more likely to generate alerts from their automated scanning tool. There was no subsequent validation of the tool’s claims.
  • p. 1052: “We soon found out that it exploited Xiaomi CCTV cameras that had default passwords and whose software could not be patched.” I do not recall, nor could I find references to any large scale (or any really) Mirai exploitation of Xiaomi devices, or references to devices being non-upgradeable. My recollection is that there were devices (from other vendors) that had hard-coded credentials that could not be updated without updating the firmware.
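
To make the contested mechanism concrete, here is a minimal sketch in C. It is my own illustration, not an example from the book, and the function name and buffer size are invented for the demonstration.

    /* Minimal illustration: an unbounded copy into a stack buffer lets an
       attacker overwrite the saved return address in the current stack
       frame; the executable code itself is never "overflowed into". */
    #include <string.h>

    static void parse_input(const char *input) {
        char buf[16];           /* local buffer lives on the stack */
        strcpy(buf, input);     /* no bounds check: input longer than 16 bytes
                                   spills past buf and over the saved return
                                   address stored in this frame */
    }                           /* on return, execution jumps to whatever value
                                   now occupies the return-address slot */

    int main(void) {
        parse_input("short and harmless");  /* an attacker would instead supply
                                               a long, carefully crafted string */
        return 0;
    }

Mitigations such as stack canaries, non-executable stacks, and address space layout randomization raise the cost of exploiting exactly this pattern, which is part of the progress in defensive engineering noted earlier.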

[1] These results seem indisputable to me, but I recognize that this view is not shared by the author of the book.

cross-posted on LinkedIn
