I found myself saying this for the second time today: “Cybersecurity is applied paranoia.” Hardly the first time I’ve said it, but twice in the same day is still rare.

The first time was during a meeting of cybersecurity experts, hardware and firmware engineers, and project managers. We were talking about the supply chain security of a product we’re working on, and the legal agreements we have in place with one of the manufacturers. We were doing an inventory of all the ICs (integrated circuits) on the system-on-module the manufacturer builds for us, and where the firmware for each of those ICs comes from. When we provide the firmware ourselves, or when we at least have all of its source code, we have some measure of control over what goes on inside the IC. When we don’t, however, things get complicated: if the IC comes pre-programmed, or if its firmware is closed-source and delivered as a binary that our own firmware pushes into the IC, we have very little control over what it actually does, and very little leverage to address things like vulnerabilities found when devices are already in the field (and part of critical infrastructure).
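To make the distinction concrete, here is a minimal sketch, in Python, of the kind of inventory we were building. The part names, designators, and categories are hypothetical; the point is that the firmware’s provenance determines how much we can verify ourselves:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FirmwareProvenance(Enum):
    """Where an IC's firmware comes from, roughly in order of decreasing control."""
    SELF_AUTHORED = auto()     # we write and build the firmware ourselves
    SOURCE_AVAILABLE = auto()  # vendor code, but we have the full source
    BINARY_BLOB = auto()       # closed-source image our firmware pushes into the IC
    PRE_PROGRAMMED = auto()    # firmware ships inside the IC; we never see it

@dataclass
class ICRecord:
    designator: str            # board reference, e.g. "U12" (hypothetical)
    part: str
    provenance: FirmwareProvenance

    def verifiable_by_us(self) -> bool:
        """We can only audit firmware we author or have the source for."""
        return self.provenance in (
            FirmwareProvenance.SELF_AUTHORED,
            FirmwareProvenance.SOURCE_AVAILABLE,
        )

inventory = [
    ICRecord("U1", "main SoC", FirmwareProvenance.SELF_AUTHORED),
    ICRecord("U7", "PMIC", FirmwareProvenance.PRE_PROGRAMMED),
    ICRecord("U12", "radio module", FirmwareProvenance.BINARY_BLOB),
]

for ic in inventory:
    if not ic.verifiable_by_us():
        print(f"{ic.designator} ({ic.part}): trust in the vendor required")
```

Everything that falls in the last two categories is exactly where the questions below come in.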

This means we need to ask two questions. First, how much do we trust the manufacturer to provide us with firmware and hardware that does not contain “intentional vulnerabilities” such as back doors, knowing that we won’t be able to verify the code ourselves? Second, to what extent can we rely on the manufacturer’s own quality assurance and vulnerability management processes to notify us of any new vulnerabilities in a timely manner, so we can manage the quality and security of our products?

As we were talking about the first of these two questions, how much we trust the manufacturer, I mentioned that “Cybersecurity is applied paranoia: we need to decide how paranoid we want to be with respect to this supplier”. If I wanted to, I could design an IC firmware that, when updated, would leave some of my code behind so I’d have a permanent and practically undetectable back door. That would require malicious intent on my part, or would require me to be coerced into doing it on behalf of someone else. Food for thought to be sure, but for practical purposes it means paranoia is a dial you can turn one way or another, with more paranoia leading to demanding more transparency on the part of your suppliers, stricter terms and conditions in the legal agreements you set up with them, and more verification on your own part.
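To give an idea of what “more verification on your own part” can look like at the low end of the dial, here is a minimal sketch in Python, assuming the vendor publishes a SHA-256 digest for each firmware release out-of-band (the file name and digest below are made up):

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical: the digest the vendor published alongside this release.
VENDOR_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_firmware_image(image_path: Path, expected_hex: str) -> bool:
    """Hash the binary we actually received and compare it, in constant
    time, against the digest the vendor published out-of-band."""
    actual = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return hmac.compare_digest(actual, expected_hex)

if not verify_firmware_image(Path("radio_fw_v2.1.bin"), VENDOR_SHA256):
    raise RuntimeError("image does not match the vendor's published digest")
```

Note what this does and doesn’t buy you: a matching digest only proves you received the image the vendor published, not that the image is benign. That residual uncertainty is exactly what the paranoia dial is for.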

The second time I said it today was during a meeting of the DNP Cybersecurity and Secure Authentication Task Force, the CSTF. The task force is made up of cybersecurity and SCADA experts and is currently designing two protocols. The first is DNP3 SAv6, which will be codified both as IEC IS 62351-5 and as clause 7 of IEEE Std 1815: the former defines the protocol for IEC 60870-derived protocols, including DNP3, while the latter defines it as the new Session layer of DNP3. The second protocol we’re working on is called AMP, the Authorization Management Protocol. It will be codified as a separate standard and will be proposed to the IEEE when ready.

We were discussing the DNP model for managing communications endpoints, which was radically revised in the upcoming clause 13 of the standard, and how it relates to the way AMP will route its messages from the Authority to a field device (what DNP3 calls an outstation). The issue involved, among other things, the inability of intermediate devices along the path from the originating device to the Authority to authenticate messages from the device: only the device itself, its immediate peers, and the Authority have the device’s certificate and can therefore authenticate the messages the device signs. Any other device can’t, and will therefore refuse to parse any such message. This, again, is applied paranoia: cybersecurity is based on trust. Devices will trust the Authority and will trust whomever the Authority tells them to trust, but trust is not transitive: if I trust you and you trust Bob, I don’t necessarily trust Bob.
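The underlying mechanics are easy to illustrate. Here is a minimal sketch in Python using the cryptography package – not AMP’s actual wire format or algorithms, just the general principle – showing that only a party holding the device’s public key, distributed via its certificate, can authenticate what the device signed:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The outstation signs a message with its private key.
device_key = ec.generate_private_key(ec.SECP256R1())
message = b"message from outstation 42"  # hypothetical payload
signature = device_key.sign(message, ec.ECDSA(hashes.SHA256()))

# The Authority and the device's immediate peers hold the device's
# certificate, hence its public key, so they can authenticate the message:
device_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))

# An intermediate device that was never given the certificate has nothing
# to verify against; trying with any other key (correctly) fails:
other_key = ec.generate_private_key(ec.SECP256R1()).public_key()
try:
    other_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("cannot authenticate this message: no certificate for the signer")
```

This is also why trust isn’t transitive here: a certificate gives you one specific public key to verify against, and nothing more.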

This particular feature of AMP has already been debated thoroughly at the CSTF, but it scuttled an idea that came up to resolve a different issue. Hearing myself mention “applied paranoia” for the second time in a few hours, though, made me look up the applied-paranoia.com domain name and reserve it.

So here it is: a new blog about cybersecurity. My “C++ for the self-taught” blog and podcast went dormant when I stopped spending most of my days writing code in C++, and it isn’t a good forum for this subject. My other blog is a better forum for more general musings, and I haven’t had much time for those – at least not ones I can easily share without an NDA. Cybersecurity is a vast subject, though, and I can talk about it in general and sometimes specific-enough terms without breaking any confidentiality agreements, so time permitting I’ll do just that.