Principles

There exist many principles in information security. By principle, I mean a general guideline or best practice one should be aware of and apply when designing/implementing/configuring/etc. secure systems. (You might sometimes find sources that refer to the components of the CIA Triad - see Crypt(?) - as principles; that is not what this post is about.) I also use the term principle following the first in the list: Kerckhoffs's principle, which is perhaps the best-known principle in cryptology.

The list is not exhaustive, but I think it is a good selection. Also, keep in mind that how you apply a principle depends on the situation (goal, costs, trade-offs with other aspects such as efficiency and usability, etc.).

Kerckhoffs's principle is the second in a list of six principles stated by A. Kerckhoffs in his paper entitled La Cryptographie Militaire, published in 1883 [1]. The article is available online. The principle states that the security of a cryptosystem should not depend on the secrecy of its design, as the design might (easily) fall into the hands of the adversary. The key should, of course, be kept secret and only known to the legitimate parties. Put in other words, the security of a system should rely only on the secrecy of the key and not on the design of the system itself. There are famous cases that ignored Kerckhoffs's principle, and the impact on security was severe. One of these is the A5/1 cipher used for encryption in GSM, the 2nd generation of mobile communication. A5/1 was reverse-engineered [2] and afterward broken (e.g., [3,4]). Security arguments aside, keeping a system design hidden is not always feasible. It might work for specific applications (e.g., military) but cannot work for standardized communication (e.g., over the Internet). The principle applies not solely to encryption/decryption systems but to cryptographic primitives in general (e.g., message authentication codes, signature schemes, commitment schemes).
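
To make the idea concrete, here is a minimal sketch (in Python, using only the standard library) of a design that respects Kerckhoffs's principle: the algorithm - HMAC-SHA-256, a public, standardized message authentication code - is fully known to everyone, and security rests solely on the secret key.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA-256) is public and standardized; per
# Kerckhoffs's principle, security rests solely on the secret key.
key = secrets.token_bytes(32)          # the only secret in the system
msg = b"transfer 100 EUR to Alice"

tag = hmac.new(key, msg, hashlib.sha256).digest()

# Anyone may know exactly how the tag was computed; without the key,
# forging a valid tag for a new message remains infeasible.
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))  # True
```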

The principle of sufficient keys states that the number of possible keys should be large enough that an adversary cannot simply brute-force them. Brute force (also called exhaustive key search) means that the adversary tries all possible keys until he/she discovers the right one. Recognizing the right key is feasible in practice because, once the adversary tries it, he/she obtains some advantage. For example, the adversary attempts to decrypt a ciphertext and knows he/she guessed the key when he/she can read the data. There exists a cipher that does not allow this - the One Time Pad (OTP), which I will explain in a later post - but it is generally infeasible in practice. The sufficient number of keys differs from system to system (e.g., for the same security level, symmetric cryptography usually requires shorter keys than asymmetric cryptography) and changes over time, as, e.g., computing power increases or cryptanalysis advances.
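
The following sketch brute-forces a hypothetical toy XOR cipher with a 16-bit key (the known-plaintext comparison stands in for the adversary's "advantage"). With only 65,536 keys, the loop finishes instantly - exactly what a sufficient key space must prevent.

```python
import itertools

# Toy "cipher": XOR with a 2-byte key. Only 2**16 = 65,536 keys exist,
# so exhaustive key search is trivial. Real ciphers use keys of 128
# bits or more precisely to make this loop astronomically long.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"\x3a\x7f"
ciphertext = xor_cipher(b"attack at dawn", secret_key)

# The adversary recognizes the right key by a verifiable advantage;
# here, a known-plaintext check keeps the demo simple.
for candidate in itertools.product(range(256), repeat=2):
    if xor_cipher(ciphertext, bytes(candidate)) == b"attack at dawn":
        print("key found:", bytes(candidate).hex())
        break
```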

The principle of key separation states that one should use different keys for different purposes. For example, use one key for confidentiality and another key for integrity protection. The reason is that the damage is lower in case of disclosure: if the adversary finds the decryption key, the integrity might still hold; and if the adversary finds the integrity key, the confidentiality might still hold. A second example that illustrates this principle is the use of session keys, where each session uses new, fresh keys. If the adversary breaks a key in one session, other sessions might remain secure (see, e.g., the concept of forward secrecy). In real life, the LTE key hierarchy [5] nicely illustrates the principle of key separation.
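
A minimal sketch of key separation, assuming a simple HMAC-based derivation with illustrative labels (real systems use standardized KDFs such as HKDF): one master secret yields distinct subkeys, one per purpose.

```python
import hashlib
import hmac
import secrets

# One master secret, but separate subkeys for separate purposes,
# derived under distinct labels (labels are illustrative, not from
# any particular standard).
master = secrets.token_bytes(32)

enc_key = hmac.new(master, b"label:encryption", hashlib.sha256).digest()
mac_key = hmac.new(master, b"label:integrity", hashlib.sha256).digest()

# The ciphertext and its authentication tag never share a key, and
# compromise of one derived key does not reveal the other.
assert enc_key != mac_key
```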

The principle of simplicity states that one must avoid unnecessary complexity, as complexity brings security risks. To enumerate just a few examples, a complex system introduces more possible errors that attackers can exploit (in design, implementation, or usage), makes recovery more difficult in case of attacks, and requires supplementary effort to keep security up to date. NIST competitions launched to adopt cryptographic standards name simplicity among the requirements and criteria for choosing the winner (see, e.g., [6] 4.C.2). Sadly, the principle of simplicity is often broken, sometimes as a consequence of economic gain. A commercial by one of the Nordic grocery chains (founded in Trondheim, a beautiful Norwegian city I worked in for some years) nicely illustrates this - see the video here.

The principle of diversity states that one should use diverse mechanisms when securing a system. The reason is that a vulnerability or a successful attack against one component will not necessarily work against the other components. On the contrary, if the security components are similar, the chances of compromising them all with the same strategy are high. A good example is to use cryptographic primitives that differ in construction. NIST mentioned diversity as a benefit of KECCAK, the winner of the SHA-3 competition: "One benefit that KECCAK offers as the SHA-3 winner is its difference in design and implementation properties from that of SHA-2. It seems very unlikely that a single new cryptanalytic attack or approach could threaten both algorithms" [7]. Another example is to use security solutions from different vendors, as this mitigates the risk of shared errors/bugs/misconfigurations/etc.

The principle of security by default states that one should keep the default configuration as secure as possible. Let's take the example of an access list. The principle asks to start with deny all and then add permissions, not the other way around (begin with permit all and then deny specific access). The same applies to granting rights to users/apps/services/etc. In both cases, the principle prevents misconfiguration. An example where the principle does not hold is the default access to a WiFi router, where all products in a line are pre-configured with the same easy-to-find credentials (e.g., admin/admin). Manufacturers should permanently strive to release products that are secure by default.
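
The access-list example might look like the following sketch (the class and names are hypothetical): the permit list starts empty, so anything not explicitly permitted is denied.

```python
# A minimal access list that is secure by default: everything is
# denied unless explicitly permitted.
class AccessList:
    def __init__(self):
        self._allowed: set[tuple[str, str]] = set()   # starts empty

    def permit(self, user: str, resource: str) -> None:
        self._allowed.add((user, resource))

    def is_allowed(self, user: str, resource: str) -> bool:
        # Default deny: anything not on the permit list is refused.
        return (user, resource) in self._allowed

acl = AccessList()
acl.permit("alice", "/reports")
print(acl.is_allowed("alice", "/reports"))  # True
print(acl.is_allowed("bob", "/reports"))    # False - denied by default
```
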
The principle of minimal trust states that one should keep trust assumptions as low as possible. In security, you should not trust easily but consider anyone and anything a risk. For example, one should not trust others to keep his/her key secret, so never share your asymmetric private key with other parties 😃. Commonly, security policies in organizations ask employees not to leave sensitive documents on the desk (but in a locked drawer); this, too, is an example of minimal trust.

The principle of the weakest link states that the overall security of a system is given by its weakest component. Let's say your house has two entrances. If one is locked and the other is left open, a stranger could easily use the second door to enter your home, regardless of how secure the lock of the first door is. The same occurs in information security. Think of a web application that stores sensitive data. It does not matter how strong the encryption of the data is if, for example, an adversary can easily authenticate as a user and read his/her data directly from the application. All components need to be secured to avoid such problems.

The principle of least privilege states that one should grant exactly the privileges required to perform a task. Not more, not less; granting fewer privileges makes the job impossible to perform, while granting more privileges introduces a security issue. For example, a bank employee who works directly with customers should be able to update customers' personal information in the system (e.g., phone no., identity document) but must not have access to the PINs of their cards. The principle is closely related to the principle of minimal trust: a user/application/process/etc. should not be trusted and should therefore be granted only the minimal rights needed to perform their job. It is one of the principles implemented in the Zero Trust Security Model.
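
A sketch of the bank example above, with hypothetical roles and actions: each role holds exactly the privileges its task requires, and anything else is refused.

```python
# Hypothetical role-to-privilege mapping mirroring the bank example:
# the teller can update contact information but cannot touch PINs.
ROLE_PRIVILEGES = {
    "teller":   {"update_contact_info"},
    "security": {"reset_pin"},
}

def authorize(role: str, action: str) -> bool:
    # Grant only what the role explicitly holds - nothing more.
    return action in ROLE_PRIVILEGES.get(role, set())

print(authorize("teller", "update_contact_info"))  # True
print(authorize("teller", "reset_pin"))            # False
```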

The principle of security by design states that one should incorporate security into a system (either software or hardware) from the very beginning. Adding security at a late stage is difficult, increases the risk of vulnerabilities, and usually results in less elegant solutions. Let's take the case of software development. The principle asks to design and implement software with security in mind at all stages of the development lifecycle. OWASP guidelines for secure product design are available here. Similar to security by design, privacy by design should also be considered from the very beginning.

The principle of modularization states that a system should be designed as a collection of communicating components (modules) that are independently created and maintained. Of course, the modules interact, but the internals of one module are hidden from the others (as long as the interfaces remain the same). This approach has the advantages of high flexibility and rapid fixes. In case of security issues with one module, the affected module is updated/changed/fixed/etc., while the rest remain unchanged. The Transport Layer Security (TLS) protocol (current version TLS 1.3), defined to provide secure communication and used massively over the Internet, e.g., in HTTPS, is designed under the principle of modularization.
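
A small sketch of the idea (names are illustrative): components interact only through a stable interface, so a module can be replaced - e.g., after a security issue - without touching the rest of the system.

```python
import hashlib
from typing import Protocol

# The rest of the system depends only on this interface, never on the
# internals of a particular module.
class HashModule(Protocol):
    def digest(self, data: bytes) -> bytes: ...

class Sha2Module:
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

class Sha3Module:
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha3_256(data).digest()

def fingerprint(module: HashModule, data: bytes) -> str:
    # The caller is unaware of which module sits behind the interface.
    return module.digest(data).hex()

print(fingerprint(Sha2Module(), b"hello"))
# Swapping the module requires no change anywhere else:
print(fingerprint(Sha3Module(), b"hello"))
```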

The principle of defence in depth asks to use independent and diverse methods, arranged as layers, to achieve security. More precisely, a system (data, etc.) is secured redundantly, so that an adversary needs to break all the layers to win. As defined by NIST CSRC, it "involves layering heterogeneous security technologies in the common attack vectors to ensure that attacks missed by one technology are caught by another". Defence in depth is more of a strategy or an approach than a principle, but I think it is useful to present it here. Note that defence in depth is not necessarily in opposition to the principle of simplicity, although it does add complexity to the system.
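
A minimal sketch of layering (the checks are hypothetical placeholders): a request is granted only if every independent layer agrees, so an attack missed by one layer can still be caught by another.

```python
# Each layer is an independent check; all of them must pass.
def firewall_ok(request: dict) -> bool:
    return request.get("source_ip") not in {"203.0.113.66"}   # toy blocklist

def authenticated(request: dict) -> bool:
    return request.get("token") == "valid-session-token"      # placeholder check

def authorized(request: dict) -> bool:
    return request.get("role") == "admin"

LAYERS = [firewall_ok, authenticated, authorized]

def handle(request: dict) -> str:
    # An adversary has to defeat every layer, not just one.
    return "granted" if all(layer(request) for layer in LAYERS) else "denied"

print(handle({"source_ip": "198.51.100.7",
              "token": "valid-session-token",
              "role": "admin"}))                               # granted
```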

Security through obscurity is a strategy to enhance security by hiding different aspects (e.g., internals, data, vulnerabilities) of a system. In effect, security by obscurity enforces secrecy to decrease the risk of attacks or delay them. Security through obscurity is a debatable technique, and, for sure, it should not be the only method of defence - remember Kerckhoffs's principle and keep in mind that many aspects cannot be hidden (e.g., over the Internet) because of standardization. However, security by obscurity might come in handy under some circumstances. A good example is the release of patches, where time is a decisive factor. In this case, code obfuscation (i.e., making the code difficult to understand) can hide the patched vulnerability/bug/etc. for some time (it makes reverse engineering harder) while most systems get fixed; hence, an adversary who finally understands the problem can only attack the limited number of systems that are not yet immune. There are many other nice primitives and concepts somehow related to hiding information (although not necessarily directly connected to security through obscurity), such as oblivious transfer, covert channels, and the very interesting area of kleptography. You can read about these in [8]. I recommend [9] for those who want to learn more.

Finally, always remember ethics. I will not say more here, other than that I always make my students aware of ethics at the beginning of my courses.

[1] Kerckhoffs, A. (1883). La cryptographie militaire. Journal des sciences militaires, vol. IX, pp. 5-38.
[2] Briceno, M., Goldberg, I., & Wagner, D. A Pedagogical Implementation of A5/1.
[3] Biham, E., & Dunkelman, O. (2000). Cryptanalysis of the A5/1 GSM stream cipher. In Progress in Cryptology - INDOCRYPT 2000 (pp. 43-51). Springer Berlin Heidelberg.
[4] Biryukov, A., Shamir, A., & Wagner, D. (2001). Real time cryptanalysis of A5/1 on a PC. In Fast Software Encryption - FSE 2000 (pp. 1-18). Springer Berlin Heidelberg.
[6] NIST. Submission Requirements and Evaluation Criteria for the Post-Quantum Cryptography Standardization Process. Available at: https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/call-for-proposals-final-dec-2016.pdf
[8] Van Tilborg, H. C., & Jajodia, S. (Eds.). (2014). Encyclopedia of Cryptography and Security. Springer Science & Business Media. Available at: https://www.researchgate.net/profile/Krzysztof-Kryszczuk/publication/230674947_Springer_Encyclopedia_of_Cryptography_and_Security
[9] Young, A., & Yung, M. (2004). Malicious Cryptography: Exposing Cryptovirology. John Wiley & Sons. Available at: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=4b782a113bfaaa0b6152f99ad5542ce6da4bd1cb