# Information Security

## Targets of attacks against protocols

• Used algorithms and primitives
  • Encryption algorithms, hash functions
• Used security techniques
  • Random number generation
• Implementation of algorithms
• The protocol itself

## Shannon's definition of a good encryption algorithm (1949)

• 1. The amount of secrecy needed should determine the amount of labor appropriate for the encryption and decryption
  • Even a simple cipher may be strong enough to deter the casual interceptor for a short time
• 2. The set of keys and the enciphering algorithm should be free from complexity
  • The algorithm should not restrict the selection of keys
  • All keys should be equally secure
  • The algorithm should not place restrictions on the plaintext either
• 3. The implementation of the process should be as simple as possible
  • It is easier to make mistakes when the implementation is complex (this was especially important when encryption was mostly done by hand)
  • Complex algorithms easily slow down the operation
• 4. Errors in ciphering should not propagate and corrupt further information in the message
  • This was an important rule at a time when encryption and decryption were done by hand
  • Currently this matters in time-critical communication systems where retransmissions are not feasible, e.g. live interviews
• 5. The size of the enciphered text should be no larger than the text of the original message
  • An encrypted message does not carry more information than the original message, but a longer message gives the cryptanalyst more data to analyse
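Rule 4 (error propagation) can be illustrated with a stream-cipher-style XOR encryption, where a transmission error in the ciphertext damages only the corresponding plaintext position. This is a minimal sketch using a random keystream; the message and bit position are arbitrary choices for illustration:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn!!"
keystream = os.urandom(len(msg))       # acts as a one-time keystream

ct = xor_bytes(msg, keystream)

# Flip a single bit in the ciphertext (simulated transmission error).
corrupted = bytearray(ct)
corrupted[3] ^= 0x01

pt = xor_bytes(bytes(corrupted), keystream)

# Only byte 3 is damaged; the error does not propagate to the rest.
assert pt[:3] == msg[:3] and pt[4:] == msg[4:]
assert pt[3] != msg[3]
```

By contrast, in chained block modes such as CBC a single ciphertext error corrupts a whole block plus one bit of the next, so different constructions satisfy Shannon's rule to different degrees.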

## Knudsen's classification of algorithm breaks

• Lars Knudsen, PhD thesis, 1994
1. Total break
   • The attacker finds the key k so that $D_{k}(C)=M$
2. Global deduction
   • The cryptanalyst finds another algorithm that can be used instead of the original one
3. Instance deduction
   • The analyst can find the plaintext of an instance, but not the key
4. Information deduction
   • The analyst gets some information about the key or the plaintext

## Algorithm security

• Unconditionally secure
  • Cannot be broken no matter how much ciphertext the cryptanalyst can obtain
  • The one-time pad (OTP) is the only unconditionally secure algorithm
  • All the others can be broken by brute force
• Computationally secure
  • Cannot be broken with available resources, now or in the future
    • unless something ground-breaking happens in computing
  • Currently used algorithms are computationally secure
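The one-time pad mentioned above is simple enough to sketch directly: encryption and decryption are the same XOR operation, and the scheme is unconditionally secure only under its strict preconditions (pad truly random, at least as long as the message, never reused):

```python
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # Precondition: pad is truly random, as long as the message, used once.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the very same XOR operation.
otp_decrypt = otp_encrypt

msg = b"meet me at noon"
pad = os.urandom(len(msg))             # the one-time pad

ct = otp_encrypt(msg, pad)
assert otp_decrypt(ct, pad) == msg     # round trip recovers the plaintext
```

Note that `os.urandom` stands in for a true random source here; in practice the difficulty of generating, distributing and never reusing such pads is exactly why the OTP is rarely used.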

## Measuring the hardness of breaking

• Data complexity
  • How much data is required to conduct the attack
• Processing complexity
  • Time needed to break the system
  • Required amount of work
• Memory requirements
  • How much processing memory is required
  • How much other storage space is required
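These three measures can be made concrete with a brute-force attack on a deliberately weak toy cipher (the cipher and key below are made up for illustration, not a real primitive): the data complexity is one known plaintext/ciphertext pair, the processing complexity is up to $2^{16}$ trial encryptions, and the memory requirement is constant:

```python
def toy_encrypt(key: int, block: int) -> int:
    # Deliberately weak 16-bit toy cipher, NOT a real primitive.
    return (block ^ key) & 0xFFFF

# Data complexity: a single known plaintext/ciphertext pair suffices here.
known_plain = 0x1234
known_cipher = toy_encrypt(0xBEEF, known_plain)

# Processing complexity: up to 2**16 trials; memory use stays constant.
attempts = 0
found = None
for candidate in range(2 ** 16):
    attempts += 1
    if toy_encrypt(candidate, known_plain) == known_cipher:
        found = candidate
        break

assert found == 0xBEEF
```

Real ciphers push the processing complexity to around $2^{128}$ or more, which is what makes them computationally secure.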

## Basics of cryptanalysis

• The basic assumption is that the encryption algorithm is known
• 1. Ciphertext-only attack
  • The analyst sees only the ciphertext and tries to find out the key and the plaintext
  • The protocol should be designed so that the cryptanalyst is forced into this type of attack
    • not always possible
  • Some sources use "ciphertext-only attack" for an attack where even the encryption algorithm is not known
• 2. Known-plaintext attack
  • The cryptanalyst knows something about the encrypted plaintext
    • Headers, templates or even whole documents
  • The goal is to find the key that is used for the encryption of several documents
• 3. Chosen-plaintext attack
  • The attacker can select what will be encrypted and uses that for analysis
    • e.g. email
  • No one is checking whether the data to be encrypted makes sense
• 4. Adaptive chosen-plaintext attack
  • The attacker can make small changes to the data to be encrypted based on earlier results
• 5. Chosen-ciphertext attack
  • The cryptanalyst can select the data that will be decrypted and see the result
    • tamper-proof hardware solutions
  • Works mainly on asymmetric cryptography
• 6. Chosen-key attack
  • The cryptanalyst has some knowledge about the relationship between generated keys
• 7. Rubber-hose cryptanalysis
  • Threaten, blackmail or bribe (purchase the key)
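A known-plaintext attack (type 2 above) can be sketched against a toy repeating-key XOR cipher. The scenario is hypothetical: a predictable document header plays the role of the known plaintext, and the key length is assumed to be known or guessed:

```python
# Known-plaintext attack on a toy repeating-key XOR cipher:
# a predictable header reveals the keystream, and hence the key.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"K3y!"                                   # secret 4-byte key
document = b"HEADERv1 top secret payload"
ciphertext = xor_cipher(document, key)

# The analyst knows every document starts with "HEADERv1" (known plaintext)
# and assumes the key length is 4.
known = b"HEADERv1"
recovered = bytes(c ^ p for c, p in zip(ciphertext, known))[:4]

assert recovered == key
assert xor_cipher(ciphertext, recovered) == document
```

The same known header then decrypts every other document encrypted under the key, which is exactly why protocol designers try to force attackers into the weaker ciphertext-only setting.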

## Passive attacks against protocol

• Eavesdropping
  • Hard to notice
  • Preventing is easier than detecting in many cases
  • Hard to decide which information requires protection
• Traffic analysis

## Active attacks against protocol

• The goal is to change the protocol behaviour for the attacker's benefit
  • Adding a new message to the communication
  • Removing/deleting messages
  • Substituting messages
  • Resending an earlier message or repeating the same message several times
  • Changing stored data
• Pretending to be someone else in the protocol
  • An outsider pretends to be A
  • A pretends to be T
• Man-in-the-middle attack
  • M pretends to be A to B, and to be B to A
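The replay attack listed above (resending an earlier message) is commonly countered by attaching a fresh nonce to every message and having the receiver remember which nonces it has already accepted. This is a minimal sketch of that idea; the message format and payload are invented for illustration:

```python
import os

seen_nonces: set[bytes] = set()

def make_message(payload: str) -> tuple[bytes, str]:
    # A fresh random nonce makes every legitimate message unique.
    return os.urandom(16), payload

def receive(message: tuple[bytes, str]) -> bool:
    nonce, _payload = message
    if nonce in seen_nonces:
        return False          # nonce seen before: replay, reject
    seen_nonces.add(nonce)
    return True               # first delivery: accept

msg = make_message("transfer 100 EUR to B")
assert receive(msg) is True   # original message accepted
assert receive(msg) is False  # attacker resends the same message: rejected
```

In a real protocol the nonce would be covered by a MAC or signature so the attacker cannot simply swap in a fresh one; timestamps or sequence numbers are common alternatives that bound the size of the seen-set.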

## Cheaters

• A legitimate party in the protocol
• Does not follow the protocol as the protocol states
• A passive cheater tries to get more information out of the protocol than he is allowed to
  • e.g. attacks against zero-knowledge protocols (see authentication lectures)
• Active attacks
  • Sending illegal messages
  • Co-operation of two parties against a third one

## Implementation errors

• Even if the protocol and the used algorithms are secure, the implementation may open up flaws
• Most security vulnerabilities are in fact due to bad implementation, not flaws in protocols or algorithms
• Transmitted data is not verified
  • e.g. SQL injection attacks
  • Automatic encryption or signing of any data
• Flaws in generated authentication keys
  • The key is not really random
    • Netscape 1.1 generated a 128-bit encryption key whose entropy was equal to that of a 20-bit key, due to bad implementation
  • The key is based on a low-entropy source
    • Password-based keys: while 8 bits are used to describe one letter, the entropy is far from 8 bits (roughly 100 different characters can be produced from a keyboard, while a full 8 bits can represent 256 different values)
• Misuse of a primitive
  • CBC-MAC used as a hash by publishing the key
  • Use of a common IV rather than a random one
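The common-IV mistake in the last bullet can be demonstrated with a toy hash-based keystream (this construction is invented for illustration, not a real stream cipher): when the keystream depends only on the key and a fixed IV, equal plaintexts produce equal ciphertexts, and XORing two ciphertexts cancels the keystream entirely:

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy counter-mode-style keystream derived from key and IV only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, keystream(key, iv, len(msg))))

key, fixed_iv = b"secret-key", b"same-iv-every-time"
c1 = encrypt(key, fixed_iv, b"attack at dawn")
c2 = encrypt(key, fixed_iv, b"attack at dawn")
c3 = encrypt(key, fixed_iv, b"attack at dusk")

assert c1 == c2   # identical plaintexts are visible to any eavesdropper

# XOR of two ciphertexts equals XOR of the plaintexts: keystream cancels.
leak = bytes(a ^ b for a, b in zip(c1, c3))
assert leak == bytes(a ^ b for a, b in zip(b"attack at dawn", b"attack at dusk"))
```

A fresh random IV per message makes each keystream distinct, which is why stream and counter modes require the IV never to repeat under the same key.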

## Assumptions on security and trust

• Protocols are usually designed under assumptions that have to hold for the protocol to be secure
  • e.g. if there is no authentic time available, then protocols based on authentic time are not secure