Thursday, July 5, 2012
Informative Review: Computer Security Concepts, Cryptographic Tools and User Authentication
Define computer security.
Computer security can be defined and summarized through three main concepts in a triad-like model known by the acronym "CIA": the "C" stands for confidentiality, the "I" for integrity and the "A" for availability. The formal definition of computer security from the National Institute of Standards and Technology Handbook (NIST95) is the protection afforded to an automated information system in order to attain the applicable objectives of preserving the integrity, availability and confidentiality of information system resources, which include hardware, software, firmware, information/data and telecommunications. Confidentiality ensures that information is not accessed by unauthorized persons and can be broken down into two related subcategories. The first is data confidentiality, which assures that private and confidential information is not revealed or made available to unauthorized individuals. The second is privacy, which assures that individuals control or influence what information related to them may be collected and stored, and by whom and to whom that information may be disclosed. Integrity ensures that information is not altered by unauthorized persons in a way that is undetectable by authorized users, and it comprises two related concepts. The first is data integrity, which assures that programs and information are modified only in an authorized and specified manner. The second is system integrity, which assures that a system performs its intended function in an unimpaired manner, free from intentional or careless manipulation. Finally, availability, the "A" in the triad, is the assurance that systems work promptly and that service is not denied to authorized users.
Now although the triad and the formal definition consist of confidentiality, integrity and availability, many in the security field believe two additional concepts should be added. The first is authenticity, which is confidence in the validity of a transmission: users are verified to be who they say they are, and each input arriving at the system is verified to have come from a trusted source. The second is accountability, which requires that a security breach can be traced back to the responsible party; systems must keep records of their activity to support forensic analysis, which in turn helps resolve transaction disputes and investigate security breaches.
1. What is the OSI security architecture?
The OSI security architecture is a recommendation of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), a United Nations sponsored entity, known as X.800. It defines a systematic approach that gives managers an overview and organizational structure for the task of providing security, focusing primarily on security attacks, mechanisms and services. The OSI security architecture was developed as an international standard in connection with the OSI (Open Systems Interconnection) protocol architecture, a conceptual model that standardizes inter-computer and inter-network communications across varying equipment and applications via seven layers. These seven layers are the application layer, presentation layer, session layer, transport layer, network layer, data link layer and finally the physical layer, with communications passed from one layer to the next. One thing to note is that X.800 focuses on network security as opposed to security for a single system. The architecture is built around a security attack, which can be defined as any action that compromises the security of information owned by an organization; a security mechanism, designed to detect, prevent or recover from a security attack; and a security service, which enhances the security of an organization's data processing systems and information transfers by implementing security policies through one or more mechanisms provided by a protocol layer of communicating open systems, and which is intended to counter security attacks.
2. What is the difference between active and passive security threats?
X.800 and RFC 2828 break security attacks down further, describing network security attacks as either passive or active. A passive attack attempts to obtain or make use of information from a system without affecting system resources, for example by eavesdropping on or monitoring transmissions such as electronic mail, file transfers, and client/server exchanges. An active attack does in fact compromise a system by altering its resources or affecting its operation, via some modification of the data stream or the creation of a false data stream.
3. List and briefly define categories of passive and active network security attacks.
There are a number of categories that make up an active attack, which as described above involves some modification of the data stream or the creation of a false data stream. These categories include a masquerade, which takes place when one entity pretends to be a different entity; replay, which involves the passive capture of a data unit and its subsequent unauthorized retransmission; modification of messages, meaning some part of a legitimate message is altered; and denial of service, which prevents or inhibits a user or organization from using a resource they would expect to have via normal use of communication facilities. As with active attacks, passive attacks, which as described above obtain or make use of information from a system without affecting system resources, also comprise a few categories. These categories include the release of message contents, such as an email that may contain confidential information, an attack that can be countered through encryption to prevent attackers from learning the information; and traffic analysis, where even if the contents of a message are concealed the attacker can determine the location and identity of communicating hosts, observe the frequency and length of the messages being exchanged, and guess the nature of the communication taking place.
4. List and briefly define categories of security services.
In simplified terms, X.800 as stated above defines a security service as a service implemented by a protocol layer of communicating open systems that ensures adequate security of the systems or of data transfers. X.800 divides these services into 6 categories and 14 specific services. The first category is authentication, the assurance that a communicating party is the one it claims to be. Authentication provides two services: peer entity authentication, providing confidence that two entities connecting over a network are who they claim to be, and data origin authentication, providing assurance in a connectionless transfer that the source of the data received is as claimed. The second category is access control, the prevention of unauthorized use of resources; this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do. The third category is data confidentiality, the protection of data from unauthorized disclosure. Data confidentiality includes connection confidentiality, which protects all user data on a connection; connectionless confidentiality, which protects all user data in a single data block; selective-field confidentiality, which protects selected fields within the user data on a connection or in a single data block; and finally traffic-flow confidentiality, which protects the information that might be derived from observation of traffic flows. The fourth category is availability, ensuring there is no denial of authorized access to network elements, stored information, information flows, services and applications due to events impacting the network; disaster recovery solutions fall under this category.
The fifth category is data integrity, the assurance that data received by an authorized entity has not been modified, inserted, deleted, or replayed. The services in this category include connection integrity with recovery, which provides the integrity of all user data on a connection, detects any modification, insertion, deletion, or replay of any data within an entire data sequence, and attempts to recover missing data. Connection integrity without recovery provides the same service but without recovery. Selective-field connection integrity provides the integrity of selected fields within the user data of a data block transferred over a connection and determines whether the selected fields have been modified, inserted, deleted, or replayed. Connectionless integrity provides the integrity of a single connectionless data block. Finally, selective-field connectionless integrity provides the integrity of selected fields within a single connectionless data block. Nonrepudiation is the sixth category within security services, protecting against denial by either of the entities involved in a communication of having participated in all or part of it; in simple terms it ensures that the originator of a message cannot deny having sent it. Its services are nonrepudiation, origin, which proves that the message was sent by the specified party, and nonrepudiation, destination, which proves that the message was received by the specified party.
5. List and briefly describe categories of security mechanisms.
First let's discuss specific security mechanisms. X.800 describes specific security mechanisms as being incorporated into the appropriate protocol layer, such as TCP or an application-layer protocol, in order to provide OSI security services. Specific security mechanisms include encipherment, the use of mathematical algorithms to transform data into a form that is not readily intelligible, where the transformation and subsequent recovery of the data depend on an algorithm and zero or more encryption keys. Reversible encipherment is an encryption algorithm that allows data to be encrypted and subsequently decrypted, while irreversible encipherment includes hash algorithms and message authentication codes, used in digital signatures and message authentication. The digital signature is another security mechanism, whereby data appended to, or a cryptographic transformation of, a data unit allows the recipient of the data unit to prove its source and integrity and protect against forgery. Access control is a variety of mechanisms that enforce access rights to resources. Data integrity is a variety of mechanisms used to assure the integrity of a data unit or stream of data units. Authentication exchange is a mechanism intended to ensure the identity of an entity by means of information exchange. Traffic padding is the insertion of bits into gaps in a data stream to frustrate traffic analysis attempts. Routing control enables the selection of particular physically secure routes for certain data and allows routing changes, especially when a breach of security is suspected. Finally, the last specific security mechanism is notarization, the use of a trusted third party to assure certain properties of a data exchange. In further discussing security mechanisms we now turn from specific mechanisms to what are known as pervasive security mechanisms, which are not specific to any particular OSI security service or protocol layer.
Under pervasive mechanisms we start with trusted functionality, that which is perceived to be correct with respect to some criteria established by a security policy. Next is the security label, a marking bound to a resource (such as a data unit) that names or designates the security attributes of that resource. Event detection is the detection of security-relevant events. A security audit trail is data collected and potentially used to facilitate a security audit, which is an independent review and examination of system records and activities. The final pervasive mechanism is security recovery, which deals with requests from mechanisms such as event handling and management functions, and takes recovery actions.
6. Consider an automated teller machine (ATM) in which users provide a personal identification number (PIN) and a card for account access. Give examples of confidentiality, integrity and availability requirements associated with the system. In each case, indicate the degree of importance of the requirement.
In general terms confidentiality ensures that information is not accessed by unauthorized persons, integrity ensures that information is not altered by unauthorized persons in a way that is undetectable by authorized users, and availability is the assurance that the system works promptly and that service is not denied to authorized users. Many organizations use the PCI requirements as a baseline model to protect ATMs globally. An example of the first requirement would be to support integrity and availability by installing, monitoring and maintaining a firewall to protect cardholder data, examining all traffic over the network and blocking communications that do not meet the organization's security criteria. Second, one should not use vendor default passwords, in order to protect the integrity of the system, since attackers can use them to reprogram ATMs; this can be prevented by using randomized local administrator passwords and a centralized directory service. Third, provide confidentiality by protecting stored cardholder data and encrypting transmissions of customer data across open, distributed systems, since attackers can intercept this data over the network; this includes data that is printed, stored locally, or transmitted over a public network to a remote server or service provider. Fourth, to further protect integrity, antivirus software should be used and monitored while developing and maintaining secure systems and applications, to keep malicious software off the network. Fifth, business functions should be segregated by limiting access to system components and cardholder data to only those individuals whose jobs require such access; privileges are therefore assigned through a job classification hierarchy to protect consumer confidentiality.
Sixth, provide each customer with a unique user name before allowing them to access the system or cardholder data, which provides greater confidentiality and integrity and supports nonrepudiation. Seventh, a banking organization should use video cameras for monitoring purposes, one of several techniques to restrict physical access to a card member's data and increase confidentiality. Eighth, monitor and track access to card member data and network resources to safeguard data integrity and to aid forensic analysis if the system is compromised; for example, this should include maintaining audit trails for access to all system components. Finally, an organization should regularly test security systems and processes and maintain a policy that addresses information security, to provide greater confidentiality, integrity and availability. In measuring the degree of these vulnerabilities we can take a quantitative approach, Risk = Assets (hardware, software, data/information and reputation) × Threats (actions by attackers who try to exploit vulnerabilities to damage assets) × Vulnerabilities (weaknesses of a system that could be accidentally or intentionally exploited to damage assets), or a qualitative approach, where risk is assessed based on rules and advice provided by security experts and rated on a scale (low, moderate, high). For this exercise we will focus on a qualitative approach, rating vulnerabilities high, moderate, or low in two dimensions: potential damage and probability of attack. One of the most critical elements of security is taking countermeasures to prevent malware from taking over your operating system, such as using firewalls to protect the integrity and availability of the system. If you do not do this, your operating system can be fully compromised and no longer under your control, leading to significant loss and downtime.
The second high-risk category is failing to discard vendor default passwords, which compromises the integrity of the system by allowing an attacker to reprogram the ATM. Regarding confidentiality, by protecting stored cardholder data and encrypting transmissions of customer data across open, distributed systems, attackers will have a much harder time intercepting information over the network; it is therefore essential that encryption keys are managed securely and all communications are encrypted. These are just three examples of degrees of security risk in an ATM system, although one could go on evaluating additional vulnerabilities.
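The qualitative approach above rates each vulnerability on two dimensions, potential damage and probability of attack. A minimal sketch of such a rating is below; the numeric weights and thresholds are illustrative assumptions of mine, not part of PCI or any standard.

```python
# Qualitative risk rating over two dimensions, as described above.
# The weights and thresholds are illustrative assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def qualitative_risk(potential_damage: str, attack_probability: str) -> str:
    """Combine the two dimensions into an overall low/moderate/high rating."""
    score = LEVELS[potential_damage] * LEVELS[attack_probability]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A compromised operating system: severe damage, likely attack.
print(qualitative_risk("high", "high"))   # high
# Vendor default passwords left in place but hard to reach remotely.
print(qualitative_risk("high", "low"))    # moderate
```

An analyst would tune the thresholds to the organization's own risk appetite; the point is only that the two dimensions combine into a single rating.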
7. What are the essential ingredients of a symmetric cipher?
Symmetric encryption is also known as conventional encryption or single-key encryption and is the more widely used of the two types of encryption. A symmetric encryption scheme has five ingredients. The first is the plaintext, the original message or data that is fed into the algorithm as input. The second is the encryption algorithm, which performs various substitutions and transformations on the plaintext. The third is the secret key, which is also input to the encryption algorithm; the exact substitutions and transformations performed by the algorithm depend on the key. Fourth is the ciphertext, the scrambled message produced as output, which depends on both the plaintext and the secret key; two different keys for a given message will generate two different ciphertexts. The final ingredient is the decryption algorithm, essentially the encryption algorithm run in reverse, which takes the ciphertext and the secret key and produces the original plaintext.
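The five ingredients can be made concrete with a toy cipher. The sketch below uses a repeating-key XOR, which is not secure and is chosen only because XOR makes encryption and decryption visibly symmetric; the key and message values are made up for illustration.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # operation twice restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"                    # 1. plaintext (input)
# 2. the encryption algorithm is xor_cipher itself
secret_key = b"example-shared-key"               # 3. the secret key
ciphertext = xor_cipher(plaintext, secret_key)   # 4. ciphertext (output)
recovered = xor_cipher(ciphertext, secret_key)   # 5. decryption reverses it
assert recovered == plaintext
```

A real system would use a vetted algorithm such as AES in place of the XOR, but the five roles stay the same.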
8. How many keys are required for two people to communicate via a symmetric cipher?
In a symmetric cipher the two people communicating use a shared key, meaning both parties hold identical keys exchanged through some secure method. In this scenario only one key is required: both parties in the communication use the same key to encrypt and decrypt all messages.
9. What are the two principal requirements for the secure use of symmetric encryption?
There are two principal requirements for the secure use of symmetric encryption. The first is a strong encryption algorithm: an attacker who knows the algorithm and has access to one or more ciphertexts should be unable to decipher the ciphertext or figure out the key, even if he or she possesses a number of ciphertexts together with the plaintext that produced each one. The second is that the sender and receiver must obtain copies of the secret key in a secure fashion and must keep the key secure, because anyone who discovers the key and knows the algorithm can read all communication encrypted with that key.
10. List three approaches to message authentication.
Message authentication falls under the category of data integrity: a message is said to be authentic if the communicating parties can confirm that a received or stored message is genuine and came from its alleged source. Verifying authenticity also means confirming that the message has not been altered, which can include checking a message's timeliness and its sequence relative to other messages flowing between the two parties. It would seem plausible to use symmetric encryption to authenticate a message if the receiver can recognize a valid message via an error-detection code, a sequence number and a timestamp (showing the message has not been delayed beyond the normally expected transit time). Unfortunately symmetric encryption alone is not a suitable tool for data authentication, because attacks such as block reordering remain a threat. Message authentication is therefore typically a separate function from message encryption. One approach is to broadcast a plaintext message with an authentication tag attached and have a single control center with an associated alarm that goes off if a message is violated; this is particularly useful when a message has multiple destinations but a single monitoring point is more efficient. In another scenario, when there is such an influx of messages that checking all of them would be cumbersome, messages are checked at random. Finally, an authentication tag can be attached to a computer program so that its authenticity can be checked. One useful technique for verifying authenticity is the message authentication code (MAC), in which a secret key is used to generate a small block of data that is appended to the message.
In this case, under the assumption that both the sender and receiver share the same secret key, the sender calculates the code from the message and the key; when the receiver gets the communication, he or she performs the same calculation with the shared key and compares the result to the received code. This protects the message's authenticity because the attacker, who is assumed not to know the secret key, cannot break the code: without the key the attacker cannot prepare a message with the proper code, and if the message includes a sequence number, the attacker cannot alter that sequence number undetected. Unlike a decryption algorithm, which must be reversible, a MAC function need not be reversible. Another technique is a one-way hash function, which takes a message or plaintext of any length as input and produces a fixed-length string of digits, the message digest, as output, in such a way that it is very difficult to recover the input from the output. Assuming the sender and receiver share the same encryption key, authenticity is assured; the same is true using public-key techniques, which do not require a secret key to be distributed to the communicating parties and which provide a digital signature as well as message authentication. Unlike the MAC, a hash function does not take a secret key as input.
11. What is a message authentication code?
A message authentication code (MAC) is a small block of data, generated using a secret key, that is appended to a message. Under the assumption that both the sender and receiver share the same secret key, the sender calculates the code from the message and the key; when the receiver gets the communication, he or she performs the same calculation with the shared key and compares the result to the received code. This protects the message's authenticity because the attacker, who is assumed not to know the secret key, cannot break the code: without the key the attacker cannot prepare a message with the proper code, cannot alter the message without invalidating the code, and, if the message includes a sequence number, cannot alter that sequence number undetected. Unlike a decryption algorithm, which must be reversible, a MAC function need not be reversible.
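The shared-key calculation described above can be sketched with HMAC, one standard MAC construction available in the Python standard library; the key and message values are illustrative.

```python
import hmac
import hashlib

secret_key = b"shared-secret"   # known only to sender and receiver

def make_mac(message: bytes) -> bytes:
    # HMAC-SHA256: derive a small fixed-size tag from message + key.
    return hmac.new(secret_key, message, hashlib.sha256).digest()

message = b"transfer 100 to account 42"
tag = make_mac(message)   # sender appends this tag to the message

# Receiver recomputes the MAC with the shared key and compares
# (compare_digest avoids timing side channels).
assert hmac.compare_digest(tag, make_mac(message))
# A tampered message no longer matches the received tag.
assert not hmac.compare_digest(tag, make_mac(b"transfer 900 to account 42"))
```

Note that verification simply recomputes the tag; nothing is ever "decrypted", which is why the MAC function need not be reversible.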
12. Briefly describe three schemes illustrated in figure 2.4.
There are three assurances illustrated in figure 2.4, which shows the use of a message authentication code (MAC) by a sender and receiver to authenticate a message using the input message and a shared secret key. First, the receiver has assurance the message has not been altered: if an opponent alters the message but cannot alter the code, the receiver's calculation of the code will differ from the received code, since without the secret key the attacker cannot adjust the code to correspond with the alterations in the message. Second, if the attacker does not know the secret key, he or she cannot prepare a message with the proper code, so the receiver is assured the message is from the alleged sender. Finally, if the message includes a sequence number, the opponent cannot alter it, so the receiver can be assured of the proper sequencing. Unlike decryption, the MAC's authentication algorithm need not be reversible, and because of the mathematical properties of the authentication function it is less vulnerable to being broken than encryption.
13. What properties must a hash function have to be useful for message authentication?
The purpose of a hash function is to produce a fingerprint of a file, message, or other block of data. To be useful for message authentication, a hash function H must have the following properties: H can be applied to a block of data of any size; H produces a fixed-length output; and H(x) is relatively easy to compute for any given x, making both hardware and software implementations practical. These first three properties are requirements for the practical application of a hash function to message authentication. The fourth property, known as the one-way or pre-image resistant property, states that for any given code h it is computationally infeasible to find x such that H(x) = h. This one-way characteristic is significant when a secret value is used in the authentication but is not itself transmitted: if the hash function were not one way, an attacker who intercepts the message and its hash code could invert the function to recover the secret value. In general, a one-way hash function is easy to compute in the forward direction, giving a code from a message, but practically impossible to run backward, giving a message from a code. The fifth property is that for any given block x it is computationally infeasible to find y != x with H(y) = H(x); in other words, it is infeasible to find an alternative message with the same hash value as a given message, which protects against forgery when an encrypted hash code is used. A hash function with this property is referred to as second pre-image resistant, sometimes also called weak collision resistant. A hash function satisfying the first five properties is known as a weak hash function. If the sixth property is also satisfied, it is considered a strong hash function: it must be computationally infeasible to find any pair (x, y) such that H(x) = H(y). A hash function with this property is referred to as collision resistant, sometimes called strong collision resistant, since it is hard to find any two inputs that hash to the same output; this protects against attacks in which one party generates a message for another party to sign. As an added benefit, the sixth property provides not only authenticity but also data integrity via the message digest.
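The first three properties are easy to observe directly with a real hash function such as SHA-256 from the standard library; the remaining properties are infeasibility claims, which the sketch can only hint at by showing how unrelated the digests of near-identical inputs are.

```python
import hashlib

# Property 1 and 2: any input size, fixed-length (32-byte) output.
assert len(hashlib.sha256(b"x").digest()) == 32
assert len(hashlib.sha256(b"x" * 1_000_000).digest()) == 32

# Property 3: H(x) is easy to compute for any given x.
d1 = hashlib.sha256(b"pay Alice 10").hexdigest()
d2 = hashlib.sha256(b"pay Alice 11").hexdigest()

# A one-character change yields an unrelated digest, which is what
# makes finding a pre-image or collision (properties 4-6)
# computationally infeasible in practice.
assert d1 != d2
```

The infeasibility properties themselves cannot be demonstrated by running code; they rest on the design of the hash function and the current state of cryptanalysis.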
14. What are the principal ingredients of a public key cryptosystem?
A public-key cryptosystem is asymmetric, involving the use of two separate keys, as opposed to symmetric encryption, which requires one key. Public-key cryptosystems also rely on complex mathematical functions, in contrast to symmetric encryption algorithms, which use simple operations on bit patterns. To further differentiate the two, the key used in symmetric encryption is called a secret key, while the two keys used in public-key encryption are called the public key and the private key. The private key is also kept secret and known only to its owner, but to avoid confusion with symmetric encryption, public-key encryption refers to it as a private key rather than a secret key. A public-key encryption scheme has six ingredients. The first is the plaintext, the readable message or data fed into the algorithm as input. The second is the encryption algorithm, which performs various transformations on the plaintext. Third and fourth are the public and private keys, a pair of keys selected so that if one is used for encryption the other is used for decryption; the exact transformations performed by the encryption algorithm depend on which key is provided as input. Fifth is the ciphertext, the scrambled message produced as output, which depends on the plaintext and the key; for a given message, two different keys will produce two different ciphertexts. The final ingredient is the decryption algorithm, which accepts the ciphertext and the matching key and produces the original plaintext.
15. List and briefly define three uses of a public key cryptosystem.
The three uses of a public-key cryptosystem are: encryption/decryption, in which the originator encrypts a message, transforming plaintext into ciphertext, with the recipient's public key; the digital signature, in which the originator uses a secure hash function to compute a hash value for the message and then encrypts the hash code with his or her private key, creating a digital signature (signing is achieved by a cryptographic algorithm applied to the message or to a small block of data that is a function of the message); and finally key exchange, in which two sides cooperate to exchange a key, with various approaches possible involving the private keys of one or both parties.
16. What is the difference between a private key and a secret key?
A private key and a secret key are both kept secret and known only to their owners; the difference is one of terminology. The single key used in symmetric encryption is called a secret key, while the key kept secret in public-key encryption is called a private key, in order to avoid confusion between the two schemes.
17. What is a digital signature?
A digital signature is used in public-key encryption to provide assurance that a message came from the individual it claims to come from; the downside is that it does not provide confidentiality. A digital signature is created as follows: the originator wants to send a message to another party, and although it is not important that the message be kept secret, the receiver wants to be sure it is from the claimed sender. The originator therefore uses a secure hash function to generate a hash value for the message and encrypts the hash code with his or her private key, creating the digital signature. The originator then sends the message with the signature attached. When the receiver obtains the message plus signature, he or she calculates a hash value for the message, decrypts the signature using the originator's public key, and compares the calculated hash value to the decrypted hash value. If the two hash values match, the receiver can rest assured the message was signed by the claimed originator: nobody else holds the sender's private key, so no one else could have created a signature that decrypts correctly with the originator's public key. What is also significant about this approach is that it is impossible to alter the message undetected without access to the sender's private key, so the message is authenticated both in terms of source and in terms of data integrity. It is important to note that the digital signature does not provide confidentiality: although the message is safe from undetected alteration, it is not safe from a passive attacker eavesdropping, since the message itself is transmitted in the clear.
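The hash-then-sign flow described above can be sketched with textbook RSA. The tiny demo parameters below (p = 61, q = 53, so n = 3233, e = 17, d = 2753) are a well-known classroom example; real signatures use keys thousands of bits long plus padding, so this only illustrates the flow, not a secure implementation.

```python
import hashlib

# Toy RSA parameters: n = 61 * 53, e * d = 1 (mod phi(n)).
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def digest(message: bytes) -> int:
    # Hash the message, then reduce modulo n so it fits the toy key size
    # (a real scheme would use padding instead of this reduction).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # "Encrypt" the hash with the private key: the digital signature.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the public key and compare hashes.
    return pow(signature, e, n) == digest(message)

msg = b"meet at noon"
sig = sign(msg)
assert verify(msg, sig)                 # genuine signature checks out
assert not verify(msg, (sig + 1) % n)   # a forged signature is rejected
```

Note that anyone holding the public pair (n, e) can verify the signature, but only the holder of d could have produced it, which is exactly the source-authentication argument made above.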
18. What is a public key certificate?
A public key certificate is used to prevent forgery: public keys are distributed openly, so without some safeguard anyone could pass off a key as belonging to someone else. A certificate consists of a public key plus the user ID of the key owner, with the whole block signed by a trusted third party, usually known as a certificate authority (CA), that is trusted by the user community. The certificate also contains information about the CA and the period of validity of the certificate, so any party needing the user's public key can obtain the certificate and verify that it is valid via the trusted signature.
19. How can public key encryption be used to distribute a secret key?
By combining public-key cryptography and secret-key cryptography, the sender can encrypt the message using a secret-key (symmetric) encryption algorithm with a one-time session key created by the sender. The sender then encrypts the session key using the receiver's public key. The receiver obtains the session key by decrypting it with his or her private key, and with the session key the receiver can decrypt the message. This way a long message is encrypted very quickly, and the sender can deliver it without needing a secure channel for agreeing on the key beforehand. The two most popular public-key schemes used to distribute a secret key are Diffie-Hellman key exchange and RSA. As an example of the Diffie-Hellman approach, suppose two parties, Alice and Bob, want to negotiate a secret key with each other across the network. Alice and Bob first agree on a prime number and an integer base; say the prime is 13 and the base is 6. Next, each party chooses a random secret: Alice picks 3 and Bob picks 10, and these secret numbers are never sent across the network. Alice generates her public value by raising the base 6 to the power of her secret 3 and reducing modulo the prime 13: 6^3 = 216, and 216 mod 13 = 8. The value 8 is Alice's public value, which is transmitted across the network to Bob.
Bob takes Alice's public value 8, raises it to the power of his secret 10, and reduces modulo 13: 8^10 mod 13 = 12. The value 12 is the shared secret key. The process works in reverse as well: Bob computes his own public value as 6^10 mod 13 = 4 and transmits it to Alice, who raises it to the power of her secret 3 and reduces modulo 13: 4^3 = 64, and 64 mod 13 = 12, again producing the shared secret key 12. Although the prime, the base, and the two public values are transmitted in the open, the random secrets chosen by Alice and Bob are not, and recovering them from the public values (the discrete logarithm problem) is extremely difficult. That is why Alice and Bob get so much security from the Diffie-Hellman algorithm.
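The worked example above can be checked directly in a few lines of Python, using the same toy numbers:

```python
# Diffie-Hellman with the toy numbers from the example above
p, g = 13, 6            # agreed prime and base (public)
a, b = 3, 10            # Alice's and Bob's secrets (never transmitted)

A = pow(g, a, p)        # Alice's public value: 6^3 mod 13 = 8
B = pow(g, b, p)        # Bob's public value:   6^10 mod 13 = 4

# each side combines the other's public value with its own secret
shared_alice = pow(B, a, p)   # 4^3 mod 13
shared_bob = pow(A, b, p)     # 8^10 mod 13
assert shared_alice == shared_bob == 12
```

With realistic parameters p would be a prime of thousands of bits, but the arithmetic is identical.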
20. Suppose that someone suggests the following way to confirm that the two of you are both in possession of the same secret key. You create a random bit string the length of the key, XOR it with the key and send the result over the channel. Your partner XORs the incoming block with the key (which should be the same as your key) and sends it back. You check, and if what you receive is your original random string, you have verified that your partner has the same secret key, yet neither of you has ever transmitted the key. Is there a flaw in this scheme?
There is a flaw in this scheme: an eavesdropper who records both transmissions can recover the key by simply XORing them together, since (R XOR K) XOR R = K. In other words, the attacker reconstructs the key by XORing the encrypted block with its decrypted counterpart. The attack also works actively. Suppose an attacker wants the authenticated sender's secret key K2. The attacker sends a random number R to the sender, un-XORed. Following the protocol, the sender XORs what he or she received with the key and returns R XOR K2. The attacker now XORs that reply with R to get R XOR K2 XOR R = K2, and the opponent has the sender's secret key.
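The passive version of the attack can be demonstrated in a few lines (a sketch; `xor` is a helper defined here, not a library call):

```python
import secrets

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

key = secrets.token_bytes(16)      # the shared secret key
r = secrets.token_bytes(16)        # your random bit string

first = xor(r, key)                # transmission 1: R XOR K
second = xor(first, key)           # transmission 2: partner XORs with K, i.e. R

# an eavesdropper who recorded both transmissions recovers the key
recovered = xor(first, second)     # (R XOR K) XOR R = K
assert recovered == key
```

The key never crosses the wire by itself, yet XORing the two observed messages yields it exactly.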
21. In general terms what are the four means of authenticating a user’s identity?
There are four means of authenticating a user's identity. The first is something the individual knows, such as a password, a personal identification number (PIN), or answers to a prearranged set of questions. The second is something the individual possesses, such as electronic keycards, smart cards, and physical keys; these forms of authentication are referred to as tokens. The third is something the individual is (static biometrics), which includes fingerprint recognition, retina recognition, and facial recognition. The fourth is something the individual does (dynamic biometrics), which includes voice pattern recognition, handwriting characteristic recognition, and typing rhythm recognition. All of these schemes have downsides: attackers can guess or steal passwords and forge or steal tokens, users can lose a token or forget a password, and managing password or token information imposes significant overhead on systems and security administration. Biometric technology has its own problems as well, including a great deal of overhead, user acceptance and convenience issues, and false positives and false negatives.
22. List and briefly describe the principal threats to the secrecy of passwords.
Principal threats to the secrecy of passwords include the following. In an offline dictionary attack, the attacker bypasses access controls to obtain the system password file, then compares the password hashes against hashes of commonly used passwords; if a match is found, the attacker can gain access using that ID/password combination. Countermeasures include controls preventing unauthorized access to the password file, intrusion detection measures, and rapid reissuance of potentially compromised passwords. In a specific account attack, the opponent targets a particular account and submits password guesses until the correct password is found. A countermeasure is a lockout mechanism, in which the system allows only a limited number of failed guesses before locking the account, thwarting any further attempts. In a popular password attack, the opponent takes a password that many people choose because it is easy to remember and tries it against a wide range of user IDs. Countermeasures include disallowing common password choices and analyzing the IP addresses and submission patterns of authentication requests. In password guessing against a single user, the opponent gathers information about the account holder and the system's password policies and uses that knowledge to guess the password.
Countermeasures against this last attack include training users to choose passwords that are difficult to guess, enforcing appropriate password characters and a minimum length, and requiring frequent password changes, to name a few. Workstation hijacking is a threat in which the attacker waits until a logged-in workstation is left unattended; one countermeasure is to log the user off automatically after the workstation has been inactive for a certain period. Exploiting user mistakes is another way to compromise an account: users are likely to write down an issued password in order to remember it, allowing an opponent who finds the note to access the account easily. A user may also willingly share a password with a peer, or an opponent may use social engineering tactics to deceive a user or an account manager into revealing a password. Countermeasures against such mistakes include user training, intrusion detection, and combining simple passwords with another authentication component. Exploiting multiple password use becomes a threat when different network devices share the same or a similar password for a given user; this can be thwarted by disallowing the same or similar passwords on particular network devices. Finally, there is electronic monitoring: if a password is communicated across a network to log on to a remote system, it is vulnerable to eavesdropping. Simple encryption schemes do not solve this problem, because the encrypted password can be observed and replayed by an attacker.
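The lockout countermeasure mentioned above can be sketched as follows. All names and the attempt limit here are hypothetical; real systems also add details such as timed lockouts and administrator alerts.

```python
# toy lockout mechanism: lock an account after repeated failed guesses
MAX_ATTEMPTS = 5
failed = {}   # per-user count of consecutive failures

def try_login(user, password, check_credentials):
    if failed.get(user, 0) >= MAX_ATTEMPTS:
        return "locked"                      # guessing attack thwarted
    if check_credentials(user, password):
        failed[user] = 0                     # reset the counter on success
        return "ok"
    failed[user] = failed.get(user, 0) + 1   # record the failure
    return "denied"
```

Once the limit is reached, even the correct password is refused until the account is unlocked, which is precisely what stops a specific account attack from continuing.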
23. What are two common techniques used to protect a password file?
Two common techniques used to protect a password file are hashed passwords and a salt value, found on virtually all UNIX variants as well as a number of other operating systems. In this scheme, when a user selects a password, the system combines it with a salt value, which may be derived from the time at which the password was selected or from a pseudorandom or random number. The password and salt serve as inputs to a hashing algorithm, producing a fixed-length hash code, and the hashed password is then stored together with a plaintext copy of the salt in the password file entry for the corresponding user ID. When the user submits an ID and password to log on, the operating system uses the ID to index into the password file and retrieve the plaintext salt and the hashed password. The salt and the submitted password are supplied as input to the hashing routine, and if the result matches the stored value, the password is accepted. The salt serves three purposes: it prevents duplicate passwords from being visible in the password file, since users with the same password are assigned different salt values and thus different hash values; it increases the difficulty of offline dictionary attacks in proportion to the number of bits in the salt; and it makes it extremely difficult to determine whether a person with passwords on two or more systems has used the same password on all of them.
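A minimal sketch of the salted-hash scheme follows, assuming PBKDF2 with SHA-256 as the hashing routine; the text describes a generic hash function, so that particular choice (and the iteration count) is an assumption for illustration.

```python
import hashlib, os

def hash_password(password, salt=None):
    # a fresh random salt per user (assumption: 16 bytes)
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest   # both are stored in the password file entry

def verify_password(password, salt, stored_digest):
    # recompute the hash with the stored salt and compare
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return candidate == stored_digest
```

Hashing the same password twice yields different stored values because each call draws a fresh salt, which is exactly the duplicate-hiding property described above.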
24. List and briefly describe four common techniques for selecting or assigning passwords.
The four common techniques for selecting passwords are user education, computer-generated passwords, reactive password checking, and proactive password checking. User education is widely believed to be unlikely to succeed at most installations, particularly where there is a large user population or a lot of turnover, since users tend to be poorly versed in picking good passwords or simply ignore guidelines; it is nevertheless good practice to provide them. One guideline is to avoid well-known phrases as passwords, though a phrase can still be useful if reduced to an easy-to-remember acronym. Computer-generated passwords have the drawback that passwords that are too random in nature are hard for users to remember. The best-known password generator is FIPS PUB 181, which generates words by constructing pronounceable syllables and linking them together, using a random number generator to produce the stream of characters that forms the syllables and words. Reactive password checking is a strategy in which the system periodically runs its own password cracker to find guessable passwords, then cancels any passwords that are guessed and notifies the users. The main problems with this technique are its cost and resource intensiveness, and the fact that weak passwords remain vulnerable until the reactive checker finds them. The fourth technique is proactive password checking, in which the user is allowed to select his or her own password, but the system checks the selection and rejects passwords that are not allowable.
The proactive password checker thus guides the user toward a memorable password drawn from a large password space that is unlikely to be guessed, helping to strike a balance between user acceptability and password strength.
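A proactive password checker along these lines might look like the following sketch; the specific rules and the tiny blocklist are made up for illustration, not taken from any particular system.

```python
import re

# illustrative blocklist of popular passwords (real lists have thousands)
COMMON = {"password", "12345678", "qwerty", "letmein"}

def is_allowable(pw):
    # hypothetical policy: at least 8 characters, not a common password,
    # and containing a lowercase letter, an uppercase letter, and a digit
    if len(pw) < 8 or pw.lower() in COMMON:
        return False
    return all(re.search(p, pw) for p in (r"[a-z]", r"[A-Z]", r"\d"))
```

A rejected selection would prompt the user to try again, so weak passwords never enter the system in the first place.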
25. Explain the difference between a simple memory card and a smart card.
The difference between a simple memory card and a smart card, both of which are tokens, is that a memory card can store but not process data. A bank card, for example, has a magnetic stripe that stores a simple security code readable by an inexpensive card reader, and it usually requires access to remote databases at the time of a transaction. A smart card, by contrast, can hold a much larger amount of information and contains an entire embedded microprocessor, including processor, memory, and I/O ports, accessible to a compatible reader either through exposed electrical contacts or via embedded wireless communication. Smart card applications include health care records, where a person's medical history can be made readily available, as well as passports, cell phone configuration, pay television, and dial-up connections to computers, to name a few. Smart cards without question provide far greater versatility than simple memory cards.
26. List and briefly describe the principal physical characteristics used for biometric identification.
A biometric identification system authenticates an individual based on unique physiological or behavioral characteristics, which fall into two categories: static and dynamic. Static biometric characteristics include facial recognition, which measures the shape and location of key facial features such as the eyes, eyebrows, nose, lips, and chin; an infrared camera can also be used to produce a facial thermogram that correlates with the underlying vascular system of the face. Fingerprint recognition is another static characteristic, believed to be unique across the entire human population; it extracts a number of features from a fingerprint, such as the pattern of ridges and furrows on the surface of the fingertip, for storage as a numerical surrogate for the full fingerprint pattern. Hand geometry systems identify features of the hand, including its shape and the length and width of the fingers. Retinal pattern systems, also static, obtain a digital image of the retinal pattern (the veins beneath the retinal surface) by projecting a low-intensity beam of visual or infrared light into the eye. The final static biometric is the iris, which has a unique and detailed physical structure. Dynamic biometric characteristics include the signature, an individual's unique style of handwriting; because multiple signature samples from a single individual vary, developing a computer representation of a signature is a complicated task. Finally, voice recognition is also a dynamic biometric and likewise a complicated recognition task, because although voice patterns are closely tied to the physical and anatomical characteristics of the speaker, samples from the same speaker still vary over time.
According to the chart depicted on page 89, figure 3.5, iris recognition provides the greatest accuracy, but it is also the most expensive of the characteristics described above; fingerprint recognition therefore looks like the most viable choice, as it strikes a good balance between cost and accuracy.
27. In the context of biometric user authentication, explain the terms, enrollment, verification, and identification.
Enrollment is the process by which each individual who is to be included in the database of authorized users is first registered with the system. The user presents a name and typically a password or PIN, and the system senses some biometric characteristic of the user, digitizes the input, and extracts a set of features that can be stored as a set of numbers known as the user's template, representing the user's unique characteristic. The system now holds a name, a password or PIN, and a biometric value for that user. Verification is analogous to logging on with a memory card or smart card coupled with a password or PIN: the user enters a PIN and presents the biometric to a sensor, and the extracted features are compared with the template stored for that particular user. If there is a match, the user is authenticated as genuine. In identification, the individual presents the biometric to the sensor but provides no additional information; the system compares the presented features against all stored templates and, if no sufficiently close match is found, rejects the user.
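The difference between verification and identification can be sketched with toy templates; the feature vectors, distance metric, and threshold below are all invented for illustration, since real systems use far richer features.

```python
# toy template matching: each enrolled user has a stored feature vector
def distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

templates = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}   # from enrollment

def verify(claimed_user, sample, threshold=0.3):
    # verification: compare the sample against ONE claimed identity
    return distance(templates[claimed_user], sample) <= threshold

def identify(sample, threshold=0.3):
    # identification: search ALL templates for the closest acceptable match
    best = min(templates, key=lambda u: distance(templates[u], sample))
    return best if distance(templates[best], sample) <= threshold else None
```

Verification is a one-to-one comparison while identification is one-to-many, which is why identification is both slower and more error-prone as the database grows.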
28. Define the terms false match rate and false non-match rate, and explain the use of a threshold in relationship to these two rates.
A false match occurs when the characteristics of a biometric reading closely match a non-corresponding template in the database, producing a matching score high enough to qualify as a match; as a result, the user being identified, whether legitimate or an unauthorized imposter, is matched with the wrong identity. A false non-match occurs when the measured distance between the current biometric reading of an authorized user and the corresponding template in the database falls outside the acceptable tolerance, due to the condition of the biometric feature or some other incidental reason; in the case of a fingerprint, for example, results may vary because of sensor noise or changes in the print due to dryness, swelling, and so on. In essence the matching score is too low to be accepted, resulting in the denial of an authentic claim. Because the score distributions of imposters and genuine users are likely to overlap, it is often difficult to distinguish an imposter from a genuine user, so a threshold value is selected, typically depicted against a pair of bell curves: if the matching score is greater than the threshold it is declared a match, and if it is less than the threshold it is declared a mismatch. In the chart depicted on page 91, figure 3.7, the shaded region to the right of the threshold line indicates where a false match is possible, and the shaded region to the left indicates the range of values where a false non-match is possible; the area of each shaded region represents the probability of a false match or false non-match respectively. Moving the threshold to the left or right alters these probabilities, so a decrease in the false match rate results in an increase in the false non-match rate and vice versa. The rate in each case refers to the probability of a false match or false non-match.
Therefore, for a given biometric scheme, plotting the false match rate against the false non-match rate yields the operating characteristic curve, which shows the effectiveness of a system with respect to correct identifications and misidentifications. As an example, the book depicts characteristic curves for face, fingerprint, voice, hand, and iris biometrics. The horizontal axis ranges over false match rates from 0.0001% to 100%, and the vertical axis over false non-match rates from 0.1% at the bottom to 100% at the top. The chart shows quite clearly that the iris had no false matches, while over a broad range of false match rates the facial biometric is clearly the worst performer.
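The threshold trade-off can be illustrated with made-up matching scores (the numbers below are invented, not taken from the book's chart):

```python
# invented matching scores to illustrate the threshold trade-off
genuine = [0.90, 0.85, 0.80, 0.70]    # scores from genuine users
imposter = [0.60, 0.55, 0.40, 0.30]   # scores from imposters

def rates(threshold):
    # false match rate: imposters scoring at or above the threshold
    fmr = sum(s >= threshold for s in imposter) / len(imposter)
    # false non-match rate: genuine users scoring below the threshold
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr
```

Sweeping the threshold from low to high drives the false match rate down while pushing the false non-match rate up, which is the trade-off the operating characteristic curve captures.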
29 Describe the general concept of a challenge-response protocol.
A challenge-response protocol is an authentication protocol used with smart tokens, in which the computer system creates a challenge, such as a random string of numbers, and the smart token generates a response based on that challenge. For example, with public-key cryptography the token could encrypt the challenge string with the token's private key. Put more simply, in a challenge-response protocol one party presents a question and the other party must provide a valid answer in order to be authenticated.
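As a sketch, here is a symmetric variant of the idea using an HMAC over the challenge, where the token and verifier share a key in advance; this stands in for, rather than reproduces, the public-key version described above.

```python
import hashlib, hmac, secrets

shared_key = secrets.token_bytes(32)   # provisioned into the token in advance

def make_challenge():
    return secrets.token_bytes(16)     # verifier's fresh random challenge

def token_response(key, challenge):
    # the token proves key possession by keying a MAC over the challenge
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is fresh and random, a recorded response cannot be replayed against a later challenge, which is the core property of the protocol.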
30. Explain the suitability or unsuitability of the following passwords:
a. YK334 is not suitable because it is less than six characters in length.
b. mfmitm (for "my favorite movie is tender mercies") is suitable: it is not a dictionary word or well-known phrase, and it is easy to remember as long as the user recalls the phrase behind the acronym.
c. Natalie1 is not suitable, as one's own name or a familiar name is easy to guess; such passwords are among the most easily cracked.
d. Washington is not suitable, as it is too common a name; the capital "W" is its only redeeming feature.
e. Aristotle is not suitable, as it is a common name that can be looked up in a dictionary; the capital "A" is its only redeeming feature.
f. tv9stove is suitable: it is easy to remember, and the 9 between "tv" and "stove" adds strength against dictionary attacks.
g. 12345678 is not suitable at all; it is at far too great a risk of being matched, consistently ranking among the most commonly cracked passwords.
h. dribgib is too difficult to remember unless it is an acronym or pattern the user is already familiar with.
31. Assume that passwords are selected from four character combinations of 26 alphabetic characters. Assume that an adversary is able to attempt passwords at a rate of one per second.
a. Assuming no feedback to the adversary until each attempt has been completed, what is the expected time to discover the correct password?
The calculation is T = 26^4 / 2 = 456,976 / 2 = 228,488. So on average it would take 228,488 seconds (roughly 2.6 days) to discover the correct password.
b. Assuming feedback to the adversary flagging an error as each incorrect character is entered, what is the expected time to discover the correct password?
The calculation is T = 13 × 4 = 52 seconds: with feedback flagging each incorrect character as it is entered, the adversary expects to try half of the 26 possibilities (13) for each of the 4 character positions.
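Both calculations can be verified directly:

```python
# part (a): no feedback, so on average half the keyspace must be searched
keyspace = 26 ** 4          # 456,976 four-letter passwords
expected_a = keyspace // 2  # 228,488 seconds at one guess per second

# part (b): per-character feedback lets each of the 4 positions be
# cracked independently, at an expected 13 guesses (half of 26) apiece
expected_b = 13 * 4         # 52 seconds
```

The gap between the two answers (a factor of about 4,400) shows how devastating per-character feedback is to password security.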