
Tuesday, October 27, 2009

Cyberlaw in Malaysia

Malaysian Internet users are expected to reach four million
(Long, 2000), a healthy precursor to the adoption of e-commerce
transactions, which the incumbent government encourages. These
business-to-business and business-to-consumer transactions
(e-commerce) take place in cyberspace, a virtual region that
defies geographical boundaries and national laws. Inevitably,
cyber laws must be enacted to regulate the authenticity and
security of business transactions in virtual space, and to
ensure that the rights and duties of interacting parties in
cyberspace are determined and capable of enforcement.

The Malaysian Cyber Law Act 1997 (referred to as the said Act),
which includes the Digital Signature Act 1997 (DSA), the
Computer Crimes Act 1997 (CCA) and the Telemedicine Act 1997
(TA), was enacted to facilitate, regulate and spur the growth
of e-commerce in Malaysia. Laws, regulations and advocacy
agencies do encourage the growth of e-commerce.

Deficiencies are inherent in the said Act. Shortcomings of the
DSA are as follows: certification authorities are not
responsible for ensuring the security and confidentiality of
the private key, which is used for authentication and
validation purposes, as that responsibility lies with the
private key holder.

Internet usage by legal practitioners in Malaysia is not
widespread, despite its vast potential for expediting
communication and research.

Exposure to the Internet does not have a significant positive
influence on legal practitioners' knowledge of cyber laws. The
reason perhaps lies in the fact that legal practitioners do not
use the Internet extensively for purposes other than accessing
information. During interviews with a number of lawyers, it was
revealed that most lawyers do not even have an e-mail address.

IT-related experience does not have a significant positive
influence on legal practitioners' knowledge of cyber laws. It's
important to note that this variable is measured on two
dimensions: familiarity with cases pertaining to cyber laws and
ability to use the technology that would be governed by cyber
laws. The reason for the finding could be the fact that in
Malaysia legal practitioners are just coming to grips with the
advent of IT. To date, litigation pertaining to computer
technology or computer-related technology has yet to become a
staple diet of the Malaysian legal practitioner. In addition,
if legal practitioners already suffer from technological
phobia, their knowledge of the application of information
communication technology is very much doubted.

Influence of Knowledge of Cyber Laws as an Intervening Variable
between the Independent Variables and Adequacy of Cyber Laws

Knowledge of cyber laws does not have an intervening effect on
the relationship between the independent variables and adequacy
of cyber laws. This could be attributed to the fact that the
results also show that legal practitioners have a low level of
knowledge pertaining to cyber laws. Lack of such knowledge
rests in many factors: legal practitioners were not involved in
the formulation of cyber laws; they have not taken up cases
dealing with cyber laws; and the), are not comfortable with the
use of IT-related technologies. In the absence of knowledge of
cyber laws as an intervening factor, the results show that
IT-related experience has a significant positive bearing on
legal practitioners' perception of the adequacy of cyber laws.

What is Cyberlaw?

When the Internet was developed, its founding fathers had hardly any inkling that it would transform itself into an all-pervading revolution that could be misused for criminal activities and would require regulation. Today, many disturbing things happen in cyberspace. Due to the anonymous nature of the Internet, it is possible to engage in a variety of criminal activities with impunity, and intelligent people have been grossly misusing this aspect of the Internet to perpetrate criminal activities in cyberspace. Hence the need for Cyberlaw.

The Internet is believed to be full of anarchy, and a system of law and regulation therein seems a contradiction. However, cyberspace is governed by a system of law and regulation called Cyberlaw. There is no single exhaustive definition of the term “Cyberlaw”. Simply speaking, Cyberlaw is a generic term that refers to all the legal and regulatory aspects of the Internet and the World Wide Web. Anything concerned with, related to, or emanating from any legal aspects or issues concerning any activity of netizens and others in cyberspace comes within the ambit of Cyberlaw. The growth of electronic commerce has propelled the need for vibrant and effective regulatory mechanisms that would further strengthen the legal infrastructure so crucial to its success. All these regulatory mechanisms and legal infrastructures come within the domain of Cyberlaw.

What is the importance of Cyberlaw ?

Cyberlaw is important because it touches almost all aspects of transactions and activities on and concerning the Internet, the World Wide Web and cyberspace. Initially it may seem that Cyberlaw is a very technical field with no bearing on most activities in cyberspace, but nothing could be further from the truth. Whether we realize it or not, every action and every reaction in cyberspace has some legal and cyber-legal perspective.

Today, awareness of Cyberlaw is beginning to grow. In the beginning, many technical experts felt that legal regulation of the Internet was not necessary. But with the rapid growth of technologies and the Internet, it is crystal clear that no activity on the Internet can remain free from the influence of Cyberlaw. Publishing a Web page is an excellent way for any commercial business or entity to vastly increase its exposure to millions of persons, organisations and governments world-wide. It is that feature of the Internet which is causing much controversy in the legal community.

Cyberlaw

Since the beginning of civilization, man has always been motivated by the need to make progress and better existing technologies. This has led to tremendous development, each advance a launching pad for further development. Of all the significant advances made by mankind from the beginning until now, probably the most important is the development of the Internet. To put it in a common man’s language, the Internet is a global network of computers, all of them speaking the same language. In 1969, America's Department of Defense commissioned the construction of a super-network called ARPANET. The Advanced Research Projects Agency Network (ARPANET) was basically intended as a military network of 40 computers connected by a web of links and lines. This network slowly grew, and the Internet was born. By 1981, over 200 computers were connected from all around the world. Now the figure runs into millions.

The real power of today's Internet is that it is available to anyone with a computer and a telephone line. The Internet places in an individual's hands the immense and invaluable power of information and communication.

Internet usage has increased significantly over the past few years, and the number of data packets flowing through the Internet has increased dramatically. According to International Data Corporation ("IDC"), approximately 163 million individuals or entities will use the Internet by the end of this year, as opposed to 16.1 million in 1995. Left to its own devices, such a trend is highly unlikely to reverse itself. Given this present state of the Internet, the necessity of Cyberlaws becomes all the more important.
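The IDC growth figures quoted above can be sanity-checked with quick arithmetic. The sketch below computes the overall multiple and an implied annual growth rate; the end year is not stated in the text, so the four-year span is an assumption for illustration only.

```python
# Quick check of the IDC figures quoted above: 16.1M users in 1995
# vs. a projected 163M "by the end of this year".
users_1995 = 16.1e6
users_projected = 163e6

growth_multiple = users_projected / users_1995
print(f"Overall growth: {growth_multiple:.1f}x")  # ~10.1x

# Hypothetical span -- the source does not give the end year.
assumed_years = 4
cagr = growth_multiple ** (1 / assumed_years) - 1
print(f"Implied annual growth over {assumed_years} years: {cagr:.0%}")
```

Even under a generous multi-year span, the implied compound growth rate is enormous, which is the point the paragraph is making.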

Computer Crime

Computer crime can broadly be defined as criminal activity involving an information technology infrastructure, including illegal access (unauthorized access), illegal interception (by technical means of non-public transmissions of computer data to, from or within a computer system), data interference (unauthorized damaging, deletion, deterioration, alteration or suppression of computer data), systems interference (interfering with the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data), misuse of devices, forgery (ID theft), and electronic fraud.

A computer can be:
#attacked
#used to attack
#used as a means to commit crime

Computer crime is hard to prosecute because of:
#low computer literacy (lack of understanding)
#no physical clues (lack of physical evidence)
#intangible forms of assets
#treatment as juvenile crime
#lack of political impact

Examining a Case for Ethical Issues
1. Understand the situation. Determine the issues involved.
2. Know several theories of ethical reasoning.
3. List the ethical principles involved.
4. Determine which principles outweigh others.

Summary
Laws are formally adopted rules for acceptable behavior in modern society. Ethics are socially acceptable behaviors. The key difference between laws and ethics is that laws carry the sanction of a governing authority and ethics do not.
Organizations formalize desired behaviors in documents called policies. Policies must be read and agreed to before they are binding.
Civil law represents a wide variety of laws that are used to govern a nation or state. Criminal law addresses violations that harm society and are enforced by agents of the state or nation. Tort law is conducted by means of individual lawsuits rather than criminal prosecution by the state.

Protecting Programs and Data

Copyrights
#designed to protect the expression of ideas, and so apply to creative works such as stories and songs
#intended to allow regular and free exchange of ideas
#must apply to an original work, and the work must be in some tangible medium of expression
#cover works in the arts, literature and written scholarship

Patents
#apply to the results of science, technology and engineering
#can protect a “new and useful process, machine, manufacture or composition of matter”
#are designed to protect the device or process for carrying out an idea, not the idea itself

Trade Secret
#must be kept a secret
#the owner must protect the secret by any means, such as storing it in a safe, encrypting it, and making employees sign a statement that they will not disclose it
#trade secret protection can vanish through reverse engineering

Information and The Law
Information as an Object
#not depletable: information can be sold again and again without depleting stock or diminishing quality
#the information has the value, not the medium
#can be replicated: the information can be used and sold many times at minimal marginal cost; the cost to produce another copy after having produced others is small
#value is timely: the value of information often depends on when you know it
#often transferred intangibly: information is delivered as bits on a cable

Legal Issues Related to Information
Database
#Problem: it is difficult to determine that a set of data came from a particular database, so that the database owner can claim compensation
Electronic commerce
#Goods are ordered electronically
#Technical protection available: digital signatures and other cryptographic protocols
#Problem: how to prove conditions of delivery

Rights of Employees and Employers
Ownership of a patent
#The person who owns a work under patent and copyright law is the inventor (producer)
Ownership of a copyright
#Similar to ownership of a patent
#The programmer is the presumed owner of the work, and the owner has all rights to the work
Work for hire
#The employer, not the employee, is considered the author of the work

Licenses
#An alternative to the ‘work for hire’ arrangement
#The programmer develops and retains full ownership of the software
#The programmer grants a license to a company to use the program
#A license can be for one copy or unlimited copies, to be used at one location or many, etc.

Trade secret protection
#A trade secret is not registered
#Ownership must be established
#The information must be treated as confidential data

Employment contracts
An employment contract will express the rights of ownership. It typically specifies that:
#The employee is hired to work as a programmer exclusively for the benefit of the company
#The company states that it is a work-for-hire situation
#The company claims all rights to any programs developed, including all copyrights and the right to market
#The employee receives access to certain trade secrets as part of employment, and the employee agrees not to reveal those secrets
#Sometimes an agreement not to compete is included, such that the employee may not work in the same field for a set period of time after termination

Legal & Ethical

Law
a rule of conduct or action prescribed or formally recognized as binding, or enforced by a controlling authority; the term implies imposition by a sovereign authority and the obligation of obedience on the part of all subject to that authority

Ethics
a set of moral principles or values; the principles of conduct governing an individual or a group; an objectively defined standard of right and wrong

Categories of Law
Civil law: represents a wide variety of laws that govern a nation or state
Criminal law: addresses violations harmful to society and is actively enforced through prosecution by the state
Tort law enables individuals to seek recourse against others in the event of personal, physical, or financial injury.
Torts are enforced via individual lawsuits rather than criminal prosecutions by the state. When someone brings a legal action under tort law, personal attorneys present the evidence and argue the details rather than representatives of the state, who prosecute criminal cases.
The categories of laws that affect the individual in the workplace are private law and public law.
Private law regulates the relationship between the individual and the organization, and encompasses family law, commercial law, and labor law.
Public law regulates the structure and administration of government agencies and their relationships with citizens, employees, and other governments, providing careful checks and balances. Examples of public law include criminal, administrative, and constitutional law

Law and Ethics
Laws are rules that mandate or prohibit certain behavior in society; ethics define socially acceptable behaviors.
The key difference between laws and ethics is that laws carry the sanctions of a governing authority and ethics do not. Ethics in turn are based on cultural mores: the fixed moral attitudes or customs of a particular group.
Some ethics are recognized as universal. For example, murder, theft, assault, and arson are commonly accepted as actions that deviate from ethical and legal codes in the civilized world.

Differences between Laws and Ethics
LAW
+Formal, documented
+Interpreted by courts
+Established by legislature representing everyone
+Applicable to everyone
+Priority determined by courts if two laws conflict
+Enforceable by police and courts

ETHICS
+Described by unwritten principles
+Interpreted by individuals
+Presented by philosophers, religions, professional group
+Personal choice
+Priority determined by individual if two principles conflict

Ethics Concept in Information Security

Ethical Differences Across Cultures
Cultural differences can make it difficult to determine what is and is not ethical, especially when considering the use of computers.
Individuals of different nationalities have different perspectives; difficulties arise when one nationality’s ethical behavior conflicts with the ethics of another national group.

For example, to Western cultures, many of the ways in which Asian cultures use computer technology constitute software piracy. This ethical conflict arises out of Asian traditions of collective ownership, which clash with the protection of intellectual property.

Software License Infringement
The individuals surveyed understood what software license infringement was, but felt either that their use was not piracy, or that their society permitted the piracy in some way. The lack of legal disincentives, the lack of punitive measures, or any of a number of other reasons could also explain why these alleged piracy centers were not oblivious to intellectual property laws.

Illicit Use
The individuals studied unilaterally condemned viruses, hacking, and other forms of system abuse as unacceptable behavior
The low overall degree of tolerance for illicit system use may be a function of the easy association between the common crimes of breaking and entering, trespassing, theft, and destruction of property to their computer-related counterparts

Misuse of Corporate Resources
Individuals displayed a rather lenient view of personal use of company equipment.
Within the general acknowledgement of ethical versus unethical behavior, views ranged as to whether some actions are moderately or highly acceptable.

Ethics and Education
Differences in the ethics of computer use are not exclusively international.
Differences are found among individuals within the same country, within the same social class, and within the same company

Deterrence to Unethical and Illegal Behavior
It is the responsibility of information security personnel to do everything in their power to deter these acts and to use policy, education and training, and technology to protect information and systems

Three general categories of unethical and illegal behavior:
Ignorance
Ignorance of the law is no excuse; however, ignorance of policy and procedures is.
Accident
Individuals with authorization and privileges to manage information within the organization are most likely to cause harm or damage by accident
Intent
Intent is often the cornerstone of legal defense, when it becomes necessary to determine whether or not the offender acted out of ignorance, by accident, or with specific intent to cause harm or damage

Deterrence
Deterrence is the best method for preventing an illegal or unethical activity. Laws, policies, and technical controls are all examples of deterrents. However, it is generally agreed that laws and policies and their associated penalties only deter if three conditions are present
Fear of penalty: The individual intending to commit the act must fear the penalty. Threats of informal reprimand or verbal warnings may not have the same impact as the threat of imprisonment or forfeiture of pay.
Probability of being caught: The individual has to believe there is a strong possibility of being caught performing the illegal or unethical act. Penalties can be severe, but the penalty will not deter the behavior unless there is an expectation of being caught.
Probability of penalty being administered: The individual must believe that the penalty will in fact be administered.
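The three conditions above can be illustrated with a toy scoring model. This is not from the source; the function and the sample belief values are purely hypothetical. The point is that a product of the three beliefs captures the idea that deterrence fails if any one condition is near zero.

```python
def deterrence_score(fear_of_penalty, prob_caught, prob_penalty_applied):
    """Toy model of the three deterrence conditions described above.

    Each input is a subjective belief in [0, 1]. Multiplying them means
    one weak link drags the whole score down, matching the claim that
    all three conditions must be present.
    """
    for v in (fear_of_penalty, prob_caught, prob_penalty_applied):
        if not 0.0 <= v <= 1.0:
            raise ValueError("beliefs must be in [0, 1]")
    return fear_of_penalty * prob_caught * prob_penalty_applied

# Severe penalty but almost no chance of being caught -> weak deterrent.
print(round(deterrence_score(0.9, 0.05, 0.9), 4))  # 0.0405
# Moderate penalty, but detection and enforcement are credible.
print(round(deterrence_score(0.5, 0.8, 0.9), 2))   # 0.36
```

Note how the second scenario deters far more despite the milder penalty, echoing the "probability of being caught" condition.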

Wireless LAN security

One issue with corporate wireless networks in general, and WLANs in particular, involves the need for security. Many early access points could not discern whether or not a particular user had authorization to access the network. Although this problem reflects issues that have long troubled many types of wired networks (it has been possible in the past for individuals to plug computers into randomly available Ethernet jacks and get access to a local network), this did not usually pose a significant problem, since many organizations had reasonably good physical security. However, the fact that radio signals bleed outside of buildings and across property lines makes physical security largely irrelevant to piggybackers.

Security options
There are three principal ways to secure a wireless network.

For closed networks (like home users and organizations) the most common way is to configure access restrictions in the access points. Those restrictions may include encryption and checks on MAC addresses. Another option is to disable ESSID broadcasting, making the access point difficult for outsiders to detect. Wireless Intrusion Prevention Systems can be used to provide wireless LAN security in this network model.

For commercial providers, hotspots, and large organizations, the preferred solution is often to have an open and unencrypted, but completely isolated, wireless network. The users at first have no access to the Internet or to any local network resources. Commercial providers usually forward all web traffic to a captive portal which provides for payment and/or authorization. Another solution is to require the users to connect securely to a privileged network using a VPN.
Wireless networks are less secure than wired ones, though wired networks have weaknesses of their own: in many offices intruders can easily walk in and hook their own computer up to the wired network without problems, gaining access to the network, and it is also often possible for remote intruders to gain access through backdoors like Back Orifice. One general solution may be end-to-end encryption, with independent authentication on all resources that should not be available to the public.
Access Control at the Access Point level
One of the simplest techniques is to only allow access from known, approved MAC addresses. However, this approach gives no security against sniffing, and client devices can easily spoof MAC addresses, leading to the need for more advanced security measures.
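The weakness described above can be made concrete with a minimal sketch of an AP-side allow-list check (the function name and MAC addresses are hypothetical): the AP can only test the MAC a client claims, so a spoofed address passes the same test.

```python
# Minimal sketch of AP-side MAC filtering (names and addresses hypothetical).
# The check is trivial -- which is exactly why a spoofed MAC defeats it.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def admit(client_mac: str) -> bool:
    """Admit a client only if its (claimed!) MAC is on the allow-list."""
    return client_mac.lower() in ALLOWED_MACS

print(admit("00:1A:2B:3C:4D:5E"))  # True  -- legitimate client
print(admit("de:ad:be:ef:00:01"))  # False -- unknown device
# An attacker who sniffs a legitimate MAC simply presents it:
print(admit("00:1a:2b:3c:4d:5e"))  # True  -- spoofed, indistinguishable to the AP
```

The AP has no way to distinguish the third call from the first, which is why the text recommends more advanced measures.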

Another very simple technique is to have a secret ESSID (id/name of the wireless network), though anyone who studies the method will be able to sniff the ESSID.

Today all (or almost all) access points incorporate Wired Equivalent Privacy (WEP) encryption and most wireless routers are sold with WEP turned on. However, security analysts have criticized WEP's inadequacies, and the U.S. FBI has demonstrated the ability to break WEP protection in only three minutes using tools available to the general public (see aircrack).

The Wi-Fi Protected Access (WPA and WPA2) security protocols were later created to address these problems. If a weak password, such as a dictionary word or short character string is used, WPA and WPA2 can be cracked. Using a long enough random password (e.g. 14 random letters) or passphrase (e.g. 5 randomly chosen words) makes pre-shared key WPA virtually uncrackable. The second generation of the WPA security protocol (WPA2) is based on the final IEEE 802.11i amendment to the 802.11 standard and is eligible for FIPS 140-2 compliance. With all those encryption schemes, any client in the network that knows the keys can read all the traffic.
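The passphrase advice above can be checked with back-of-envelope entropy arithmetic. The sketch below assumes a 100,000-word list for the five-word passphrase; real word lists vary widely, so treat the numbers as illustrative.

```python
import math

# Back-of-envelope entropy for the two passphrase styles mentioned above.
letters_entropy = 14 * math.log2(26)      # 14 random lowercase letters
# Word-list size is an assumption; diceware-style lists vary widely.
words_entropy = 5 * math.log2(100_000)    # 5 words from a 100k-word list

print(f"14 random letters: {letters_entropy:.0f} bits")  # ~66 bits
print(f"5 random words:    {words_entropy:.0f} bits")    # ~83 bits
```

Both are far beyond practical brute-force range for a pre-shared key, which is why either style is described as making WPA "virtually uncrackable", while a single dictionary word (around 17 bits against a 100k-word list) is not.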

Restricted access networks
Solutions include a newer system for authentication, IEEE 802.1X, which promises to enhance security on both wired and wireless networks. Wireless access points that incorporate technologies like these often also have routers built in, thus becoming wireless gateways.

End-to-End encryption
One can argue that both layer 2 and layer 3 encryption methods are not good enough for protecting valuable data like passwords and personal emails. Those technologies add encryption only to parts of the communication path, still allowing people to spy on the traffic if they have gained access to the wired network somehow. The solution may be encryption and authorization in the application layer, using technologies like SSL, SSH, GnuPG, PGP and similar.
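As a minimal sketch of the application-layer approach, Python's standard ssl module can wrap an ordinary socket in TLS. The hostname is a placeholder and no network traffic is sent here; the point is that the default context enables certificate verification and hostname checking, which is what makes the channel trustworthy end to end.

```python
import socket
import ssl

# Application-layer protection with TLS, as suggested above.
# create_default_context() turns on certificate verification and
# hostname checking by default.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Wrapping a socket attaches TLS; the handshake runs on connect().
# (The connect call is omitted here, so no network I/O happens.)
tls_sock = ctx.wrap_socket(socket.socket(), server_hostname="example.com")
tls_sock.close()
```

Unlike link-layer encryption (WEP/WPA), this protection survives the wired segment of the path, addressing the spying scenario described above.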

The disadvantage of the end-to-end method is that it may fail to cover all traffic. With encryption at the router level or via VPN, a single switch encrypts all traffic, even UDP and DNS lookups. With end-to-end encryption, on the other hand, each service to be secured must have its encryption "turned on," and often every connection must also be "turned on" separately. For sending emails, every recipient must support the encryption method and must exchange keys correctly. For the Web, not all web sites offer HTTPS, and even if they do, the browser sends out IP addresses in clear text.

The most prized resource is often access to the Internet. An office LAN owner seeking to restrict such access faces the non-trivial enforcement task of having each user authenticate to the router.

Wireless communication

Wireless communication is the transfer of information over a distance without the use of electrical conductors or "wires". Wireless technologies include GPS units, garage door openers, wireless computer mice, keyboards and headsets, satellite television and cordless telephones.

Wireless networking
Wireless networking (i.e. the various types of unlicensed 2.4 GHz WiFi devices)
is used to meet many needs. Perhaps the most common use is to connect laptop
users who travel from location to location. Another common use is for mobile
networks that connect via satellite. A wireless transmission method is a logical
choice to network a LAN segment that must frequently change locations.

Wireless LAN
A wireless local area network (WLAN) links two or more devices using some wireless distribution method, and usually provides a connection through an access point to the wider Internet. This gives users the mobility to move around within a local coverage area and still be connected to the network. A notebook, for example, can connect to the wireless access point using a PC card wireless adapter.

Types of wireless LANs

Peer-to-peer
An ad-hoc network is a network where stations communicate only peer to peer (P2P). There is no base station, and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).

A peer-to-peer (P2P) network allows wireless devices to communicate directly with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers to connect to each other and form a network.

Bridge
A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.

Wireless distribution system
A Wireless Distribution System is a system that enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client packets across links between access points.

WLAN standards
Several standards for WLAN hardware exist:

802.11a, b, and g
The 802.11a, b, and g standards are the most common for home wireless access points and large business wireless systems. The differences are:

•802.11a: With data transfer rates up to 54Mbps, it is faster than 802.11b and can support more simultaneous connections. Because it operates in a more regulated frequency, it gets less signal interference from other devices and is considered to be better at maintaining connections. In areas with major radio interference (e.g., airports, business call centers), 802.11a will outperform 802.11b. It has the shortest range of the three standards (generally around 60 to 100 feet), broadcasts in the 5GHz frequency, and is less able to penetrate physical barriers, such as walls.

•802.11b: It supports data transfer speeds up to 11Mbps. It's better than 802.11a at penetrating physical barriers, but doesn't support as many simultaneous connections. It has better range than 802.11a (up to 300 feet in ideal circumstances; tests by independent reviewers commonly achieve between 70 and 150 feet), and uses hardware that tends to be less expensive. It's more susceptible to interference, because it operates on the same frequency (2.4GHz) as many cordless phones and other appliances. Therefore, it's not considered a good technology for applications that require absolutely reliable connections, such as live video streaming.

•802.11g: It's faster than 802.11b, supporting data transfer rates up to 54Mbps. It has a slightly shorter range than 802.11b, but still better than 802.11a. Most independent reviews report around 65 to 120 feet in real-world situations. It is backward-compatible with 802.11b products, but will run only at 802.11b speeds when operating with them. It uses the 2.4GHz frequency, so it has the same problems with interference as 802.11b.
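The figures quoted in the three bullets above can be collected into a small lookup table, with a helper to compare them (the table simply restates the text's numbers; the structure itself is illustrative):

```python
# The figures quoted above, collected into one table for comparison.
STANDARDS = {
    "802.11a": {"max_mbps": 54, "band_ghz": 5.0, "typ_range_ft": (60, 100)},
    "802.11b": {"max_mbps": 11, "band_ghz": 2.4, "typ_range_ft": (70, 150)},
    "802.11g": {"max_mbps": 54, "band_ghz": 2.4, "typ_range_ft": (65, 120)},
}

def fastest(standards):
    """Return the standard(s) with the highest quoted data rate."""
    top = max(s["max_mbps"] for s in standards.values())
    return sorted(name for name, s in standards.items() if s["max_mbps"] == top)

print(fastest(STANDARDS))  # ['802.11a', '802.11g'] -- both quote 54 Mbps
```

The table makes the trade-off above easy to see: 802.11a trades range and wall penetration for the less crowded 5 GHz band, while b and g share the interference-prone 2.4 GHz band.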

802.11n
The Institute of Electrical and Electronics Engineers (IEEE) has not yet ratified the 802.11n standard. Because of this, some manufacturers advertise their 802.11n equipment as "draft" devices.

Though specifications may change once the standard is finalized, it is expected to allow data transfer rates up to 600Mbps. Product manufacturers are advertising ranges twice as large as those of 802.11b/g devices, but as with any wireless devices, range ultimately depends more on the manufacturer and the environment than the standard.

Monday, October 26, 2009

IEEE 802.11

IEEE 802.11 is a set of standards for carrying out wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are maintained by the IEEE LAN/MAN Standards Committee (IEEE 802).

IEEE 802.11a-1999

IEEE 802.11a-1999 or 802.11a is an amendment to the IEEE 802.11 specification that added a higher data rate of up to 54 Mbit/s using the 5 GHz band. It has seen widespread worldwide implementation, particularly within the corporate workspace. The amendment has been incorporated into the published IEEE 802.11-2007 standard.

802.11 is a set of IEEE standards that govern wireless networking transmission methods. They are commonly used today in their 802.11a, 802.11b, 802.11g and 802.11n versions to provide wireless connectivity in the home, office and some commercial establishments.

802.11 Architecture

The 802.11 logical architecture contains several main components: station (STA), wireless access point (AP), independent basic service set (IBSS), basic service set (BSS), distribution system (DS), and extended service set (ESS). Some of the components of the 802.11 logical architecture map directly to hardware devices, such as STAs and wireless APs. The wireless STA contains an adapter card, PC Card, or an embedded device to provide wireless connectivity. The wireless AP functions as a bridge between the wireless STAs and the existing network backbone for network access.

An IBSS is a wireless network, consisting of at least two STAs, used where no access to a DS is available. An IBSS is also sometimes referred to as an ad hoc wireless network.

A BSS is a wireless network, consisting of a single wireless AP supporting one or multiple wireless clients. A BSS is also sometimes referred to as an infrastructure wireless network. All STAs in a BSS communicate through the AP. The AP provides connectivity to the wired LAN and provides bridging functionality when one STA initiates communication to another STA or a node on the DS.
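The AP-mediated communication described above can be sketched as a toy model (the class names are illustrative, not from any real 802.11 implementation): stations never exchange frames directly; everything transits the AP.

```python
# Toy model of an infrastructure BSS: every frame goes
# station -> AP -> destination, never station -> station.
class AccessPoint:
    def __init__(self):
        self.stations = {}

    def associate(self, station):
        self.stations[station.name] = station

    def forward(self, src_name, dst_name, payload):
        """Relay a frame between two associated stations."""
        if dst_name not in self.stations:
            raise KeyError(f"{dst_name} is not associated with this BSS")
        self.stations[dst_name].inbox.append((src_name, payload))

class Station:
    def __init__(self, name, ap):
        self.name, self.ap, self.inbox = name, ap, []
        ap.associate(self)

    def send(self, dst_name, payload):
        # Even for a peer in the same BSS, traffic transits the AP --
        # which is why intra-BSS traffic costs twice the airtime.
        self.ap.forward(self.name, dst_name, payload)

ap = AccessPoint()
alice, bob = Station("alice", ap), Station("bob", ap)
alice.send("bob", "hello")
print(bob.inbox)  # [('alice', 'hello')]
```

The double hop modeled here is the bandwidth cost discussed later in the Components section, and the AP's central position is also what lets it buffer traffic for power-saving stations.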

An ESS is a set of two or more wireless APs connected to the same wired network that defines a single logical network segment bounded by a router (also known as a subnet).

The APs of multiple BSSs are interconnected by the DS. This allows for mobility, because STAs can move from one BSS to another BSS. APs can be interconnected with or without wires; however, most of the time they are connected with wires. The DS is the logical component used to interconnect BSSs. The DS provides distribution services to allow for the roaming of STAs between BSSs.

Architecture
The architecture is designed to support a network where the mobile station is responsible for the decision making.
Advantages are:

-very tolerant of faults in all of the WLAN equipment
-eliminates any possible bottlenecks a centralized architecture would introduce
The architecture also has power-saving modes of operation built into the protocol, to prolong the battery life of mobile equipment without losing network connectivity.

Components
Station (STA): the component that connects to the wireless medium. Supported services are authentication, deauthentication, privacy, and delivery of data.
Basic Service Set (BSS): a set of stations that communicate with one another. A BSS does not generally refer to a particular area, due to the uncertainties of electromagnetic propagation. When all of the stations in the BSS are mobile stations and there is no connection to a wired network, the BSS is called an independent BSS (IBSS). An IBSS is typically a short-lived network, with a small number of stations, created for a particular purpose. When a BSS includes an access point (AP), the BSS is called an infrastructure BSS.

When there is an AP, if one mobile station in the BSS must communicate with another mobile station, the communication is sent first to the AP and then from the AP to the other mobile station. This consumes twice the bandwidth that the same communication would consume if sent directly from one station to the other. While this appears to be a significant cost, the benefits provided by the AP far outweigh it. One of these benefits is that the AP buffers traffic for a mobile station while that station is operating in a very low-power state.

Extended Service Set (ESS): a set of infrastructure BSSs, where the APs communicate among themselves to forward traffic from one BSS to another and to facilitate the movement of mobile stations between BSSs. The APs perform this communication via an abstract medium called the distribution system (DS). To network equipment outside of the ESS, the ESS and all of its mobile stations appear to be a single MAC-layer network in which all stations are physically stationary. Thus, the ESS hides the mobility of the mobile stations from everything outside the ESS.

Distribution System (DS): the mechanism by which one AP communicates with another to exchange frames for stations in their BSSs, forward frames to follow mobile stations as they move from one BSS to another, and exchange frames with a wired network.

Services
- Station Services: Authentication, De-authentication, privacy, delivery of data
- Distribution Services: Association, Disassociation, Reassociation, Distribution, Integration

Station Services: functions similar to those that are expected of a wired network. The wired-network function of physically connecting to the network cable is similar to the authentication and deauthentication services. Privacy provides data security. Data delivery is the reliable delivery of data frames from the MAC in one station to the MAC in one or more other stations, with minimal duplication and minimal reordering.

Distribution Services: provide the services necessary to allow mobile stations to roam freely within an ESS and allow an IEEE 802.11 WLAN to connect with the wired LAN infrastructure. They form a thin layer between the MAC and LLC sublayers that is invoked to determine how to forward frames within the IEEE 802.11 WLAN and how to deliver frames from the IEEE 802.11 WLAN to network destinations outside of the WLAN.

Sunday, October 25, 2009

WEP IEEE 802.11

WEP
WEP provides data confidentiality services by encrypting the data sent between wireless nodes. Setting a WEP flag in the MAC header of the 802.11 frame indicates that the frame is encrypted with WEP encryption. WEP provides data integrity by including an integrity check value (ICV) in the encrypted portion of the wireless frame.

WEP defines two shared keys:
Multicast/global key. The multicast/global key is an encryption key that protects multicast and broadcast traffic from a wireless AP to all of its connected wireless clients.

Unicast session key. The unicast session key is an encryption key that protects unicast traffic between a wireless client and a wireless AP and multicast and broadcast traffic sent by the wireless client to the wireless AP.

WEP encryption uses the RC4 symmetric stream cipher with 40-bit and 104-bit encryption keys. Although 104-bit encryption keys are not specified in the 802.11 standard, many wireless AP vendors support them.

WEP Encryption
The WEP encryption process is shown in the following figure.

WEP Encryption Process
To encrypt the payload of an 802.11 frame, the following process is used:

1. A 32-bit integrity check value (ICV) is calculated for the frame data.
2. The ICV is appended to the end of the frame data.
3. A 24-bit initialization vector (IV) is generated and appended to the WEP encryption key.
4. The combination of the initialization vector and the WEP encryption key is used as the input of a pseudo-random number generator (PRNG) to generate a bit sequence that is the same size as the combination of the data and the ICV.
5. The PRNG bit sequence, also known as the key stream, is bit-wise exclusive ORed (XORed) with the combination of the data and the ICV to produce the encrypted portion of the payload that is sent between the wireless access point (AP) and the wireless client.
6. To create the payload for the wireless MAC frame, the IV is added to the front of the encrypted combination of the data and ICV, along with other fields.
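The six steps above can be sketched in a few lines of Python. This is a teaching model, not an interoperable WEP implementation: the RC4 routine is a textbook version, zlib.crc32 stands in for the ICV calculation, and the key and IV values are made up.

```python
# Illustrative sketch of the WEP encryption steps (hedged: toy model).
import struct
import zlib

def rc4(key, data):
    """Textbook RC4: key scheduling, then keystream XORed over data."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + s[i] + key[i % len(key)]) & 0xFF
        s[i], s[j] = s[j], s[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) & 0xFF
        j = (j + s[i]) & 0xFF
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) & 0xFF])
    return bytes(out)

def wep_encrypt(wep_key, iv, plaintext):
    """Steps 1-6: append ICV, seed RC4 with IV||key, XOR, prepend IV."""
    icv = struct.pack("<I", zlib.crc32(plaintext) & 0xFFFFFFFF)  # steps 1-2
    keystream_seed = iv + wep_key                                # step 3
    ciphertext = rc4(keystream_seed, plaintext + icv)            # steps 4-5
    return iv + ciphertext                                       # step 6

# 40-bit key plus 24-bit IV, as described above (values are arbitrary).
payload = wep_encrypt(b"\x01\x02\x03\x04\x05", b"\xaa\xbb\xcc", b"hello")
assert payload[:3] == b"\xaa\xbb\xcc"   # the IV travels in the clear
```

Note that the IV is sent unencrypted in front of the ciphertext; the receiver needs it to regenerate the same keystream.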

WEP Decryption
The WEP decryption process is shown in the following figure.

WEP Decryption Process
To decrypt the 802.11 frame data, the following process is used:

1. The initialization vector (IV) is obtained from the front of the MAC payload.
2. The IV is appended to the WEP encryption key.
3. The combination of the initialization vector and the WEP encryption key is used as the input of the same PRNG to generate a bit sequence of the same size as the combination of the data and the ICV. This process produces the same key stream as that of the sending wireless node.
4. The PRNG bit sequence is XORed with the encrypted combination of the data and ICV to decrypt the combined data and ICV portion of the payload.
5. The ICV calculation for the data portion of the payload is run, and its result is compared with the value included in the incoming frame. If the values match, the data is considered to be valid (sent from the wireless client and unmodified in transit). If they do not match, the frame is silently discarded.
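The receiving side of the same toy model can be sketched as follows. It assumes the frame layout from the encryption sketch, [3-byte IV][ciphertext of data||4-byte ICV], regenerates the keystream from the received IV plus the shared key, and silently discards the frame on an ICV mismatch.

```python
# Companion sketch for the WEP decryption steps (hedged: toy model).
import struct
import zlib

def rc4(key, data):  # same textbook cipher as on the sending side
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) & 0xFF
        s[i], s[j] = s[j], s[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + s[i]) & 0xFF
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) & 0xFF])
    return bytes(out)

def wep_decrypt(wep_key, frame):
    iv, ciphertext = frame[:3], frame[3:]          # step 1
    plain = rc4(iv + wep_key, ciphertext)          # steps 2-4
    data, icv = plain[:-4], plain[-4:]
    expected = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    if icv != expected:                            # step 5
        return None                                # silently discard
    return data

key = b"\x01\x02\x03\x04\x05"
icv = struct.pack("<I", zlib.crc32(b"hello") & 0xFFFFFFFF)
frame = b"\xaa\xbb\xcc" + rc4(b"\xaa\xbb\xcc" + key, b"hello" + icv)
assert wep_decrypt(key, frame) == b"hello"
```

Flipping even one ciphertext byte makes the recomputed CRC disagree with the recovered ICV, so the tampered frame is dropped.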

Security Issues with WEP and IEEE 802.11
The main problem with WEP is that the determination and distribution of WEP encryption keys are not defined. WEP keys must be distributed by using a secure channel outside of the 802.11 protocol. In practice, WEP keys are text strings that must be manually configured using a keyboard for both the wireless AP and wireless clients. However, this key distribution system does not scale well to an enterprise organization and is not secure.

Additionally, there is no defined mechanism for changing the WEP encryption keys either per authentication or periodically for an authenticated connection. All wireless APs and clients use the same manually configured WEP key for multiple sessions. With multiple wireless clients sending a large amount of data, an attacker can remotely capture large amounts of WEP ciphertext and use cryptanalysis methods to determine the WEP key.

The lack of a WEP key management protocol is a principal limitation of 802.11 security, especially in infrastructure mode with a large number of stations. Some examples of this type of network include corporate and educational campuses and public places such as airports and malls. The lack of automated authentication and key determination services also affects operation in ad hoc mode, which users might want to use for peer-to-peer collaborative communication in areas such as conference rooms.

WPA
Although 802.1X addresses many of the security issues of the original 802.11 standard, issues still exist with regard to weaknesses in the WEP encryption and data integrity methods. The long-term solution to these problems is the IEEE 802.11i standard, which is currently in draft form.

Until the IEEE 802.11i standard is ratified, wireless vendors have agreed on an interoperable interim standard known as Wi-Fi Protected Access (WPA). The goals of WPA are the following:

To require secure wireless networking. WPA requires secure wireless networking by requiring 802.1X authentication, encryption, and unicast and multicast/global encryption key management.

To address WEP issues with a software upgrade. The implementation of the RC4 stream cipher within WEP is vulnerable to known plaintext attacks. Additionally, the data integrity provided with WEP is relatively weak. WPA solves all the remaining security issues with WEP, yet only requires firmware updates in wireless equipment and an update for wireless clients. Existing wireless equipment is not expected to require replacement.

To provide a secure wireless networking solution for small office/home office (SOHO) wireless users. For the SOHO, there is no RADIUS server to provide 802.1X authentication with an EAP type. SOHO wireless clients must use either shared key authentication (highly discouraged) or open system authentication (recommended) with a single static WEP key for both unicast and multicast traffic. WPA provides a pre-shared key option intended for SOHO configurations. The pre-shared key is configured on the wireless AP and each wireless client. The initial unicast encryption key is derived from the authentication process, which verifies that both the wireless client and the wireless AP have the pre-shared key.


To be compatible with the upcoming IEEE 802.11i standard. WPA is a subset of the security features in the proposed IEEE 802.11i standard. All the features of WPA are described in the current draft of the 802.11i standard.


To be available today. WPA upgrades to wireless equipment and for wireless clients were available beginning in February 2003.
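The pre-shared key option mentioned in the SOHO goal above can be made concrete. In WPA's pre-shared key mode, the passphrase and the network name (SSID) are stretched with PBKDF2-HMAC-SHA1 (4096 iterations) into a 256-bit key that both the AP and each client must hold; the passphrase and SSID below are invented examples.

```python
# Sketch of WPA pre-shared key derivation (PBKDF2-HMAC-SHA1, 4096
# rounds, 32-byte output). Passphrase and SSID here are made up.
import hashlib

def wpa_psk(passphrase, ssid):
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, dklen=32)

psk = wpa_psk("correct horse battery", "HomeNetwork")
assert len(psk) == 32    # 256-bit pre-shared key
# Both the AP and the client derive the same key from the same inputs.
assert psk == wpa_psk("correct horse battery", "HomeNetwork")
```

Because the SSID salts the derivation, the same passphrase yields different keys on different networks, which blunts precomputed-dictionary attacks somewhat.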

WPA Security Features
WPA contains enhancements or replacements for the following security features:

Authentication
Encryption
Data integrity

Authentication
With 802.11, 802.1X authentication is optional; with WPA, 802.1X authentication is required. Authentication with WPA is a combination of open system and 802.1X authentication, which uses the following phases:

The first phase uses open system authentication to indicate to the wireless client that it can send frames to the wireless AP.

The second phase uses 802.1X to perform a user-level authentication. For environments without a RADIUS infrastructure, WPA supports the use of a pre-shared key; for environments with a RADIUS infrastructure, WPA supports EAP and RADIUS.


Encryption
With 802.1X, rekeying of unicast encryption keys is optional. Additionally, 802.11 and 802.1X provide no mechanism to change the global encryption key that is used for multicast and broadcast traffic. With WPA, rekeying of both unicast and global encryption keys is required. The Temporal Key Integrity Protocol (TKIP) changes the unicast encryption key for every frame, and each change is synchronized between the wireless client and the wireless AP. For the multicast/global encryption key, WPA includes a facility for the wireless AP to advertise changes to the connected wireless clients.

TKIP
For 802.11, WEP encryption is optional. For WPA, encryption using TKIP is required. TKIP replaces WEP with a new encryption algorithm that is stronger than the WEP algorithm, yet can be performed using the calculation facilities present on existing wireless hardware. TKIP also provides for the following:

The verification of the security configuration after the encryption keys are determined.


The synchronized changing of the unicast encryption key for each frame.

The determination of a unique starting unicast encryption key for each pre-shared key authentication.

AES
WPA defines the use of the Advanced Encryption Standard (AES) as an optional replacement for WEP encryption. Because adding AES support by using a firmware update might not be possible for existing wireless equipment, support for AES on wireless network adapters and wireless APs is not required.

Data Integrity
With 802.11 and WEP, data integrity is provided by a 32-bit ICV that is appended to the 802.11 payload and encrypted with WEP. Although the ICV is encrypted, it is possible through cryptanalysis to change bits in the encrypted payload and update the encrypted ICV without being detected by the receiver.
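The bit-flipping weakness described above follows from CRC-32 being linear over XOR: for equal-length messages, crc(x ^ d) == crc(x) ^ crc(d) ^ crc(zeros). The sketch below simulates the stream encryption with a random keystream standing in for RC4, then shows an attacker flipping plaintext bits and patching the encrypted ICV using only the public difference, with no key or plaintext knowledge; all message values are invented.

```python
# Sketch: patching the encrypted CRC-32 ICV without knowing the key.
import os
import struct
import zlib

def crc(b):
    return zlib.crc32(b) & 0xFFFFFFFF

data = b"amount=0010"
want = b"amount=9910"
delta = bytes(a ^ b for a, b in zip(data, want))   # public difference

# "Encrypt": XOR with a random keystream (stands in for RC4 output).
stream = os.urandom(len(data) + 4)
icv = struct.pack("<I", crc(data))
ct = bytes(a ^ b for a, b in zip(data + icv, stream))

# Attacker XORs the delta into the ciphertext and, using CRC-32
# linearity, patches the encrypted ICV to match.
icv_delta = struct.pack("<I", crc(delta) ^ crc(b"\x00" * len(delta)))
forged = bytes(a ^ b for a, b in zip(ct, delta + icv_delta))

# Receiver decrypts; the modified frame still passes the ICV check.
plain = bytes(a ^ b for a, b in zip(forged, stream))
new_data, new_icv = plain[:-4], plain[-4:]
assert new_data == want
assert new_icv == struct.pack("<I", crc(new_data))
```

This is exactly the gap Michael's keyed MIC is meant to close: unlike a CRC, the MIC cannot be patched by XOR arithmetic alone.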

With WPA, a method known as Michael specifies a new algorithm that calculates an 8-byte message integrity code (MIC) with the calculation facilities available on existing wireless hardware. The MIC is placed between the data portion of the 802.11 frame and the 4-byte ICV. The MIC field is encrypted along with the frame data and the ICV.

Michael also provides replay protection through the use of a frame counter field in the 802.11 MAC header.

Saturday, October 10, 2009

Network Security

Introduction to Network Security

Abstract:

Network security is a complicated subject, historically only tackled by well-trained and experienced experts. However, as more and more people become wired, an increasing number of people need to understand the basics of security in a networked world. This document was written with the basic computer user and information systems manager in mind, explaining the concepts needed to read through the hype in the marketplace and understand risks and how to deal with them.
Some history of networking is included, as well as an introduction to TCP/IP and internetworking. We go on to consider risk management, network threats, firewalls, and more special-purpose secure networking devices.
This is not intended to be a ``frequently asked questions'' reference, nor is it a hands-on document describing how to accomplish specific functionality.
It is hoped that the reader will have a wider perspective on security in general, and better understand how to reduce and manage risk personally, at home, and in the workplace.

Introduction to Networking

A basic understanding of computer networks is requisite in order to understand the principles of network security. In this section, we'll cover some of the foundations of computer networking, then move on to an overview of some popular networks. Following that, we'll take a more in-depth look at TCP/IP, the network protocol suite that is used to run the Internet and many intranets.
Once we've covered this, we'll go back and discuss some of the threats that managers and administrators of computer networks need to confront, and then some tools that can be used to reduce the exposure to the risks of network computing.
What is a Network?

A network has been defined as ``any set of interlinking lines resembling a net, a network of roads; an interconnected system, a network of alliances.'' This definition suits our purpose well: a computer network is simply a system of interconnected computers. How they're connected is irrelevant, and as we'll soon see, there are a number of ways to do this.

The ISO/OSI Reference Model

The International Organization for Standardization (ISO) Open Systems Interconnect (OSI) Reference Model defines seven layers of communications types, and the interfaces among them. (See Figure 1.) Each layer depends on the services provided by the layer below it, all the way down to the physical network hardware, such as the computer's network interface card, and the wires that connect the cards together.
An easy way to look at this is to compare this model with something we use daily: the telephone. In order for you and I to talk when we're out of earshot, we need a device like a telephone. (In the ISO/OSI model, this is at the application layer.) The telephones, of course, are useless unless they have the ability to translate the sound into electronic pulses that can be transferred over wire and back again. (These functions are provided in layers below the application layer.) Finally, we get down to the physical connection: both must be plugged into an outlet that is connected to a switch that's part of the telephone system's network of switches.
If I place a call to you, I pick up the receiver and dial your number. This number specifies which central office to send my request to, and then which phone connected to that central office to ring. Once you answer the phone, we begin talking, and our session has begun. Conceptually, computer networks function exactly the same way.
It isn't important for you to memorize the ISO/OSI Reference Model's layers; but it's useful to know that they exist, and that each layer cannot work without the services provided by the layer below it.

Figure 1: The ISO/OSI Reference Model

What are some Popular Networks?

Over the last 25 years or so, a number of networks and network protocols have been defined and used. We're going to look at two of these networks, both of which are ``public'' networks. Anyone can connect to either of these networks, or they can use the same types of networks to connect their own hosts (computers) together, without connecting to the public networks. Each type takes a very different approach to providing network services.
UUCP

UUCP (Unix-to-Unix CoPy) was originally developed to connect Unix (surprise!) hosts together. UUCP has since been ported to many different architectures, including PCs, Macs, Amigas, Apple IIs, VMS hosts, everything else you can name, and even some things you can't. Additionally, a number of systems have been developed around the same principles as UUCP.

Batch-Oriented Processing.

UUCP and similar systems are batch-oriented systems: everything that they have to do is added to a queue, and then at some specified time, everything in the queue is processed.
Implementation Environment.

UUCP networks are commonly built using dial-up (modem) connections. This doesn't have to be the case though: UUCP can be used over any sort of connection between two computers, including an Internet connection.

Building a UUCP network is a simple matter of configuring two hosts to recognize each other, and know how to get in touch with each other. Adding on to the network is simple; if hosts called A and B have a UUCP network between them, and C would like to join the network, then it must be configured to talk to A and/or B. Naturally, anything that C talks to must be made aware of C's existence before any connections will work. Now, to connect D to the network, a connection must be established with at least one of the hosts on the network, and so on.
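The A/B/C/D example above amounts to a graph where each host only knows its directly configured neighbors, and a queued job reaches a distant host hop by hop. A small sketch, with the host names taken from the example and the rest invented:

```python
# Hypothetical sketch of the UUCP topology described in the text.
from collections import deque

# Each host lists the hosts it is configured to talk to (symmetric).
neighbors = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A", "D"},   # C joined the network by being configured for A
    "D": {"C"},        # D joined via C
}

def route(src, dst):
    """Breadth-first search for the hop-by-hop path a queued job takes."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# A job from D for B is relayed through C and A.
assert route("D", "B") == ["D", "C", "A", "B"]
```

This also illustrates the batch-oriented nature of UUCP: nothing moves until each intermediate host processes its queue and forwards the job to the next hop.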

The Internet
Internet: This is a word that I've heard way too often in the last few years. Movies, books, newspapers, magazines, television programs, and practically every other sort of media imaginable has dealt with the Internet recently.

What is the Internet?
The Internet is the world's largest network of networks. When you want to access the resources offered by the Internet, you don't really connect to the Internet; you connect to a network that is eventually connected to the Internet backbone, a network of extremely fast (and incredibly overloaded!) network components. This is an important point: the Internet is a network of networks -- not a network of hosts.
A simple network can be constructed using the same protocols and such that the Internet uses without actually connecting it to anything else. Such a basic network is shown in Figure 2.


Figure 2: A Simple Local Area Network

I might be allowed to put one of my hosts on one of my employer's networks. We have a number of networks, which are all connected together on a backbone, that is, a network of our networks. Our backbone is then connected to other networks, one of which belongs to an Internet Service Provider (ISP) whose backbone is connected to other networks, one of which is the Internet backbone.
If you have a connection ``to the Internet'' through a local ISP, you are actually connecting your computer to one of their networks, which is connected to another, and so on. To use a service from my host, such as a web server, you would tell your web browser to connect to my host. Underlying services and protocols would send packets (small datagrams) with your query to your ISP's network, and then a network they're connected to, and so on, until it found a path to my employer's backbone, and to the exact network my host is on. My host would then respond appropriately, and the same would happen in reverse: packets would traverse all of the connections until they found their way back to your computer, and you were looking at my web page.

In Figure 3, the network shown in Figure 2 is designated ``LAN 1'' and shown in the bottom-right of the picture. This shows how the hosts on that network are provided connectivity to other hosts on the same LAN, within the same company, outside of the company but in the same ISP cloud, and then from another ISP somewhere on the Internet.


Figure 3: A Wider View of Internet-connected Networks

The Internet is made up of a wide variety of hosts, from supercomputers to personal computers, including every imaginable type of hardware and software. How do all of these computers understand each other and work together?

TCP/IP: The Language of the Internet
TCP/IP (Transmission Control Protocol/Internet Protocol) is the ``language'' of the Internet. Anything that can learn to ``speak TCP/IP'' can play on the Internet. This functionality occurs at the Network (IP) and Transport (TCP) layers in the ISO/OSI Reference Model. Consequently, a host that has TCP/IP functionality (such as Unix, OS/2, MacOS, or Windows NT) can easily support applications (such as Netscape's Navigator) that use the network.

Open Design
One of the most important features of TCP/IP isn't a technological one: The protocol is an ``open'' protocol, and anyone who wishes to implement it may do so freely. Engineers and scientists from all over the world participate in the IETF (Internet Engineering Task Force) working groups that design the protocols that make the Internet work. Their time is typically donated by their companies, and the result is work that benefits everyone.

IP
As noted, IP is a ``network layer'' protocol. This is the layer that allows the hosts to actually ``talk'' to each other. It handles such things as carrying datagrams, mapping an Internet address (such as 10.2.3.4) to a physical network address (such as 08:00:69:0a:ca:8f), and routing, which makes sure that all of the devices that have Internet connectivity can find the way to each other.

Understanding IP
IP has a number of very important features which make it an extremely robust and flexible protocol. For our purposes, though, we're going to focus on the security of IP, or more specifically, the lack thereof.

Attacks Against IP
A number of attacks against IP are possible. Typically, these exploit the fact that IP does not provide a robust mechanism for authentication, that is, proving that a packet came from where it claims it did. A packet simply claims to originate from a given address, and there isn't a way to be sure that the host that sent the packet is telling the truth. This isn't necessarily a weakness per se, but it is an important point, because it means that the facility of host authentication has to be provided at a higher layer of the ISO/OSI Reference Model. Today, applications that require strong host authentication (such as cryptographic applications) do this at the application layer.

IP Spoofing.
This is where one host claims to have the IP address of another. Since many systems (such as router access control lists) define which packets may and which packets may not pass based on the sender's IP address, this is a useful technique to an attacker: he can send packets to a host, perhaps causing it to take some sort of action.
Additionally, some applications allow login based on the IP address of the person making the request (such as the Berkeley r-commands) [2]. These are both good examples of how trusting an untrustworthy layer can provide security that is -- at best -- weak.
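To see why spoofing is so easy, it helps to look at what an IPv4 header actually is: 20 bytes the sender fills in, with nothing proving the source address. The sketch below builds such a header by hand, including the one's-complement checksum; the addresses are documentation examples, and real spoofing would additionally require raw-socket access.

```python
# Sketch: a hand-built IPv4 header whose source address is whatever
# the sender writes there (addresses are documentation examples).
import socket
import struct

def ipv4_header(src, dst, payload_len):
    ver_ihl, tos, ident, flags_frag, ttl, proto = 0x45, 0, 0, 0, 64, 6
    total = 20 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total, ident,
                      flags_frag, ttl, proto, 0,
                      socket.inet_aton(src), socket.inet_aton(dst))
    # Header checksum: one's-complement sum of the 16-bit words.
    s = sum(struct.unpack("!10H", hdr))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    return hdr[:10] + struct.pack("!H", ~s & 0xFFFF) + hdr[12:]

# The "source" field is simply a claim made by the sender.
forged = ipv4_header("192.0.2.99", "198.51.100.7", 0)
assert socket.inet_ntoa(forged[12:16]) == "192.0.2.99"
```

A receiver can verify the checksum, but the checksum only protects against transmission errors; it says nothing about whether the source address is honest.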
IP Session Hijacking.

This is a relatively sophisticated attack, first described by Steve Bellovin [3]. It is very dangerous, however, because there are now toolkits available in the underground community that allow otherwise unskilled bad-guy-wannabes to perpetrate this attack. IP session hijacking is an attack whereby a user's session is taken over and placed under the control of the attacker. If the user was in the middle of reading email, the attacker is looking at the email, and can then execute any commands he wishes as the attacked user. The attacked user simply sees his session dropped, and may log in again, perhaps not even noticing that the attacker is still logged in and doing things.

For the description of the attack, let's return to our large network of networks in Figure 3. In this attack, a user on host A is carrying on a session with host G. Perhaps this is a telnet session, where the user is reading his email, or using a Unix shell account from home. Somewhere in the network between A and G sits host H, which is run by a naughty person. The naughty person on host H watches the traffic between A and G, and runs a tool that starts to impersonate A to G, and at the same time tells A to shut up, perhaps trying to convince it that G is no longer on the net (which might happen in the event of a crash or major network outage). After a few seconds of this, if the attack is successful, the naughty person has ``hijacked'' the session of our user. Anything that the user can do legitimately can now be done by the attacker, illegitimately. As far as G knows, nothing has happened.
This can be solved by replacing standard telnet-type applications with encrypted versions of the same thing. In this case, the attacker can still take over the session, but he'll see only ``gibberish'' because the session is encrypted. The attacker will not have the needed cryptographic key(s) to decrypt the data stream from G, and will, therefore, be unable to do anything with the session.

TCP
TCP is a transport-layer protocol. It needs to sit on top of a network-layer protocol, and was designed to ride atop IP. (Just as IP was designed to carry, among other things, TCP packets.) Because TCP and IP were designed together, and wherever you have one you typically have the other, the entire suite of Internet protocols is known collectively as ``TCP/IP.'' TCP itself has a number of important features that we'll cover briefly.

Guaranteed Packet Delivery
Probably the most important is guaranteed packet delivery. Host A sending packets to host B expects to get acknowledgments back for each packet. If B does not send an acknowledgment within a specified amount of time, A will resend the packet.

Applications on host B will expect a data stream from a TCP session to be complete, and in order. As noted, if a packet is missing, it will be resent by A, and if packets arrive out of order, B will arrange them in proper order before passing the data to the requesting application.
This is suited well toward a number of applications, such as a telnet session. A user wants to be sure every keystroke is received by the remote host, and that it gets every packet sent back, even if this means occasional slight delays in responsiveness while a lost packet is resent, or while out-of-order packets are rearranged.

It is not suited well toward other applications, such as streaming audio or video, however. In these, it doesn't really matter if a packet is lost (a lost packet in a stream of 100 won't be distinguishable) but it does matter if they arrive late (i.e., because of a host resending a packet presumed lost), since the data stream will be paused while the lost packet is being resent. Once the lost packet is received, it will be put in the proper slot in the data stream, and then passed up to the application.
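The retransmit-until-acknowledged and reassemble-in-order behavior described above can be sketched as a toy simulation. This is not TCP itself, just the guarantee it provides: every numbered segment is resent until it gets through, and the receiver delivers the stream complete and in order.

```python
# Toy sketch of TCP-style guaranteed, ordered delivery over a lossy
# channel; the loss rate and data are invented for illustration.
import random

random.seed(7)   # deterministic run for the example

def send_reliably(segments, loss_rate=0.3):
    received = {}
    for seq, data in enumerate(segments):
        while True:                    # retransmit until acknowledged
            if random.random() < loss_rate:
                continue               # segment (or its ACK) was lost
            received[seq] = data       # duplicates simply overwrite
            break
    # Deliver to the application complete and in sequence order.
    return b"".join(received[seq] for seq in sorted(received))

result = send_reliably([b"every ", b"keystroke ", b"arrives"])
assert result == b"every keystroke arrives"
```

The cost is the latency the text mentions: each loss stalls the stream for a retransmission round trip, which is why this model suits telnet better than streaming audio.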
UDP

UDP (User Datagram Protocol) is a simple transport-layer protocol. It does not provide the same features as TCP, and is thus considered ``unreliable.'' Again, although this is unsuitable for some applications, it does have much more applicability in other applications than the more reliable and robust TCP.

Lower Overhead than TCP
One of the things that makes UDP nice is its simplicity. Because it doesn't need to keep track of the sequence of packets, whether they ever made it to their destination, etc., it has lower overhead than TCP. This is another reason why it's more suited to streaming-data applications: there's less screwing around that needs to be done with making sure all the packets are there, in the right order, and that sort of thing.
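UDP's simplicity is visible in its wire format: the whole header is four 16-bit fields, 8 bytes, versus at least 20 bytes for TCP, and there is no sequence or acknowledgment state behind it. A sketch, with made-up port numbers:

```python
# Sketch of UDP's low overhead: the complete header is 8 bytes.
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    length = 8 + payload_len          # header plus data, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5004, 5005, 160)     # e.g. one small audio frame
assert len(hdr) == 8                  # all the protocol overhead there is
```

Per-packet, that is all a UDP sender has to construct, which is one reason it suits streaming data where a lost packet is simply skipped rather than resent.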

Risk Management: The Game of Security
It's very important to understand that in security, one simply cannot say ``what's the best firewall?'' There are two extremes: absolute security and absolute access. The closest we can get to an absolutely secure machine is one unplugged from the network and power supply, locked in a safe, and thrown to the bottom of the ocean. Unfortunately, it isn't terribly useful in this state. A machine with absolute access is extremely convenient to use: it's simply there, and will do whatever you tell it, without questions, authorization, passwords, or any other mechanism. Unfortunately, this isn't terribly practical, either: the Internet is a bad neighborhood now, and it isn't long before some bonehead will tell the computer to do something like self-destruct, after which, it isn't terribly useful to you.

This is no different from our daily lives. We constantly make decisions about what risks we're willing to accept. When we get in a car and drive to work, there's a certain risk that we're taking. It's possible that something completely out of control will cause us to become part of an accident on the highway. When we get on an airplane, we're accepting the level of risk involved as the price of convenience. However, most people have a mental picture of what an acceptable risk is, and won't go beyond that in most circumstances. If I happen to be upstairs at home, and want to leave for work, I'm not going to jump out the window. Yes, it would be more convenient, but the risk of injury outweighs the advantage of convenience.

Every organization needs to decide for itself where between the two extremes of total security and total access they need to be. A policy needs to articulate this, and then define how that will be enforced with practices and such. Everything that is done in the name of security, then, must enforce that policy uniformly.

Types And Sources Of Network Threats
Now, we've covered enough background information on networking that we can actually get into the security aspects of all of this. First of all, we'll get into the types of threats there are against networked computers, and then some things that can be done to protect yourself against various threats.

Denial-of-Service
DoS (Denial-of-Service) attacks are probably the nastiest, and most difficult to address. They are the nastiest because they're very easy to launch, difficult (sometimes impossible) to trace, and it isn't easy to refuse the requests of the attacker without also refusing legitimate requests for service.

The premise of a DoS attack is simple: send more requests to the machine than it can handle. There are toolkits available in the underground community that make this a simple matter of running a program and telling it which host to blast with requests. The attacker's program simply makes a connection on some service port, perhaps forging the packet's header information that says where the packet came from, and then dropping the connection. If the host is able to answer 20 requests per second, and the attacker is sending 50 per second, obviously the host will be unable to service all of the attacker's requests, much less any legitimate requests (hits on the web site running there, for example).
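The 20-versus-50 requests-per-second example above can be made concrete with back-of-the-envelope arithmetic: the unanswered backlog grows by 30 requests every second, so legitimate traffic queues behind the attacker's almost immediately. The numbers are taken from the text; the queue model is a deliberate simplification (no timeouts, no drops).

```python
# Back-of-the-envelope sketch of the DoS arithmetic in the text.
capacity, attack_rate = 20, 50   # requests per second

def backlog(seconds):
    q = 0
    for _ in range(seconds):
        q = max(0, q + attack_rate - capacity)  # 30 more pile up each second
    return q

assert backlog(10) == 300   # ten seconds in, 300 requests are waiting
```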

Such attacks were fairly common in late 1996 and early 1997, but are now becoming less popular.
Some things that can be done to reduce the risk of being stung by a denial-of-service attack include:
• Not running your visible-to-the-world servers at a level too close to capacity
• Using packet filtering to prevent obviously forged packets from entering into your network address space.
Obviously forged packets would include those that claim to come from your own hosts, addresses reserved for private networks as defined in RFC 1918 [4], and the loopback network (127.0.0.0).
• Keeping up-to-date on security-related patches for your hosts' operating systems.
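The forged-packet filter described in the second item can be sketched as follows (a hedged illustration: real filtering is done in the router, and the internal `203.0.113.0/24` prefix here is a made-up example):

```python
# Sketch of an ingress filter: drop packets arriving from the Internet whose
# claimed source address could not legitimately originate there.  The blocked
# networks are our own address space, the RFC 1918 private ranges, and the
# loopback network (127.0.0.0/8).
import ipaddress

OUR_NETWORK = ipaddress.ip_network("203.0.113.0/24")  # hypothetical internal space
OBVIOUSLY_FORGED = [
    OUR_NETWORK,                              # claims to come from our own hosts
    ipaddress.ip_network("10.0.0.0/8"),       # RFC 1918 private
    ipaddress.ip_network("172.16.0.0/12"),    # RFC 1918 private
    ipaddress.ip_network("192.168.0.0/16"),   # RFC 1918 private
    ipaddress.ip_network("127.0.0.0/8"),      # loopback
]

def accept_from_internet(src):
    """True if a packet with this source address may enter from outside."""
    addr = ipaddress.ip_address(src)
    return not any(addr in net for net in OBVIOUSLY_FORGED)

print(accept_from_internet("198.51.100.7"))  # ordinary outside address -> True
print(accept_from_internet("10.1.2.3"))      # forged RFC 1918 source -> False
```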
Unauthorized Access
``Unauthorized access'' is a very high-level term that can refer to a number of different sorts of attacks. The goal of these attacks is to access some resource that your machine should not provide the attacker. For example, a host might be a web server, and should provide anyone with requested web pages. However, that host should not provide command shell access without being sure that the person making such a request is someone who should get it, such as a local administrator.

Executing Commands Illicitly
It's obviously undesirable for an unknown and untrusted person to be able to execute commands on your server machines. There are two main classifications of the severity of this problem: normal user access, and administrator access. A normal user can do a number of things on a system (such as read files, mail them to other people, etc.) that an attacker should not be able to do. This might, then, be all the access that an attacker needs. On the other hand, an attacker might wish to make configuration changes to a host (perhaps changing its IP address, putting a start-up script in place to cause the machine to shut down every time it's started, or something similar). In this case, the attacker will need to gain administrator privileges on the host.

Confidentiality Breaches
We need to examine the threat model: what is it that you're trying to protect yourself against? There is certain information that could be quite damaging if it fell into the hands of a competitor, an enemy, or the public. In these cases, it's possible that compromise of a normal user's account on the machine can be enough to cause damage (perhaps in the form of PR, or obtaining information that can be used against the company, etc.)
While many of the perpetrators of these sorts of break-ins are merely thrill-seekers interested in nothing more than to see a shell prompt for your computer on their screen, there are those who are more malicious, as we'll consider next. (Additionally, keep in mind that it's possible that someone who is normally interested in nothing more than the thrill could be persuaded to do more: perhaps an unscrupulous competitor is willing to hire such a person to hurt you.)

Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories.

Data Diddling.
The data diddler is likely the worst sort, since the fact of a break-in might not be immediately obvious. Perhaps he's toying with the numbers in your spreadsheets, or changing the dates in your projections and plans. Maybe he's changing the account numbers for the auto-deposit of certain paychecks. In any case, rare is the case when you'll come in to work one day, and simply know that something is wrong. An accounting procedure might turn up a discrepancy in the books three or four months after the fact. Trying to track the problem down will certainly be difficult, and once that problem is discovered, how can any of your numbers from that time period be trusted? How far back do you have to go before you think that your data is safe?

Data Destruction.
Some of those who perpetrate attacks are simply twisted jerks who like to delete things. In these cases, the impact on your computing capability -- and consequently your business -- can be nothing less than if a fire or other disaster caused your computing equipment to be completely destroyed.

Where Do They Come From?
How, though, does an attacker gain access to your equipment? Through any connection that you have to the outside world. This includes Internet connections, dial-up modems, and even physical access. (How do you know that one of the temps that you've brought in to help with the data entry isn't really a system cracker looking for passwords, data phone numbers, vulnerabilities and anything else that can get him access to your equipment?)
In order to be able to adequately address security, all possible avenues of entry must be identified and evaluated. The security of that entry point must be consistent with your stated policy on acceptable risk levels.

Lessons Learned
From looking at the sorts of attacks that are common, we can divine a relatively short list of high-level practices that can help prevent security disasters, and to help control the damage in the event that preventative measures were unsuccessful in warding off an attack.
Hope you have backups
This isn't just a good idea from a security point of view. Operational requirements should dictate the backup policy, and this should be closely coordinated with a disaster recovery plan, such that if an airplane crashes into your building one night, you'll be able to carry on your business from another location. Similarly, these can be useful in recovering your data in the event of an electronic disaster: a hardware failure, or a breakin that changes or otherwise damages your data.

Don't put data where it doesn't need to be
Although this should go without saying, this doesn't occur to lots of folks. As a result, information that doesn't need to be accessible from the outside world sometimes is, and this can needlessly increase the severity of a break-in dramatically.

Avoid systems with single points of failure
Any security system that can be broken by breaking through any one component isn't really very strong. In security, a degree of redundancy is good, and can help you protect your organization from a minor security breach becoming a catastrophe.
Stay current with relevant operating system patches
Be sure that someone who knows what you've got is watching the vendors' security advisories. Exploiting old bugs is still one of the most common (and most effective!) means of breaking into systems.

Watch for relevant security advisories
In addition to watching what the vendors are saying, keep a close watch on groups like CERT and CIAC. Make sure that at least one person (preferably more) is subscribed to these mailing lists.

Have someone on staff be familiar with security practices
Having at least one person who is charged with keeping abreast of security developments is a good idea. This need not be a technical wizard, but could be someone who is simply able to read advisories issued by various incident response teams, and keep track of various problems that arise. Such a person would then be a wise one to consult with on security related issues, as he'll be the one who knows if web server software version such-and-such has any known problems, etc.
This person should also know the "dos'' and "don'ts'' of security, from reading such things as the "Site Security Handbook.''

Firewalls
As we've seen in our discussion of the Internet and similar networks, connecting an organization to the Internet provides a two-way flow of traffic. This is clearly undesirable in many organizations, as proprietary information is often displayed freely within a corporate intranet (that is, a TCP/IP network, modeled after the Internet that only works within the organization).
In order to provide some level of separation between an organization's intranet and the Internet, firewalls have been employed. A firewall is simply a group of components that collectively form a barrier between two networks.
A number of terms specific to firewalls and networking are going to be used throughout this section, so let's introduce them all together.

Bastion host.
A general-purpose computer used to control access between the internal (private) network (intranet) and the Internet (or any other untrusted network). Typically, these are hosts running a flavor of the Unix operating system that has been customized in order to reduce its functionality to only what is necessary in order to support its functions. Many of the general-purpose features have been turned off, and in many cases, completely removed, in order to improve the security of the machine.

Router.
A special-purpose computer for connecting networks together. Routers also handle certain functions, such as routing, or managing the traffic on the networks they connect.
Access Control List (ACL).
Many routers now have the ability to selectively perform their duties, based on a number of facts about a packet that comes to it. This includes things like origination address, destination address, destination service port, and so on. These can be employed to limit the sorts of packets that are allowed to come in and go out of a given network.

Demilitarized Zone (DMZ).
The DMZ is a critical part of a firewall: it is a network that is neither part of the untrusted network, nor part of the trusted network. But, this is a network that connects the untrusted to the trusted. The importance of a DMZ is tremendous: someone who breaks into your network from the Internet should have to get through several layers in order to successfully do so. Those layers are provided by various components within the DMZ.

Proxy.
This is the process of having one host act on behalf of another. A host that has the ability to fetch documents from the Internet might be configured as a proxy server, and hosts on the intranet might be configured to be proxy clients. In this situation, when a host on the intranet wishes to fetch a web page, for example, the browser will make a connection to the proxy server and request the given URL. The proxy server will fetch the document and return the result to the client. In this way, all hosts on the intranet are able to access resources on the Internet without having the ability to talk directly to the Internet.
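As a rough sketch of that arrangement (the class names and the `FAKE_INTERNET` stand-in are invented for illustration; a real deployment uses actual proxy software and real outbound connectivity):

```python
# A minimal model of the proxy idea: intranet clients never touch the outside
# network themselves.  They hand a URL to the proxy, which performs the fetch
# on their behalf and returns the result.

FAKE_INTERNET = {"http://example.com/": "<html>hello</html>"}  # stand-in for the real net

class ProxyServer:
    """The only host allowed to talk to the outside world."""
    def fetch(self, url):
        return FAKE_INTERNET.get(url, "404 not found")

class IntranetClient:
    """Configured as a proxy client: every request goes via the proxy."""
    def __init__(self, proxy):
        self.proxy = proxy
    def get(self, url):
        return self.proxy.fetch(url)   # no direct route to the Internet exists

client = IntranetClient(ProxyServer())
print(client.get("http://example.com/"))   # -> <html>hello</html>
```

The security benefit is structural: because the client has no route of its own, compromising it does not directly expose the internal network to the outside.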

Types of Firewalls
There are three basic types of firewalls, and we'll consider each of them.

Application Gateways
The first firewalls were application gateways, and are sometimes known as proxy gateways. These are made up of bastion hosts that run special software to act as a proxy server. This software runs at the Application Layer of our old friend the ISO/OSI Reference Model, hence the name. Clients behind the firewall must be proxitized (that is, must know how to use the proxy, and be configured to do so) in order to use Internet services. Traditionally, these have been the most secure, because they don't allow anything to pass by default, but need to have the programs written and turned on in order to begin passing traffic.


Figure 4: A sample application gateway

These are also typically the slowest, because more processes need to be started in order to have a request serviced. Figure 4 shows an application gateway.

Packet Filtering
Packet filtering is a technique whereby routers have ACLs (Access Control Lists) turned on. By default, a router will pass all traffic sent it, and will do so without any sort of restrictions. Employing ACLs is a method for enforcing your security policy with regard to what sorts of access you allow the outside world to have to your internal network, and vice versa.

There is less overhead in packet filtering than with an application gateway, because the feature of access control is performed at a lower ISO/OSI layer (typically, the transport or session layer). Due to the lower overhead and the fact that packet filtering is done with routers, which are specialized computers optimized for tasks related to networking, a packet filtering gateway is often much faster than its application layer cousins. Figure 5 shows a packet filtering gateway.
Because we're working at a lower level, supporting new applications either comes automatically, or is a simple matter of allowing a specific packet type to pass through the gateway. (Not that the possibility of something automatically makes it a good idea; opening things up this way might very well compromise your level of security below what your policy allows.)

There are problems with this method, though. Remember, TCP/IP has absolutely no means of guaranteeing that the source address is really what it claims to be. As a result, we have to use layers of packet filters in order to localize the traffic. We can't get all the way down to the actual host, but with two layers of packet filters, we can differentiate between a packet that came from the Internet and one that came from our internal network. We can identify which network the packet came from with certainty, but we can't get more specific than that.
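The first-match-wins evaluation of a router's ACL might be sketched like this (illustrative only: the rule table and addresses are invented, and production ACLs live in router configuration, not application code):

```python
# Sketch of a router ACL: rules are checked in order, the first match wins,
# and anything that matches no rule is denied.  Rule fields mirror the packet
# facts mentioned earlier: source, destination, and destination service port.
import ipaddress

# (source network, destination network, destination port or None = any, action)
ACL = [
    ("0.0.0.0/0",      "203.0.113.10/32", 80,   "permit"),  # public web server reachable
    ("192.168.0.0/16", "0.0.0.0/0",       None, "permit"),  # internal hosts may go out
]

def filter_packet(src, dst, dport):
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, port, action in ACL:
        if (s in ipaddress.ip_network(src_net)
                and d in ipaddress.ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"   # the implicit deny at the end of every ACL

print(filter_packet("198.51.100.9", "203.0.113.10", 80))  # -> permit
print(filter_packet("198.51.100.9", "203.0.113.10", 22))  # -> deny
```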

Hybrid Systems
In an attempt to marry the security of the application layer gateways with the flexibility and speed of packet filtering, some vendors have created systems that use the principles of both.


Figure 5: A sample packet filtering gateway

In some of these systems, new connections must be authenticated and approved at the application layer. Once this has been done, the remainder of the connection is passed down to the session layer, where packet filters watch the connection to ensure that only packets that are part of an ongoing (already authenticated and approved) conversation are being passed.
Other possibilities include using both packet filtering and application layer proxies. The benefits here include providing a measure of protection for your machines that provide services to the Internet (such as a public web server), as well as providing the security of an application layer gateway for the internal network. Additionally, with this method, an attacker will have to break through the access router, the bastion host, and the choke router in order to reach services on the internal network.

So, what's best for me?
Lots of options are available, and it makes sense to spend some time with an expert, either in-house, or an experienced consultant who can take the time to understand your organization's security policy, and can design and build a firewall architecture that best implements that policy. Other issues like services required, convenience, and scalability might factor in to the final design.

Some Words of Caution
The business of building firewalls is in the process of becoming a commodity market. Along with commodity markets come lots of folks who are looking for a way to make a buck without necessarily knowing what they're doing. Additionally, vendors compete with each other to try and claim the greatest security, the easiest to administer, and the least visible to end users. In order to try to quantify the potential security of firewalls, some organizations have taken to firewall certifications. The certification of a firewall means nothing more than the fact that it can be configured in such a way that it can pass a series of tests. Similarly, claims about meeting or exceeding U.S. Department of Defense ``Orange Book'' standards, C-2, B-1, and such all simply mean that an organization was able to configure a machine to pass a series of tests. This doesn't mean that it was loaded with the vendor's software at the time, or that the machine was even usable. In fact, one vendor claiming its operating system was ``C-2 Certified'' didn't mention that the operating system only passed the C-2 tests without being connected to any sort of network devices.

Such gauges as market share, certification, and the like are no guarantees of security or quality. Taking a little bit of time to talk to some knowledgeable folks can go a long way in providing you a comfortable level of security between your private network and the big, bad Internet.
Additionally, it's important to note that many consultants these days have become much less the advocate of their clients, and more of an extension of the vendor. Ask any consultants you talk to about their vendor affiliations, certifications, and whatnot. Ask what difference it makes to them whether you choose one product over another, and vice versa. And then ask yourself if a consultant who is certified in technology XYZ is going to provide you with competing technology ABC, even if ABC best fits your needs.

Single Points of Failure
Many ``firewalls'' are sold as a single component: a bastion host, or some other black box that you plug your networks into and get a warm-fuzzy, feeling safe and secure. The term ``firewall'' refers to a number of components that collectively provide the security of the system. Any time there is only one component paying attention to what's going on between the internal and external networks, an attacker has only one thing to break (or fool!) in order to gain complete access to your internal networks.

Secure Network Devices
It's important to remember that the firewall is only one entry point to your network. Modems, if you allow them to answer incoming calls, can provide an easy means for an attacker to sneak around (rather than through) your front door (or firewall). Just as castles weren't built with moats only in the front, your network needs to be protected at all of its entry points.

Secure Modems; Dial-Back Systems
If modem access is to be provided, this should be guarded carefully. The terminal server, or network device that provides dial-up access to your network, needs to be actively administered, and its logs need to be examined for strange behavior. Its passwords need to be strong -- not ones that can be guessed. Accounts that aren't actively used should be disabled. In short, it's the easiest way to get into your network remotely: guard it carefully.

There are some remote access systems that have the feature of a two-part procedure to establish a connection. The first part is the remote user dialing into the system, and providing the correct userid and password. The system will then drop the connection, and call the authenticated user back at a known telephone number. Once the remote user's system answers that call, the connection is established, and the user is on the network. This works well for folks working at home, but can be problematic for users wishing to dial in from hotel rooms and such when on business trips.

Other possibilities include one-time password schemes, where the user enters his userid, and is presented with a ``challenge,'' a string of between six and eight numbers. He types this challenge into a small device that he carries with him that looks like a calculator. He then presses enter, and a ``response'' is displayed on the LCD screen. The user types the response, and if all is correct, the login will proceed. These are useful devices for solving the problem of good passwords, without requiring dial-back access. However, these have their own problems, as they require the user to carry them, and they must be tracked, much like building and office keys.
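The challenge/response exchange can be sketched with a keyed hash standing in for the token's internal algorithm (real tokens use vendor-specific algorithms; HMAC-SHA-256 and the key below are assumptions made for illustration only):

```python
# Toy sketch of the challenge/response idea: the server and the hand-held
# token share a secret key, and the displayed "response" is a keyed hash of
# the challenge.  Eavesdropping on one exchange reveals nothing reusable,
# because the next login gets a fresh challenge.
import hashlib
import hmac

SHARED_SECRET = b"enrolled-at-issue-time"   # hypothetical per-user key

def token_response(challenge):
    """What the calculator-like device computes from the 6-8 digit challenge."""
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]           # short enough to type by hand

def server_verify(challenge, typed_response):
    """The server recomputes the expected response and compares safely."""
    return hmac.compare_digest(token_response(challenge), typed_response)

challenge = "43051877"
print(server_verify(challenge, token_response(challenge)))  # correct response -> True
print(server_verify(challenge, "deadbeef"))                 # guessed response -> False
```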
No doubt many other schemes exist. Take a look at your options, and find out how the vendors' offerings can help you enforce your security policy effectively.

Crypto-Capable Routers
A feature that is being built into some routers is the ability to use session encryption between specified routers. Because traffic traveling across the Internet can be seen by people in the middle who have the resources (and time) to snoop around, these routers are advantageous for providing connectivity between two sites over secure, encrypted routes.
See the Snake Oil FAQ for a description of cryptography, ideas for evaluating cryptographic products, and how to determine which will most likely meet your needs.

Virtual Private Networks
Given the ubiquity of the Internet, and the considerable expense in private leased lines, many organizations have been building VPNs (Virtual Private Networks). Traditionally, for an organization to provide connectivity between a main office and a satellite one, an expensive data line had to be leased in order to provide direct connectivity between the two offices. Now, a solution that is often more economical is to provide both offices connectivity to the Internet. Then, using the Internet as the medium, the two offices can communicate.
The danger in doing this, of course, is that there is no privacy on this channel, and it's difficult to provide the other office access to ``internal'' resources without providing those resources to everyone on the Internet.

VPNs provide the ability for two offices to communicate with each other in such a way that it looks like they're directly connected over a private leased line. The session between them, although going over the Internet, is private (because the link is encrypted), and the link is convenient, because each can see each others' internal resources without showing them off to the entire world.
A number of firewall vendors are including the ability to build VPNs in their offerings, either directly with their base product, or as an add-on. If you have need to connect several offices together, this might very well be the best way to do it.

Conclusions
Security is a very difficult topic. Everyone has a different idea of what ``security'' is, and what levels of risk are acceptable. The key for building a secure network is to define what security means to your organization . Once that has been defined, everything that goes on with the network can be evaluated with respect to that policy. Projects and systems can then be broken down into their components, and it becomes much simpler to decide whether what is proposed will conflict with your security policies and practices.
Many people pay great amounts of lip service to security, but do not want to be bothered with it when it gets in their way. It's important to build systems and networks in such a way that the user is not constantly reminded of the security system around him. Users who find security policies and systems too restrictive will find ways around them. It's important to get their feedback to understand what can be improved, and it's important to let them know why what's been done has been, the sorts of risks that are deemed unacceptable, and what has been done to minimize the organization's exposure to them.
Security is everybody's business, and only with everyone's cooperation, an intelligent policy, and consistent practices, will it be achievable.




Introduction to Networks

Definition

A computing network is a computing environment with more than one independent processor
May be multiple users per system
Distance between computing systems is not considered (a communications media problem)
Size of computing systems is not relevant

Network resources
  • Computers
  • Operating system
  • Programs
  • Processes
  • People
What can a network provide?
  • Logical interface function
  • Sending messages
  • Receiving messages
  • Executing program
  • Obtaining status information
  • Obtaining status information on other network users
Terminology

-Node
Single computing system in a network.
-Host
A single computing system's processor.
-Link
A connection between two hosts.
-Topology
The pattern of links in a network.

Bus Topology

To provide a single communication network on which any node can place information and from which any node can retrieve information
Attachments to the bus do not impact the other nodes on the bus

Star Topology

Has a central switch
All nodes wishing to communicate do so through the central host
The central host receives all messages, identifies the address, selects the link appropriate for that address, and forwards the message
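The forwarding behavior of the central host can be sketched in a few lines (a toy model; the `CentralSwitch` class and the message format are invented for illustration):

```python
# Sketch of the central host in a star topology: every message passes through
# it, and it forwards each one on the link matching the destination address.

class CentralSwitch:
    def __init__(self):
        self.links = {}                    # address -> attached node's inbox

    def attach(self, address, inbox):
        self.links[address] = inbox

    def send(self, message):
        dest = message["to"]               # identify the address...
        self.links[dest].append(message)   # ...and forward on that link only

switch = CentralSwitch()
inbox_a, inbox_b = [], []
switch.attach("A", inbox_a)
switch.attach("B", inbox_b)
switch.send({"to": "B", "from": "A", "data": "hello"})
print(inbox_b)   # the message arrives only at node B; inbox_a stays empty
```

This also illustrates the topology's weakness: if the central switch fails, no node can reach any other.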

Ring Topology

To connect a sequence of nodes in a loop or ring
Can be implemented with minimum cabling
A circulating token can control a “synchronous” loop

Mesh Topology

Each node can conceptually be connected directly to each other node
Has integrity and routing advantages
Not easily subject to destructive failures
Routing logic can be used to select the most efficient route through multiple nodes

ISO Reference Model
  • Open Systems Interconnection (OSI)
  • Describes computer network communications.
  • Developed by the International Organization for Standardization (ISO).
  • Consists of Seven Layers.
  • Model describes peer-to-peer correspondence, relationship between corresponding layers of sender and receiver.
  • Each layer represents a different activity performed in the actual transmission of a message.
  • Each layer serves a separate function.
  • Equivalent layers perform similar functions for sender and receiver.
Networks as Systems
  • Single System
  • Single set of security policies associated with each computing system.
  • Each system concerned with:
  • integrity of data
  • secrecy of data
  • availability of service
  • Operating system enforces its own security policies.
Advantages of Computing Networks
  • Resource sharing
  • Reduces maintenance and storage costs.
  • Increased reliability (i.e. availability of service)
  • If one system fails users can shift to another.
  • Distributing the workload
  • Workload can be shifted from a heavily loaded system to an underutilized one.
  • Expandability
  • System is easily expanded by adding new nodes
Who causes security problems?
  • Hacker
  • Spy
  • Student
  • Businessman
  • Ex-employee
  • Stockbroker
  • Terrorist
  • etc
Network security problem areas

+Authentication
Deals with determining whom you are talking to before entering into a business deal or before revealing sensitive information
+Secrecy
What usually comes to mind when people think about network security
+Non-repudiation
Deals with signatures (proving who sent a message)
+Integrity control
Ensuring that information is not modified, added, or deleted by unauthorized users

Database Security - Threats and Countermeasures

Database security begins with physical security for the computer systems that host the DBMS. No DBMS is safe from intrusion, corruption, or destruction by people who have physical access to the computers. After physical security has been established, database administrators must protect the data from unauthorized users and from unauthorized access by authorized users. There are three main objectives when designing a secure database application, and anything that prevents a DBMS from achieving these goals is considered a threat to database security.

Integrity

Database integrity refers to the requirement that information be protected from improper modification. Modification of data includes creation, insertion, modification, changing the status of data, and deletion. Integrity is lost if unauthorized changes are made to the data by either intentional or accidental acts.

To prevent this loss of integrity, only authorized users should be allowed to modify data.
e.g. Students may be allowed to see their grades, yet not allowed to modify them.

Availability

Authorized user or program should not be denied access. For example, an instructor who wishes to change a grade should be allowed to do so.

Secrecy

Information should not be disclosed to unauthorized users. For example, a student should not be allowed to examine other students' grades.

To achieve these objectives, a clear and consistent security policy should be developed to describe what security measures must be enforced. In particular, we must determine what part of the data is to be protected and which users get access to which portions of the data. Next, the security mechanisms of the underlying DBMS and operating system, as well as external mechanisms, such as securing access to buildings, must be utilized to enforce the policy. We emphasize that security measures must be taken at several levels.

Why is database security important?

If the loss of system or data integrity is not corrected, continued use of the contaminated system or corrupted data could result in inaccuracy, fraud, or erroneous decisions. In addition, unauthorized, unanticipated, or unintentional disclosure could result in loss of public confidence, embarrassment, or legal action against the organization.

Countermeasures to database security threats

(1) Inference Control

Inference control is the countermeasure to threats against statistical databases.

A statistical database is a database which contains specific information on individuals or events but is intended to permit only statistical queries (e.g. averages, sums, counts, maximums, minimums, and standard deviations). However, it is possible to obtain confidential data on individuals by using only statistical queries. Inference control techniques are used to prevent this from happening (e.g. we can prohibit sequences of queries that refer repeatedly to the same population of tuples).
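A minimal sketch of one such control, the query-set-size restriction (the records and the threshold of 3 are invented for illustration):

```python
# Sketch of a simple inference-control rule: refuse any statistical query
# whose qualifying set is too small, since an average over one or two rows
# effectively discloses individual values.

RECORDS = [
    {"dept": "CS", "salary": 5000},
    {"dept": "CS", "salary": 5200},
    {"dept": "CS", "salary": 4800},
    {"dept": "EE", "salary": 6100},
]
MIN_QUERY_SET = 3   # policy: never aggregate over fewer than 3 individuals

def average_salary(dept):
    rows = [r["salary"] for r in RECORDS if r["dept"] == dept]
    if len(rows) < MIN_QUERY_SET:
        return None                  # query refused: qualifying set too small
    return sum(rows) / len(rows)

print(average_salary("CS"))   # -> 5000.0, a large enough set to aggregate
print(average_salary("EE"))   # -> None, would reveal one person's salary
```

Note that this rule alone is not sufficient: overlapping queries on large sets can still be subtracted to isolate individuals, which is why sequences of related queries must also be restricted.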

(2) Flow Control

"Flow control regulates the distribution or flow of information among accessible objects. A flow between object X and object Y occurs when a program reads values from X and writes values into Y. Flow controls check that information contained in some objects does not flow explicitly or implicitly into less protected objects. Thus, a user cannot get indirectly in Y what he or she cannot get directly from X." Elmasri, Navathe (P747)

(3) Encryption

"The idea behind encryption is to apply an encryption algorithm to the data, using a user-specified or DBA-specified encryption key. The output of the algorithm is the encrypted version of the data. There is also a decryption algorithm, which takes the encrypted data and a decryption key as input and then returns the original data." Elmasri,Navathe(P709)
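To make the encrypt/decrypt pairing concrete, here is a deliberately simplified sketch (a teaching toy, not a production cipher; a real DBMS would rely on a vetted algorithm such as AES):

```python
# A simplified illustration of the encrypt/decrypt pair the quotation
# describes: a keystream derived from the key via SHA-256 is XORed with the
# data.  Applying the same keystream twice restores the original data, which
# is exactly the symmetry between the two algorithms described above.
import hashlib

def keystream(key, length):
    """Derive `length` pseudo-random bytes from the key (counter mode)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt   # XOR with the same keystream reverses itself

key = b"dba-specified-key"          # the DBA-specified encryption key
ciphertext = encrypt(key, b"salary=5000")
print(decrypt(key, ciphertext))     # round-trips to b'salary=5000'
```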

(4) Access Control

A database for an enterprise contains a great deal of information and usually has several groups of users. Most users need to access only a small part of the database to carry out their tasks. Allowing users unrestricted access to all the data can be undesirable, and a DBMS should provide mechanisms to control access to data. The main idea behind access control is to prevent unauthorized persons from accessing the system.

How it works?

1: Discretionary Access Control

Discretionary access control is based on the idea of access rights, or privileges, and mechanisms for giving users such privileges. A privilege allows a user to access some data object in a certain manner (e.g. to read or modify). A user who creates data object such as a table or a view automatically gets all applicable privileges on that object and the user can also propagate privileges using "Grant Option". The DBMS subsequently keeps track of how these privileges are granted to other users, and possibly revoked, and ensures that at all times only users with the necessary privileges can access an object.

SQL Syntax

SQL supports discretionary access control through the GRANT and REVOKE commands.

The GRANT command gives users privileges to base tables and views.

The REVOKE command cancels users' privileges.

For example:

GRANT privilege1, privilege2, ...
ON object_name
TO user1, user2, ... ;

REVOKE privilege1, privilege2, ...
ON object_name
FROM user1, user2, ... ;

GRANT SELECT, ALTER
ON student
TO db2_14;

REVOKE SELECT, ALTER
ON student
FROM db2_14;

Example from Textbook (R.Elmasri, S. B. Navathe, Fundamentals of Database Systems, Ed.4, Addison-Wesley, 2003.Chapter 23)

Suppose that A1 creates the two base relations EMPLOYEE and DEPARTMENT

EMPLOYEE(NAME, SSN, BDATE, ADDRESS, SEX, SALARY, DNO)
DEPARTMENT(DNUMBER, DNAME, MGRSSN)

A1 is then the owner of these two relations and hence has all the relation privileges on each of them. A1 wants to grant to account A2 the privilege to insert and delete tuples in both of these relations:

GRANT INSERT, DELETE ON EMPLOYEE, DEPARTMENT TO A2;

A2 cannot grant INSERT and DELETE privileges on the EMPLOYEE and DEPARTMENT tables, because A2 was not given the GRANT OPTION in the preceding command.

GRANT SELECT ON EMPLOYEE, DEPARTMENT TO A3 WITH GRANT OPTION;

The clause WITH GRANT OPTION means that A3 can now propagate the privilege to other accounts by using GRANT. For example, A3 can grant the SELECT privilege on the EMPLOYEE relation to A4 by issuing the following command:

GRANT SELECT ON EMPLOYEE TO A4;

Now suppose that A1 decides to revoke the SELECT privilege on the EMPLOYEE relation from A3; A1 then can issue this command:

REVOKE SELECT ON EMPLOYEE FROM A3;

The DBMS must now automatically revoke the SELECT privilege on EMPLOYEE from A4 as well, because A3 granted that privilege to A4 and A3 no longer holds it.
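The cascading effect of REVOKE described above can be sketched in a few lines of Python. This is a simplified illustrative model, not any real DBMS's implementation; the GrantTable class is hypothetical, and it assumes every holder has the GRANT OPTION (a simplification of the real rules):

```python
# Toy model of one privilege (e.g. SELECT on EMPLOYEE) as a grant graph.
# Each holder records which grantors gave it the privilege; revoking a
# grant cascades to holders whose every grantor has lost the privilege.

class GrantTable:
    def __init__(self, owner):
        # holder -> set of grantors; None marks the owner's intrinsic right
        self.grants = {owner: {None}}

    def grant(self, grantor, grantee):
        if grantor not in self.grants:
            raise PermissionError(f"{grantor} does not hold the privilege")
        self.grants.setdefault(grantee, set()).add(grantor)

    def revoke(self, grantor, grantee):
        self.grants.get(grantee, set()).discard(grantor)
        self._cascade()

    def _cascade(self):
        # Repeatedly drop holders all of whose grantors lost the privilege.
        changed = True
        while changed:
            changed = False
            for user, grantors in list(self.grants.items()):
                live = {g for g in grantors if g is None or g in self.grants}
                if not live:
                    del self.grants[user]
                    changed = True
                elif live != grantors:
                    self.grants[user] = live
                    changed = True

    def holds(self, user):
        return user in self.grants

select_on_employee = GrantTable(owner="A1")
select_on_employee.grant("A1", "A3")   # GRANT SELECT ... TO A3 WITH GRANT OPTION
select_on_employee.grant("A3", "A4")   # A3 propagates the privilege to A4
select_on_employee.revoke("A1", "A3")  # REVOKE SELECT ... FROM A3
print(select_on_employee.holds("A3"), select_on_employee.holds("A4"))  # False False
```

Revoking A1's grant to A3 empties A3's grantor set, so A3 is dropped; A4's only grantor was A3, so A4 is dropped in the next pass, matching the textbook behavior.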

MySQL implements the same GRANT and REVOKE commands; its account names additionally take the form 'user'@'host'.

Limits on propagation of privileges

Techniques to limit the propagation of privileges have been developed, but they have not been implemented in most DBMSs and are not part of the SQL standard.

Horizontal propagation limits:
An account B given the GRANT OPTION can grant the privilege to at most i other accounts.

Vertical propagation limits:
These limit the depth to which an account can pass on the privilege, in terms of levels of re-granting.
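A vertical propagation limit can be illustrated with a small sketch (hypothetical, since SQL has no such clause): each grant carries a remaining depth, and a holder at depth 0 cannot re-grant. A horizontal limit would instead count the number of grantees per grantor.

```python
# Toy vertical propagation limit: the grantee always receives one level
# less than the grantor, so propagation stops after the allotted depth.

depth = {"A1": 2}  # say A1 may propagate the privilege two levels deep

def grant(grantor, grantee):
    if depth.get(grantor, 0) <= 0:
        raise PermissionError(f"{grantor} cannot propagate the privilege")
    depth[grantee] = depth[grantor] - 1

grant("A1", "B")   # B now has depth 1
grant("B", "C")    # C now has depth 0
try:
    grant("C", "D")  # fails: the vertical limit is exhausted
except PermissionError as e:
    print(e)
```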

Pros and Cons of discretionary access control

Advantages:

Flexible and suitable for many types of systems and applications, such as commercial and industrial environments.

Disadvantages:

It provides no real assurance that the protection requirements are satisfied.
It imposes no restriction on how information is used once a user has obtained it, which leaves the system vulnerable to attacks such as Trojan horses, where a program running with a legitimate user's privileges leaks data.

2: Mandatory Access Control

Mandatory access control aims to address such loopholes in discretionary access control. The most popular model for mandatory access control, the Bell-LaPadula model, is described in terms of objects, subjects, security classes, and clearances. Each database object is assigned a security class, and each subject is assigned a clearance for a security class.

The Bell-LaPadula model imposes two restrictions on all reads and writes of database objects:

1. Simple Security Property: Subject S is allowed to read object O only if class(S) ≥ class(O). For example, a user with TS (top secret) clearance can read a table with C (confidential) classification, but a user with C (confidential) clearance is not allowed to read a table with TS (top secret) classification.

2. *-Property: Subject S is allowed to write object O only if class(S) ≤ class(O). For example, a user with S (secret) clearance can write only objects with S (secret) or TS (top secret) classification.

If discretionary access controls are also specified, these rules represent additional restrictions. Therefore, to read or write a database object, a user must have the necessary privileges and the security classes of the user and the object must satisfy the preceding restrictions.
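The two properties reduce to comparisons on an ordered set of levels. A minimal sketch, using the conventional four-level ordering U < C < S < TS:

```python
# Bell-LaPadula checks over the standard ordering
# U (unclassified) < C (confidential) < S (secret) < TS (top secret).

LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}

def can_read(subject_class, object_class):
    # Simple Security Property: read only if class(S) >= class(O)
    return LEVELS[subject_class] >= LEVELS[object_class]

def can_write(subject_class, object_class):
    # *-Property: write only if class(S) <= class(O) ("no write down")
    return LEVELS[subject_class] <= LEVELS[object_class]

print(can_read("TS", "C"))   # True: TS clearance may read a C table
print(can_read("C", "TS"))   # False
print(can_write("S", "TS"))  # True: S may write S or TS objects
print(can_write("S", "C"))   # False: no write down
```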

Advantages: Mandatory policies ensure a high degree of protection, making them suitable for military-style applications that require it.

Disadvantages: Too rigid to be applicable in most environments.

Current State and Future: Role-Based Access Control

Role-Based Access Control (RBAC) emerged rapidly in the 1990s and has since been adopted by most DBMSs. Its basic concept is that privileges are associated with roles, and users are assigned to appropriate roles. Roles can then be granted to users and to other roles. (Roles can be created and destroyed using the CREATE ROLE and DROP ROLE commands.) RBAC is a viable alternative to traditional discretionary and mandatory access controls; it ensures that only authorized users are given access to certain data or resources.
Advantages of RBAC

A properly administered RBAC system enables users to carry out a broad range of authorized operations, and provides great flexibility and breadth of application. System administrators can control access at a level of abstraction that is natural to the way enterprises typically conduct business. This is achieved by statically and dynamically regulating users' actions through the establishment and definition of roles, role hierarchies, relationships, and constraints.

Thus, once an RBAC framework is established for an organization, the principal administrative actions are the granting and revoking of users into and out of roles. Role associations can be established when new operations are instituted, and old operations can be deleted as organizational functions change and evolve. This simplifies the administration and management of privileges: roles can be updated without updating the privileges of every user individually. With these features and its easier deployment over the Internet, Role-Based Access Control will likely remain dominant in the future.
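The role/privilege indirection and role hierarchy described above can be sketched as a toy in-memory model (the role names, tables, and helper functions here are illustrative, not any DBMS's API):

```python
# Privileges attach to roles, users attach to roles, and a role hierarchy
# lets a senior role (manager) inherit a junior role's (clerk) privileges.

role_privs = {
    "clerk":   {("SELECT", "EMPLOYEE")},
    "manager": {("UPDATE", "EMPLOYEE")},
}
role_parents = {"manager": {"clerk"}}  # manager inherits from clerk
user_roles = {"alice": {"manager"}, "bob": {"clerk"}}

def effective_roles(role, seen=None):
    # Collect a role plus everything it inherits from, transitively.
    seen = set() if seen is None else seen
    if role not in seen:
        seen.add(role)
        for parent in role_parents.get(role, ()):
            effective_roles(parent, seen)
    return seen

def authorized(user, privilege, obj):
    roles = set()
    for r in user_roles.get(user, ()):
        roles |= effective_roles(r)
    return any((privilege, obj) in role_privs.get(r, ()) for r in roles)

print(authorized("alice", "UPDATE", "EMPLOYEE"))  # True, via manager
print(authorized("bob", "UPDATE", "EMPLOYEE"))    # False, clerk only
print(authorized("bob", "SELECT", "EMPLOYEE"))    # True
```

Promoting a user is then a single change to user_roles rather than a per-privilege update, which is exactly the administrative simplification the paragraph describes.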

Conclusion
With the extensive use of database systems nowadays, anyone can become the victim of a database crime, and a single incident can have serious consequences for individuals or for public affairs. Because of that, database developers keep creating new techniques to prevent unauthorized, unanticipated, or unintentional disclosure of data. Yet no matter how good a security measure or technique is, database administrators still play a vital role in database security. In addition to user account management, the database administrator also contributes to developing security policy and to enforcing the security-related aspects of a database design. At the same time, the advanced algorithms and technologies used to increase database security raise challenges for both developers and administrators: as databases with inference control, access control, encryption, and so on grow more complicated, DBAs will need ever more knowledge to remain qualified.