Report on Linux


Chapter 1: Introduction to Linux

1.1. What is Linux?

Linux is a generic term referring to the family of Unix-like computer operating systems that use the Linux kernel. Their development is one of the most prominent examples of free and open source software collaboration; typically all the underlying source code can be used, freely modified, and redistributed, both commercially and non-commercially, by anyone under licenses such as the GNU General Public License.

Linux can be installed on a wide variety of computer hardware, ranging from mobile phones, tablet computers and video game consoles, to mainframes and supercomputers. Linux is predominantly known for its use in servers; as of 2009 it has a server market share ranging between 20–40%. Most desktop computers run either Microsoft Windows or Mac OS X, with Linux having only 1–2% of the desktop market. However, desktop use of Linux has become increasingly popular in recent years, partly owing to the popular Ubuntu, Fedora, Mint, and openSUSE distributions and the emergence of netbooks and smartphones running an embedded Linux.

Typically Linux is packaged in a format known as a Linux distribution for desktop and server use. Linux distributions include the Linux kernel and all of the supporting software required to run a complete system, such as utilities and libraries, the X Window System, the GNOME and KDE desktop environments, and the Apache HTTP Server. Commonly used applications with desktop Linux systems include the Mozilla Firefox web browser, the OpenOffice.org office application suite and the GIMP image editor.

The name "Linux" comes from the Linux kernel, originally written in 1991 by Linus Torvalds. The main supporting userland, in the form of system tools and libraries from the GNU Project (announced in 1983 by Richard Stallman), is the basis for the Free Software Foundation's preferred name GNU/Linux. (www.wikipedia.org)

1.2. What is Red Hat Linux?
Red Hat Linux, assembled by the company Red Hat, was a popular Linux-based operating system until its discontinuation in 2004. Red Hat Linux 1.0 was released on November 3, 1994. It was originally called "Red Hat Commercial Linux". It was the first Linux distribution to use the RPM Package Manager as its packaging format, and over time has served as the starting point for several other distributions, such as Mandriva Linux and Yellow Dog Linux. Since 2003, Red Hat has discontinued the Red Hat Linux line in favor of Red Hat Enterprise Linux (RHEL) for enterprise environments.


1.3. Version history:

Box cover shot of Red Hat Linux 5.2

Release dates are drawn from announcements on comp.os.linux.announce. Version names were chosen so as to be cognitively related to the prior release, yet not related in the same way as the release before that.

• 1.0 (Mother's Day), November 3, 1994 (Linux 1.2.8)
• 1.1 (Mother's Day+0.1), August 1, 1995 (Linux 1.2.11)
• 2.0, September 20, 1995 (Linux 1.2.13-2)
• 2.1, November 23, 1995 (Linux 1.2.13)
• 3.0.3 (Picasso), May 1, 1996 - first release supporting DEC Alpha
• 4.0 (Colgate), October 3, 1996 (Linux 2.0.18) - first release supporting SPARC
• 4.1 (Vanderbilt), February 3, 1997 (Linux 2.0.27)
• 4.2 (Biltmore), May 19, 1997 (Linux 2.0.30-2)
• 5.0 (Hurricane), December 1, 1997 (Linux 2.0.32-2)
• 5.1 (Manhattan), May 22, 1998 (Linux 2.0.34-0.6)
• 5.2 (Apollo), November 2, 1998 (Linux 2.0.36-0.7)
• 6.0 (Hedwig), April 26, 1999 (Linux 2.2.5-15)
• 6.1 (Cartman), October 4, 1999 (Linux 2.2.12-20)
• 6.2 (Zoot), April 3, 2000 (Linux 2.2.14-5.0)
• 7 (Guinness), September 25, 2000 (this release is labeled "7", not "7.0") (Linux 2.2.16-22)
• 7.1 (Seawolf), April 16, 2001 (Linux 2.4.2-2)
• 7.2 (Enigma), October 22, 2001 (Linux 2.4.7-10, Linux 2.4.9-21smp)
• 7.3 (Valhalla), May 6, 2002 (Linux 2.4.18-3)
• 8.0 (Psyche), September 30, 2002 (Linux 2.4.18-14)
• 9 (Shrike), March 31, 2003 (Linux 2.4.20-8) (this release is labeled "9", not "9.0")

The Fedora and Red Hat Projects were merged on September 22, 2003. (www.wikipedia.org)

1.4. What is Red Hat Enterprise Linux?

Red Hat Enterprise Linux (RHEL) is a Linux distribution produced by Red Hat and targeted toward the commercial market, including mainframes. Red Hat Enterprise Linux is released in server versions for x86, x86-64, Itanium, PowerPC and IBM System z, and desktop versions for x86 and x86-64. All of Red Hat's official support and training, and the Red Hat Certification Program, center around the Red Hat Enterprise Linux platform. Red Hat Enterprise Linux is often abbreviated to RHEL, although this is not an official designation.

Although Red Hat claims to supply major releases every 18 to 24 months, over 36 months elapsed after the first release of Red Hat Enterprise Linux 5. However, Red Hat vice president of platform engineering Tim Burke confirmed that the beta version of Red Hat Enterprise Linux 6 would become available during April 2010, with further release announcements coming at the Red Hat Summit in June 2010. A public beta was released on April 21, 2010. When Red Hat releases a new version of Red Hat Enterprise Linux, customers may upgrade to the new version at no additional charge as long as they are in possession of a current subscription (i.e., the subscription term has not yet lapsed).

Red Hat's first enterprise offering (Red Hat Linux 6.2E) essentially consisted of a version of Red Hat Linux 6.2 with different support levels, and without separate engineering. The first version of Red Hat Enterprise Linux to bear the name originally came onto the market as "Red Hat Linux Advanced Server". In 2003 Red Hat rebranded Red Hat Linux Advanced Server to "Red Hat Enterprise Linux AS", and added two more variants, Red Hat Enterprise Linux ES and Red Hat Enterprise Linux WS.

Verbatim copying and redistribution of the entire Red Hat Enterprise Linux distribution is not permitted due to trademark restrictions. However, there are several redistributions of Red Hat Enterprise Linux—such as CentOS—with trademarked features (such as logos and the Red Hat name) removed.

1.5.1 Kernel

A kernel connects the application software to the hardware of a computer. In computing, the kernel is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. Operating system tasks are done differently by different kernels, depending on their design and implementation. (www.wikipedia.org)

1.5.2 Linux kernel

The Linux kernel is an operating system kernel used by the Linux family of Unix-like systems. It is one of the most prominent examples of free and open source software. The Linux kernel is released under the GNU General Public License version 2 (GPLv2) (plus some firmware images with various licenses), and is developed by contributors worldwide. Day-to-day development takes place on the Linux kernel mailing list.

The Linux kernel was initially conceived and created by Finnish computer science student Linus Torvalds in 1991. Linux rapidly accumulated developers and users who adopted code from other free software projects for use with the new operating system. The Linux kernel has received contributions from thousands of programmers, and many Linux distributions have been released based upon it.

1.6.1 What is Computer Security?
Computer security is a general term that covers a wide area of computing and information processing. Industries that depend on computer systems and networks to conduct daily business transactions and access crucial information regard their data as an important part of their overall assets. Several terms and metrics have entered our daily business vocabulary, such as total cost of ownership (TCO) and quality of service (QoS). In these metrics, industries calculate aspects such as data integrity and high availability as part of their planning and process management costs. In some industries, such as electronic commerce, the availability and trustworthiness of data can be the difference between success and failure. (www.cert.org/tech_tips/home_networks.html)

1.6.2 Security Controls

Computer security is often divided into three distinct master categories, commonly referred to as controls:

• Physical
• Technical
• Administrative

These three broad categories define the main objectives of proper security implementation. Within these controls are sub-categories that further detail the controls and how to implement them.

1.6.3 Physical Controls

Physical control is the implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material. Examples of physical controls are:

• Closed-circuit surveillance cameras
• Motion or thermal alarm systems
• Security guards
• Picture IDs
• Locked and dead-bolted steel doors
• Biometrics (includes fingerprint, voice, face, iris, handwriting, and other automated methods used to recognize individuals)

1.6.4 Technical Controls

Technical controls use technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as:

• Encryption
• Smart cards
• Network authentication
• Access control lists (ACLs)
• File integrity auditing software

1.6.5 Administrative Controls

Administrative controls define the human factors of security. They involve all levels of personnel within an organization and determine which users have access to what resources and information by such means as:


• Training and awareness
• Disaster preparedness and recovery plans
• Personnel recruitment and separation strategies
• Personnel registration and accounting

1.7.1 What is open source?

1.7.2 Software is free if it satisfies the four freedoms


1.8. Feature, Function and Benefit of Redhat Enterprise Linux 5:


(www.redhat.com/training/offices.html)

Chapter 2: Red Hat Enterprise Linux 5 and Printer Installation

2.1. Installation of Red Hat Enterprise Linux 5:

We need to boot from the Red Hat DVD.


After booting, hit Enter to install using the graphical mode. Press [Enter] to begin the installation. If we wish to abort the installation process at this time, we can simply eject the boot diskette now and reboot the machine. Anaconda will start.

Next, the GUI interface will pop up and we can begin the installation setup. Click Next at the first screen.



Select the Language you want the system to use by default.

The next screen will display a popup asking for our installation number. If we have purchased Red Hat Enterprise Linux we should have an installation number; if not, simply skip and use evaluation mode.

Setting up our disk partitions can vary depending on our needs; if we like, we can select a RAID setup or customize the partition layout.


Checking the Review and modify box allows us to edit the current layout. We have already set up a simple 20GB hard disk within VMware for this guide.

The next screen will install a boot loader. Leave the default for GRUB to be installed on our new disk. For added security we can protect the boot loader with a password (recommended).


Next is our network settings; this can be a static IP address or a dynamic address assigned by a DHCP server on the network or a router. The IP address we use is an internal static IP that uses our router for DNS and the gateway. The host name can be set using DHCP, DNS, or by editing the /etc/hosts file.

Select the time zone for our locale.



Set the root password for the system. Then select the software applications we want to install as part of the system. We selected the two available packages and checked the Customize now box.



Customize the packages we want to install based on category.


Once started, it will check for package dependencies.

Now we can begin the installation of Red Hat 5.



Formatting the file system.

Installing packages.


Reboot to complete the installation.



Setup: the next steps will set up and customize the system.

Accept the license agreement.

Configure the firewall to allow services such as HTTP or SSH.


Enable or disable SELinux.

Set up the date and time.



Register at Red Hat for updates.

Create a new user for the system.

Set up and test the audio settings.


Insert any additional CDs for software.

Log in as root, and we can use our new Red Hat system.

Runlevels: Before we can configure access to services, we must understand Linux runlevels. A runlevel is a state, or mode, that is defined by the services listed in the directory /etc/rc.d/rc<x>.d, where <x> is the number of the runlevel. The following runlevels exist:

• 0 — Halt
• 1 — Single-user mode
• 2 — Not used (user-definable)
• 3 — Full multi-user mode
• 4 — Not used (user-definable)
• 5 — Full multi-user mode (with an X-based login screen)


• 6 — Reboot

If we use a text login screen, we are operating in runlevel 3. If we use a graphical login screen, we are operating in runlevel 5. The default runlevel can be changed by modifying the /etc/inittab file, which contains a line near the top of the file similar to the following:

id:5:initdefault:

Change the number in this line to the desired runlevel. The change does not take effect until we reboot the system.

2.2. Printer Setup in Red Hat Enterprise Linux

Red Hat Enterprise Linux 5 uses the Common Unix Printing System (CUPS). If a system was upgraded from a previous Red Hat Enterprise Linux version that used CUPS, the upgrade process preserves the configured queues. Using the Printer Configuration Tool requires root privileges. To start the application, select System (on the panel) => Administration => Printing, or type the command system-config-printer at a shell prompt.
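The /etc/inittab edit described in the runlevels discussion above amounts to a one-character change on the initdefault line. A minimal sketch, operating on a local copy (the file name inittab.sample is our own) rather than the real /etc/inittab:

```shell
# Create a sample initdefault line, then switch the default runlevel from 5 to 3
printf 'id:5:initdefault:\n' > inittab.sample
sed -i 's/^id:5:initdefault:$/id:3:initdefault:/' inittab.sample
cat inittab.sample   # prints: id:3:initdefault:
```

On a real system the same sed expression would be applied to /etc/inittab as root, and the new default runlevel takes effect at the next boot.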

Figure 2-1: Printer Configuration Tool

The following types of print queues can be configured:


o AppSocket/HP JetDirect — a printer connected directly to the network through an HP JetDirect or AppSocket interface instead of a computer.

o Internet Printing Protocol (IPP) — a printer that can be accessed over a TCP/IP network via the Internet Printing Protocol (for example, a printer attached to another Red Hat Enterprise Linux system running CUPS on the network).

o LPD/LPR Host or Printer — a printer attached to a different UNIX system that can be accessed over a TCP/IP network (for example, a printer attached to another Red Hat Enterprise Linux system running LPD on the network).

o Networked Windows (SMB) — a printer attached to a different system which is sharing a printer over an SMB network (for example, a printer attached to a Microsoft Windows™ machine).

o Networked JetDirect — a printer connected directly to the network through HP JetDirect instead of a computer.

Important: If we add a new print queue or modify an existing one, we must apply the changes for them to take effect. Clicking the Apply button prompts the printer daemon to restart with the changes we have configured. Clicking the Revert button discards unapplied changes.

2.2.1. Adding a Local Printer

To add a local printer, such as one attached through a parallel port or USB port on our computer, click the New Printer button in the main Printer Configuration Tool window to display the window.


Figure 2-2: Adding a Printer

Click Forward to proceed. Enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. We can also use the Description and Location fields to further distinguish this printer from others that may be configured on the system. Both of these fields are optional, and may contain spaces.

Click Forward to open the New Printer dialogue. If the printer has been automatically detected, the printer model appears in Select Connection. Select the printer model and click Forward to continue. If the device does not automatically appear, select the device to which the printer is connected (such as LPT #1 or Serial Port #1) in Select Connection.


Figure 2-3: Selecting the Printer Model and Finishing

Once we have properly selected a printer queue type, we can choose either option:

o Select a Printer from database - If we select this option, we choose the make of our printer from the list of makes. If our printer make is not listed, we choose Generic.

o Provide PPD file - A PostScript Printer Description (PPD) file may also be provided with the printer; this file is normally provided by the manufacturer. If we are provided with a PPD file, we can choose this option and use the browser bar below the option description to select the PPD file.


Figure 2-4: Selecting a Printer Model

After choosing an option, click Forward to continue. We now have to choose the corresponding model and driver for the printer. The recommended printer driver is automatically selected based on the printer model we chose. The print driver processes the data that we want to print into a format the printer can understand. Since a local printer is attached directly to the computer, we need a printer driver to process the data that is sent to the printer.

If we have a PPD file for the device (usually provided by the manufacturer), we can select it by choosing Provide PPD file. We can then browse the file system for the PPD file by clicking Browse.

2.2.2. Confirming Printer Configuration

The last step is to confirm our printer configuration. Click Apply to add the print queue if the settings are correct. Click Back to modify the printer configuration. After applying the changes, print a test page to ensure the configuration is correct.


After configuring the printer successfully, we have to apply two commands:

#service cups restart
#chkconfig cups on

2.2.3 Printing a Test Page

After we have configured our printer, we should print a test page to make sure the printer is functioning properly. To print a test page, select the printer that we want to try out from the printer list, then click Print Test Page from the printer's Settings tab. If we change the print driver or modify the driver options, we should print a test page to test the different configuration.

Chapter 3: User Account, Group and Permission

3.1. User Accounts, Groups, and Permissions

Under Red Hat Enterprise Linux, a user can log into the system and use any applications or files they are permitted to access after a normal user account is created. Red Hat Enterprise Linux determines whether or not a user or group can access these resources based on the permissions assigned to them.

There are three different permissions for files, directories, and applications. These permissions are used to control the kinds of access allowed. Different one-character symbols are used to describe each permission in a directory listing. The following symbols are used:

r — Indicates that a given category of user can read a file.
w — Indicates that a given category of user can write to a file.
x — Indicates that a given category of user can execute the contents of a file.

A fourth symbol (-) indicates that no access is permitted. Each of the three permissions is assigned to three different categories of users. The categories are:

• owner — The owner of the file or application.
• group — The group that owns the file or application.
• everyone — All users with access to the system.


As stated earlier, it is possible to view the permissions for a file by invoking a long format listing with the command ls -l. For example, if the user juan creates an executable file named foo, the output of the command ls -l foo would appear like this:

-rwxrwxr-x 1 juan juan 0 Sep 26 12:25 foo

The permissions for this file are listed at the start of the line, beginning with rwx. This first set of symbols defines owner access — in this example, the owner juan has full access, and may read, write, and execute the file. The next set of rwx symbols defines group access (again, with full access), while the last set of symbols defines the types of access permitted for all other users. Here, all other users may read and execute the file, but may not modify it in any way.

One important point to keep in mind regarding permissions and user accounts is that every application run on Red Hat Enterprise Linux runs in the context of a specific user. Typically, this means that if user juan launches an application, the application runs using user juan's context. However, in some cases the application may need a more privileged level of access in order to accomplish a task. Such applications include those that edit system settings or log in users. For this reason, special permissions have been created. There are three such special permissions within Red Hat Enterprise Linux. They are:

• setuid — used only for applications, this permission indicates that the application is to run as the owner of the file and not as the user executing the application. It is indicated by the character s in place of the x in the owner category. If the owner of the file does not have execute permissions, the S is capitalized to reflect this fact.

• setgid — used primarily for applications, this permission indicates that the application is to run as the group owning the file and not as the group of the user executing the application. If applied to a directory, all files created within the directory are owned by the group owning the directory, and not by the group of the user creating the file. The setgid permission is indicated by the character s in place of the x in the group category. If the group owner of the file or directory does not have execute permissions, the S is capitalized to reflect this fact.

• sticky bit — used primarily on directories, this bit dictates that a file created in the directory can be removed only by the user that created the file. It is indicated by the character t in place of the x in the everyone category. If the everyone category does not have execute permissions, the T is capitalized to reflect this fact. Under Red Hat Enterprise Linux, the sticky bit is set by default on the /tmp/ directory for exactly this reason.
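The symbolic permission strings described above can be reproduced with chmod and inspected with stat. A minimal sketch using throwaway file names (assumes GNU stat, as shipped on Red Hat systems):

```shell
# Recreate the rwxrwxr-x permissions from the juan/foo listing
touch foo
chmod 775 foo
stat -c '%A' foo        # prints -rwxrwxr-x

# setuid: s replaces x in the owner category
chmod 4755 foo
stat -c '%A' foo        # prints -rwsr-xr-x

# sticky bit on a directory: t replaces x in the everyone category, as on /tmp
mkdir shared
chmod 1777 shared
stat -c '%A' shared     # prints drwxrwxrwt
```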

3.2. Usernames and UIDs, Groups and GIDs:

In Red Hat Enterprise Linux, user account and group names are primarily for people's convenience. Internally, the system uses numeric identifiers. For users, this identifier is known as a UID, while for groups the identifier is known as a GID. Programs that make user or group information available to users translate the UID/GID values into their more human-readable counterparts.

UIDs and GIDs must be globally unique within your organization if you intend to share files and resources over a network. Otherwise, whatever access controls you put in place may fail to work properly, as they are based on UIDs and GIDs, not usernames and group names. Specifically, if the /etc/passwd and /etc/group files on a file server and a user's workstation differ in the UIDs or GIDs they contain, improper application of permissions can lead to security issues. For example, if user juan has a UID of 500 on a desktop computer, files juan creates on a file server will be created with owner UID 500. However, if user bob logs in locally to the file server (or even some other computer), and bob's account also has a UID of 500, bob will have full access to juan's files, and vice versa. Therefore, UID and GID collisions are to be avoided at all costs.

There are two instances where the actual numeric value of a UID or GID has any specific meaning. A UID and GID of zero (0) are used for the root user, and are treated specially by Red Hat Enterprise Linux — all access is automatically granted. The second instance is that UIDs and GIDs below 500 are reserved for system use. Unlike UID/GID zero (0), UIDs and GIDs below 500 are not treated specially by Red Hat Enterprise Linux. However, these UIDs/GIDs are never to be assigned to a user, as it is likely that some system component either currently uses or will use these UIDs/GIDs at some point in the future.

When new user accounts are added using the standard Red Hat Enterprise Linux user creation tools, the new user accounts are assigned the first available UID and GID starting at 500. The next new user account is assigned UID/GID 501, followed by UID/GID 502, and so on. A brief overview of the various user creation tools available under Red Hat Enterprise Linux occurs later in this chapter.
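The id command performs this translation in the other direction, reporting the numeric identifiers behind a username; root always maps to UID and GID 0:

```shell
# Look up the numeric UID and primary GID for a username
id -u root   # prints 0
id -g root   # prints 0
```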
But before reviewing these tools, the next section reviews the files Red Hat Enterprise Linux uses to define system accounts and groups.

3.3. Files Controlling User Accounts and Groups

On Red Hat Enterprise Linux, information about user accounts and groups is stored in several text files within the /etc/ directory. When a system administrator creates new user accounts, these files must either be edited manually or applications must be used to make the necessary changes. The following section documents the files in the /etc/ directory that store user and group information under Red Hat Enterprise Linux.

/etc/passwd


The /etc/passwd file is world-readable and contains a list of users, each on a separate line. On each line is a colon-delimited list containing the following information:

• Username — The name the user types when logging into the system.

• Password — Contains the encrypted password (or an x if shadow passwords are being used — more on this later).

• User ID (UID) — The numerical equivalent of the username which is referenced by the system and applications when determining access privileges.

• Group ID (GID) — The numerical equivalent of the primary group name which is referenced by the system and applications when determining access privileges.

• GECOS — Named for historical reasons, the GECOS field is optional and is used to store extra information (such as the user's full name). Multiple entries can be stored here in a comma-delimited list. Utilities such as finger access this field to provide additional user information.

• Home directory — The absolute path to the user's home directory, such as /home/juan/.

• Shell — The program automatically launched whenever a user logs in. This is usually a command interpreter (often called a shell). Under Red Hat Enterprise Linux, the default value is /bin/bash. If this field is left blank, /bin/sh is used. If it is set to a nonexistent file, then the user will be unable to log into the system.

Here is an example of an /etc/passwd entry:

root:x:0:0:root:/root:/bin/bash

This line shows that the root user has a shadow password, as well as a UID and GID of 0. The root user has /root/ as a home directory, and uses /bin/bash for a shell.
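Because each entry is a plain colon-delimited record, its fields are easy to pull apart with standard tools. A small sketch using the example entry above (the awk field numbers follow the field order just listed):

```shell
# Split the example /etc/passwd entry into selected fields
entry='root:x:0:0:root:/root:/bin/bash'
echo "$entry" | awk -F: '{ printf "user=%s uid=%s shell=%s\n", $1, $3, $7 }'
# prints: user=root uid=0 shell=/bin/bash
```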

3.4. /etc/shadow

Because the /etc/passwd file must be world-readable (the main reason being that this file is used to perform the translation from UID to username), there is a risk involved in storing everyone's password in /etc/passwd. True, the passwords are encrypted. However, it is possible to perform attacks against passwords if the encrypted password is available. If a copy of /etc/passwd can be obtained by an attacker, an attack that can be carried out in secret becomes possible. Instead of risking detection by having to attempt an actual login with every potential password generated by a password-cracker, an attacker can use a password cracker in the following manner:

• A password-cracker generates potential passwords.

• Each potential password is then encrypted using the same algorithm as the system.

• The encrypted potential password is then compared against the encrypted passwords in /etc/passwd.
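The comparison loop just described can be sketched with openssl passwd, which implements the same MD5-based crypt scheme used by older shadow password files. The "stolen" hash, the salt, and the guess list here are all made up for illustration:

```shell
# Hash the "stolen" password once (a fixed salt keeps the output deterministic),
# then hash each candidate guess the same way and compare the results.
stolen=$(openssl passwd -1 -salt xy secret123)
for guess in password letmein secret123; do
    [ "$(openssl passwd -1 -salt xy "$guess")" = "$stolen" ] && echo "cracked: $guess" || true
done
# prints: cracked: secret123
```

Note that no login attempt is ever made; this is exactly why the attack can run offline on fast hardware.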


The most dangerous aspect of this attack is that it can take place on a system far removed from your organization. Because of this, the attacker can use the highest-performance hardware available, making it possible to go through massive numbers of passwords very quickly.

Therefore, the /etc/shadow file is readable only by the root user and contains the password (and optional password aging information) for each user. As in the /etc/passwd file, each user's information is on a separate line. Each of these lines is a colon-delimited list including the following information:

• Username — The name the user types when logging into the system. This allows the login application to retrieve the user's password (and related information).

• Encrypted password — The 13 to 24 character password. The password is encrypted using either the crypt(3) library function or the md5 hash algorithm. In this field, values other than a validly-formatted encrypted or hashed password are used to control user logins and to show the password status. For example, if the value is ! or *, the account is locked and the user is not allowed to log in. If the value is !!, a password has never been set before (and the user, not having set a password, will not be able to log in).

• Date password last changed — The number of days since January 1, 1970 (also called the epoch) that the password was last changed. This information is used in conjunction with the password aging fields that follow.

• Number of days before password can be changed — The minimum number of days that must pass before the password can be changed.

• Number of days before a password change is required — The number of days that must pass before the password must be changed.

• Number of days warning before password change — The number of days before password expiration during which the user is warned of the impending expiration.

• Number of days before the account is disabled — The number of days after a password expires before the account will be disabled.

• Date since the account has been disabled — The date (stored as the number of days since the epoch) since the user account has been disabled.

• A reserved field — A field that is ignored in Red Hat Enterprise Linux.

3.5.1 Steps for adding users and groups and removing the login shell:

• To add a user we write adduser (username).
• Then we have to set a password; for this we write passwd (username).
• A root user can remove the login shell of any user by applying the command chsh (username) and then entering /sbin/nologin.
• Multiple users can be added to any group with the command gpasswd -M username1,username2 groupname.


User add, group add and login shell removal

Figure 3-1: In the above code, users khalid, ruhul and neo are added, and the group netshared is added. User neo has no login shell. Users ruhul and khalid are secondary members of the group netshared.

• We write the command cat /etc/group to see group names and group members.


Figure 3-2: In the figure, users khalid and ruhul are members of the group netshared.

3.5.2 User Account Expiry and User Password Lock

• We write the command chage -l username to see the user account status.
• We give the command chage -E yyyy/mm/dd username to set the user account expiration date.
• To remove the account expiration date we write chage -E -1 username.
• We can lock the password of any user with the command passwd -l username.
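Behind the scenes, chage stores these dates in /etc/shadow as a count of days since the epoch (January 1, 1970), as described in section 3.4. The conversion can be sketched with GNU date; the helper name date_to_days is our own:

```shell
# Convert a calendar date to the day count used in /etc/shadow
date_to_days() { echo $(( $(date -u -d "$1" +%s) / 86400 )); }
date_to_days 2010-05-07   # prints 14736
```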



Figure 3-3: In the above code we have seen the account status of user khalid and set the account expiration date to 05/07/2010. We have also locked the password of khalid and afterwards unlocked it.

3.6. Access Control Lists:

Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users of the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented.

The Red Hat Enterprise Linux 5 kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories.

3.6.1 Why ACLs are used: An ACL is used to give special permission to a particular user. Even when a file or directory grants no read, write or execute permission to others, a specific user can still read, write or execute that file or directory through an ACL.
3.6.2 Steps for setting up an ACL:
1. First we make a partition. (The partitioning process is shown in chapter 4.)
2. Then a directory is created under the root (mkdir /exports).
3. We have to give an entry in the fstab file for mounting the new partition (vi /etc/fstab).
4. Then a file (fstab) is copied to the new directory (cp /etc/fstab /exports) and we withdraw the execute permission of this file (chmod -x fstab).
5. Then special read/write permission is given to user khalid with the command (setfacl -m u:khalid:rw /exports/fstab).
6. The ACL status is seen with the command (getfacl /exports/fstab).
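The six steps above (partitioning aside) can be sketched as a dry-run script; `run` only prints each command, so remove the echo to execute as root on a filesystem mounted with ACL support.

```shell
# Dry-run sketch of the ACL steps in 3.6.2.
run() { echo "+ $*"; }

run mkdir /exports                          # directory for the new partition
run cp /etc/fstab /exports                  # copy a test file into it
run chmod -x /exports/fstab                 # withdraw the execute permission
run setfacl -m u:khalid:rw /exports/fstab   # grant khalid read/write via ACL
run getfacl /exports/fstab                  # inspect the resulting ACL
```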


Figure 3-4: In the above code a new directory exports is created and the fstab file is copied to directory exports. Then Execution permission of the directory is withdrawn.

Figure 3-5: Partition 15 is mounted under exports and ACL support is set on the partition.


Figure 3-6: ACL status showing no others permission, but user khalid has read and write permission

Chapter 4: Disk management and Data Security Logical Volume Manager (LVM) 4.1. What is LVM? LVM is a method of allocating hard drive space into logical volumes that can be easily resized instead of partitions. With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. A physical volume cannot span over more than one drive. The physical volumes are combined into logical volume groups, with the exception of the /boot/ partition. The /boot/ partition cannot be on a logical volume group because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group. Since a physical volume cannot span over multiple drives, to span over more than one drive, create one or more physical volumes per drive.


Figure 4-1. Logical Volume Group The logical volume group is divided into logical volumes, which are assigned mount points, such as /home and /, and file system types, such as ext2 or ext3. When "partitions" reach their full capacity, free space from the logical volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the logical volume group, and partitions that are logical volumes can be expanded.

Figure 4-2. Logical Volumes On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to another hard drive, the original hard drive space has to be reallocated as a different partition or not used.


LVM support must be compiled into the kernel, and the default Red Hat kernel is compiled with LVM support. (www.redhat.com)

4.2. LVM Configuration
LVM can be configured during the graphical installation process, the text-based installation process, or during a kickstart installation. You can use the utilities from the lvm package to create your own LVM configuration post-installation, but these instructions focus on using Disk Druid during installation to complete this task. An overview of the general steps required to configure LVM:

• Creating physical volumes from the hard drives.

• Creating volume groups from the physical volumes.

• Creating logical volumes from the volume groups and assigning the logical volumes mount points.

4.3. Automatic Partitioning On the Disk Partitioning Setup screen, select Automatically partition. For Red Hat Enterprise Linux, LVM is the default method for disk partitioning. If you do not wish to have LVM implemented, or if you require RAID partitioning, manual disk partitioning through Disk Druid is required. The following properties make up the automatically created configuration:

• The /boot/ partition resides on its own non-LVM partition. In the following example, it is the first partition on the first drive (/dev/sda1). Bootable partitions cannot reside on LVM logical volumes.

• A single LVM volume group (VolGroup00) is created, which spans all selected drives and all remaining space available. In the following example, the remainder of the first drive (/dev/sda2) and the entire second drive (/dev/sdb1) are allocated to the volume group.

• Two LVM logical volumes (LogVol00 and LogVol01) are created from the newly created spanned volume group. In the following example, the recommended swap space is automatically calculated and assigned to LogVol01, and the remainder is allocated to the root file system, LogVol00.


(www.redhat.com)

Figure 4-3. Automatic LVM Configuration With Two SCSI Drives

4.4. Steps for configuring LVM:
• In the first step we make a partition of 500MB (fdisk /dev/sda)
• Then we have to change the file system type of the new partition (for LVM the partition id is 8e)
• Then we create the physical volume on the new partition with the command (pvcreate /dev/sda11)
• A volume group is created with the command (vgcreate newvg /dev/sda11)
• After that we create a logical volume of 100MB (lvcreate -L 100M -n newlv newvg)
• We have to give an entry in the fstab by opening it (vi /etc/fstab)
• The entry will look like this: (/dev/newvg/newlv /data ext3 defaults 0 0)
• For extending the LVM we give the command (lvextend -L +100M /dev/newvg/newlv)
• Before reducing the LVM we have to unmount the partition listed in fstab.
• For reducing the LVM we give the command (lvreduce -L 150 /dev/newvg/newlv)

4.4.1 Creating Partition and Changing Partition ID:


Figure 4-4: Making a new partition and changing its partition id to the LVM type.

4.4.2 Creating volume group and logical volume:


Figure 4-5: Creating a 500MB physical volume and a 100MB logical volume

4.4.3 Logical Volume Extension & Reduction:


Figure 4-6: Extending the logical volume by 100MB and reducing the logical volume to 150MB
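The whole LVM workflow of section 4.4 can be collected into one script. This is a dry-run sketch: `run` only prints each command (remove the echo to execute as root), and the mkfs.ext3 and resize2fs steps are assumptions not shown in the report's figures — resizing the ext3 filesystem to match the logical volume is required in practice, and a safe shrink resizes the filesystem down before lvreduce.

```shell
# Dry-run sketch of the LVM steps in section 4.4.
# /dev/sda11 is the 8e-type partition created with fdisk.
run() { echo "+ $*"; }

run pvcreate /dev/sda11                      # initialise the partition as a physical volume
run vgcreate newvg /dev/sda11                # build the volume group newvg on it
run lvcreate -L 100M -n newlv newvg          # carve out a 100MB logical volume
run mkfs.ext3 /dev/newvg/newlv               # put an ext3 filesystem on it (assumed step)
run mount /dev/newvg/newlv /data             # mount it (the fstab entry makes this permanent)
run lvextend -L +100M /dev/newvg/newlv       # grow the LV by 100MB
run resize2fs /dev/newvg/newlv               # grow the filesystem to match (assumed step)
run umount /data                             # unmount before shrinking
run lvreduce -L 150 /dev/newvg/newlv         # shrink the LV to 150MB
```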

4.4.4 Mounting Logical volume and Entry to the fstab:


Figure 4-7: Mounting LVM in the data directory

4.5 RAID (Redundant Array of Independent Disks)
4.5.1 What is RAID? The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive. RAID is a method in which information is spread across several disks. RAID uses techniques such as disk striping (RAID level 0), disk mirroring (RAID level 1), and disk striping with parity (RAID level 5) to achieve redundancy, lower latency and/or to increase bandwidth for reading or writing to disks, and to maximize the ability to recover from hard disk crashes. The underlying concept of RAID is that data may be distributed across each drive in the array in a consistent manner. To do this, the data must first be broken into consistently-sized chunks (often 32K or 64K in size, although different sizes can be used). Each chunk is then written to a hard drive in the RAID array according to the RAID level used. When the data is to be read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.
4.5.2 Who Should Use RAID? Those who need to keep large quantities of data on hand (such as system administrators) would benefit by using RAID technology. Primary reasons to use RAID include:

• Enhanced speed

• Increased storage capacity using a single virtual disk

• Lessened impact of a disk failure

4.5.3 Hardware RAID versus Software RAID There are two possible RAID approaches: Hardware RAID and Software RAID. 4.5.4 Hardware RAID The hardware-based array manages the RAID subsystem independently from the host and presents to the host only a single disk per RAID array. An example of a Hardware RAID device would be one that connects to a SCSI controller and presents the RAID arrays as a single SCSI drive. An external RAID system moves all RAID handling "intelligence" into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk. RAID controllers also come in the form of cards that act like a SCSI controller to the operating system but handle all of the actual drive communications themselves. In these cases, you plug the drives into the RAID controller just like you would a SCSI controller, but then you add them to the RAID controller's configuration, and the operating system never knows the difference. 4.5.5 Software RAID Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required. Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's fast CPUs, Software RAID performance can excel against Hardware RAID. The MD driver in the Linux kernel is an example of a RAID solution that is completely hardware independent. The performance of a software-based array is dependent on the server CPU performance and load. For those interested in learning more about what Software RAID has to offer, here are the most important features:

• Threaded rebuild process

• Kernel-based configuration

• Portability of arrays between Linux machines without reconstruction

• Backgrounded array reconstruction using idle system resources

• Hot-swappable drive support

• Automatic CPU detection to take advantage of certain CPU optimizations


4.5.6 RAID Levels and Linear Support RAID supports various configurations, including levels 0, 1, 4, 5, and linear. These RAID types are defined as follows:

• Level 0 — RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy. The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of member partitions in a Software RAID.

• Level 1 — RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data transfer rates when reading but more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. The storage capacity of the level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID.

• Level 4 — Level 4 uses parity concentrated on a single disk drive to protect data. It is better suited to transaction I/O rather than large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Enterprise Linux RAID installations. The storage capacity of Hardware RAID level 4 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 4 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.

• Level 5 — This is the most common type of RAID. By distributing parity across some or all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process. With modern CPUs and Software RAID, that usually is not a very big problem. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. The storage capacity of Hardware RAID level 5 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 5 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.


• Linear RAID — Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and, in fact, decreases reliability — if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks.

[1] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array, which wastes drive space. For example, if you have RAID level 1 set up so that your root (/) partition exists on two 40G drives, you have 80G total but are only able to access 40G of that 80G. The other 40G acts like a mirror of the first 40G.
[2] Parity information is calculated based on the contents of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced.
[3] RAID level 4 takes up the same amount of space as RAID level 5, but level 5 has more advantages. For this reason, level 4 is not supported. (www.redhat.com)

4.5.7 Steps for Configuring RAID:
• In the first step we have to make two partitions and change their partition id (for RAID the partition id is fd). Here we have created partitions 12 and 13.
• For configuring RAID with partitions 12 and 13 we give the following command: mdadm -C /dev/md0 -l 1 -n 2 /dev/sda{12,13}, where (-l 1) is the RAID level and (-n 2) is the number of partitions.
• After that we have to mount the RAID under a directory. Here the RAID is mounted in the directory base.
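The RAID steps above can be sketched as a dry-run script; `run` only prints each command (remove the echo to execute as root), and the mkfs.ext3, mkdir and status-check steps are assumptions added to make the sequence complete.

```shell
# Dry-run sketch of the RAID-1 steps in 4.5.7.
run() { echo "+ $*"; }

run mdadm -C /dev/md0 -l 1 -n 2 /dev/sda12 /dev/sda13   # mirror the two fd-type partitions
run mkfs.ext3 /dev/md0                                   # filesystem on the array (assumed step)
run mkdir /base                                          # mount point (assumed step)
run mount /dev/md0 /base                                 # mount the array in the directory base
run mdadm --detail /dev/md0                              # check the array status (assumed step)
```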


Figure 4-8: Making partitions 12 and 13 for RAID and combining them into RAID level 1


Figure 4-9: Mounting RAID in the directory base

4.6 Implementing Disk Quotas
Disk space can be restricted by implementing disk quotas, which alert a system administrator before a user consumes too much disk space or a partition becomes full. Disk quotas can be configured for individual users as well as user groups. This kind of flexibility makes it possible to give each user a small quota to handle "personal" files (such as email and reports), while allowing the projects they work on to have more sizable quotas (assuming the projects are given their own groups). In addition, quotas can be set not just to control the number of disk blocks consumed but to control the number of inodes (data structures that contain information about files in UNIX file systems). Because inodes are used to contain file-related information, this allows control over the number of files that can be created. (www.yolinux.com)
4.6.1 Configuring Disk Quotas To implement disk quotas, use the following steps:
1. Enable quotas per file system by modifying the /etc/fstab file.
2. Remount the file system(s).
3. Create the quota database files and generate the disk usage table.
4. Assign quota policies.
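The four steps above can be sketched as a dry-run script; `run` only prints each command (remove the echo to execute as root after adding the usrquota option to the root entry in /etc/fstab). The quotacheck and quotaon commands are the usual tools for steps 2–3 but are assumptions here, as the report's figures show only remount, setquota and repquota.

```shell
# Dry-run sketch of the quota steps in 4.6.1.
run() { echo "+ $*"; }

run mount -o remount /                # remount so the usrquota option takes effect
run quotacheck -cum /                 # create aquota.user and build the usage table (assumed)
run quotaon /                         # switch quota enforcement on (assumed)
run setquota -u sofiq 800 1024 0 0 /  # soft limit 800kB, hard limit 1024kB for user sofiq
run repquota /                        # report all users' quota status
```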


Figure 4-10: Finding users' quota status with the command (repquota /) and setting a quota for user sofiq, with a soft limit of 800kB and a hard limit of 1024kB, using the command (setquota -u sofiq 800 1024 0 0 /)


Figure 4-11: Mounting the root (/) directory with quota enabled

Chapter 5: Server setup and network security configuration 5.1 IP Setup in the Server: To set the IP address on the local server we need to follow these steps: Step-1: We give the command setup to open the IP setup window and select network configuration from the list

Step-2: Then we will select device parameter from the list


Step-3: After selecting device parameters, the LAN card list will be shown (here there is only one LAN card) and we select the LAN card (here eth0)


Step-4: By default there is no static IP on the LAN card, so we deselect DHCP by pressing the space bar, and the static IP option becomes active

Step-5: After activating the static IP option we add the desired IP, subnet mask and default gateway IP Step-6: We need to restart the network service to use the IP we have set

5.2 Package installation from the Local Machine: 1. Copy the Server folder from the DVD: First we have to make a directory (mkdir -p /var/ftp/pub/Server). Then we enter the Server directory, copy the 2206 packages from the DVD and keep them in the Server directory.


2. For package installation from the local machine we have to change the repository file that exists in the /etc/yum.repos.d directory. The file is changed to look like the following picture.
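A minimal repository file of the kind described above might look like the following sketch. The file name and the file:// baseurl are assumptions based on the directory created in step 1:

```
# /etc/yum.repos.d/local.repo (hypothetical name)
[Server]
name=Local Server packages
baseurl=file:///var/ftp/pub/Server
enabled=1
gpgcheck=0
```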


5.3 DHCP Server Configuration:

Dynamic Host Configuration Protocol (DHCP) automatically assigns IP addresses and other network configuration information (subnet mask, broadcast address, etc.) to computers on a network. A client configured for DHCP will send out a broadcast request to the DHCP server requesting an address. The DHCP server will then issue a "lease" and assign it to that client. The time period of a valid lease can be specified on the server. DHCP reduces the amount of time required to configure clients and allows one to move a computer to various networks and be configured with the appropriate IP address, gateway and subnet mask. For ISPs it conserves the limited number of IP addresses they may use. DHCP servers may assign a "static" IP address to specified hardware. Microsoft NetBIOS information is often included in the network information sent by the DHCP server.
5.3.1 DHCP assignment:
1. Lease Request: The client broadcasts a request to the DHCP server with a source address of 0.0.0.0 and a destination address of 255.255.255.255. The request includes the MAC address, which is used to direct the reply.
2. IP Lease Offer: The DHCP server replies with an IP address, subnet mask, network gateway, name of the domain, name servers, duration of the lease and the IP address of the DHCP server.
3. Lease Selection: The client receives the offer and broadcasts its acceptance of the given offer to all DHCP servers, so that the other DHCP servers can withdraw their offers.


4. Lease Acknowledgement: The DHCP server then sends an ACK to the client. The client is then configured to use TCP/IP.
5. Lease Renewal: When half of the lease time has expired, the client issues a new request to the DHCP server


5.3.2 DHCP package installation: We apply the command (yum install dhcp* -y) to install all DHCP-related packages on the server.


• Then the sample file dhcpd.conf.sample is copied to the /etc directory under the name dhcpd.conf, because we have to make some changes in this file to meet our own DHCP requirements. Then we open the file (vi /etc/dhcpd.conf)

In the following file we set all options such as the default gateway (router) and NIS domain. We also define the IP range here. Here we have set the IP range from 192.168.0.128 to 192.168.0.254, so every machine connected to this server will get an IP within this range. We have also bound a MAC address to a fixed IP, so that the computer with that MAC address will always get that IP
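The options described above might be expressed in dhcpd.conf along these lines. This is a hedged sketch: only the 192.168.0.128–192.168.0.254 range comes from the report; the router address, MAC address, host name and fixed IP are placeholders.

```
# Hypothetical dhcpd.conf fragment
subnet 192.168.0.0 netmask 255.255.255.0 {
    option routers        192.168.0.1;      # default gateway (placeholder)
    option subnet-mask    255.255.255.0;
    range 192.168.0.128 192.168.0.254;      # addresses handed out to clients
}

# bind a fixed IP to one machine's MAC address (placeholder values)
host fixedpc {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.0.100;
}
```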


• We have to restart the dhcpd service for our own DHCP configuration to take effect


5.4 Proxy Server In an enterprise that uses the Internet, a proxy server is a server that acts as an intermediary between a workstation user and the Internet so that the enterprise can ensure security, administrative control, and caching service. A proxy server is associated with or part of a gateway server that separates the enterprise network from the outside network and a firewall server that protects the enterprise network from outside intrusion. A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user. To the user, the proxy server is invisible; all Internet requests and returned responses appear to come directly from the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.) An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy's cache, which


will improve user response time. In fact, there are special servers called cache servers. A proxy can also do logging. The functions of proxy, firewall, and caching can be in separate server programs or combined in a single package. Different server programs can be in different computers. For example, a proxy server may be in the same machine as a firewall server, or it may be on a separate server and forward requests through the firewall. (www.webopedia.com)
5.4.1 Squid web proxy server: Squid has three main purposes:
• Speeding delivery of content
• Tracking what sites people are visiting
• Limiting the sites people are visiting

• Squid supports caching of FTP, HTTP, and other data streams
• Squid will forward SSL requests directly to origin servers or to one other proxy
• Squid includes advanced features including access control lists, cache hierarchies and HTTP server acceleration

5.4.2 Steps for setting up the Squid web proxy server:
• In the first step we install the squid package (yum install squid* -y)
• In the second step we open the file squid.conf (vi /etc/squid/squid.conf)
• In the squid.conf file we set the port number, 3128, which is at line 919
• Then we have to set the cache memory, which is at line 1782
• Then the Squid cache file location and the access log are set at lines 1782 and 1944
• At line 627 we set our own rules: which network we want to allow, which user we want to block, and which sites or site lists we want to block
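The rules described above could be expressed with squid.conf directives along these lines. This is a hedged sketch: the ACL names are illustrative, and only port 3128, the 8MB cache, the addresses, facebook and the file /tmp/restricted.txt come from the report.

```
# Hypothetical squid.conf fragment
http_port 3128
cache_mem 8 MB

acl lan      src 192.168.0.0/24                  # the allowed network
acl badpc    src 192.168.0.68                    # the blocked user
acl badsite  dstdomain .facebook.com             # a single blocked site
acl badlist  dstdomain "/tmp/restricted.txt"     # file listing blocked sites

http_access deny badpc
http_access deny badsite
http_access deny badlist
http_access allow lan
http_access deny all
```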


Step: 1&2 Installing squid package and opening squid.conf file


Step-3: Checking the squid proxy port number which is 3128


Step-4: Setting the squid cache memory that is here 8MB


Step-5: Activating the cache location at line 1782


Step-5: Activation of the access log at line 1944, so that all sites visited by clients can be monitored.


Step-6: At line 627 we have allowed access for the network 192.168.0.0, blocked the user 192.168.0.68 from internet access, blocked the site facebook, and blocked the list of sites written in the file /tmp/restricted.txt.


5.5 Email The birth of electronic mail (email) occurred in the early 1960s. The mailbox was a file in a user's home directory that was readable only by that user. Primitive mail applications appended new text messages to the bottom of the file, so the user had to wade through the constantly growing file to find any particular message. This system was only capable of sending messages to users on the same system. The first network transfer of an electronic mail message file took place in 1971 when a computer engineer named Ray Tomlinson sent a test message between two machines via ARPANET — the precursor to the Internet. Communication via email soon became very popular, comprising 75 percent of ARPANET's traffic in less than two years. Today, email systems based on standardized network protocols have evolved into some of the most widely used services on the Internet. Red Hat Enterprise Linux offers many advanced applications to serve and access email. This chapter reviews modern email protocols in use today and some of the programs designed to send and receive email. (www.hypexr.org) 5.5.1 How email works Email is based around the use of electronic mailboxes. When an email is sent, the message is routed from server to server, all the way to the recipient's email server. More precisely, the message is sent to the mail server tasked with transporting emails (called the MTA, for Mail Transport Agent) to the recipient's MTA. On the Internet, MTAs communicate with one another using the protocol SMTP, and so are logically called SMTP servers (or sometimes outgoing mail servers). The recipient's MTA then delivers the email to the incoming mail server (called the MDA, for Mail Delivery Agent), which stores the email as it waits for the user to accept it. There are two main protocols used for retrieving email on an MDA:

• POP3 (Post Office Protocol), the older of the two, which is used for retrieving email and, in certain cases, leaving a copy of it on the server.
• IMAP (Internet Message Access Protocol), which is used for coordinating the status of emails (read, deleted, moved) across multiple email clients. With IMAP, a copy of every message is saved on the server, so that this synchronisation task can be completed.

For this reason, incoming mail servers are called POP servers or IMAP servers, depending on which protocol is used.


Figure 5-1: Email process To use a real-world analogy, MTAs act as the post office (the sorting area and mail carrier, which handle message transportation), while MDAs act as mailboxes, which store messages (as much as their volume will allow) until the recipients check the box. This means that it is not necessary for recipients to be connected in order for them to be sent email. To keep everyone from checking other users' emails, the MDA is protected by a user name called a login and by a password. Retrieving mail is done using a software program called an MUA (Mail User Agent). When the MUA is a program installed on the user's system, it is called an email client (such as Mozilla Thunderbird, Microsoft Outlook, Eudora Mail, Incredimail or Lotus Notes). When it is a web interface used for interacting with the incoming mail server, it is called webmail.
Sendmail's core purpose, like other MTAs, is to safely transfer email among hosts, usually using the SMTP protocol. However, Sendmail is highly configurable, allowing control over almost every aspect of how email is handled, including the protocol used. Many system administrators elect to use Sendmail as their MTA due to its power and scalability. 5.5.2 Purpose and Limitations Sendmail can spool mail to each user's directory and deliver outbound mail for users. However, most users actually require much more than simple email delivery. They usually want to interact with their email using an MUA that uses POP or IMAP to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with


Sendmail, but they actually exist for different reasons and can operate separately from one another. 5.5.3 Steps of sendmail server setup: Step-1: We need the sendmail package, which has to be installed for the sendmail server setup (yum install sendmail* -y). After that we open the file sendmail.mc


Step-2:In the sendmail.mc file we have to set the local domain name. Here we have set daffodil.com as the local domain at line number 155


Step-3: In this step we open the access file with the command (vi /etc/mail/access). Here we define which networks will get access permission and which will not. In this file the 192.168.0.0 network will get access permission and the 192.169.1.0 network will not
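The access entries described in Step-3 take roughly this form (addresses from the report; the keywords are standard sendmail access-database actions):

```
# /etc/mail/access
192.168.0       RELAY
192.169.1       REJECT
```

After editing, the access database is typically rebuilt (for example with makemap) before restarting sendmail.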

Step-4: We have to restart the sendmail service and turn it on with the two following commands

5.5.4 Configuring Mail Delivery Agent (MDA):


Step-1: For configuring the mail delivery agent we need to install the dovecot package, so we give the command (yum install dovecot* -y).

Step-2: Next we open file dovecot.conf and configure the protocols as pop3 in line number 20


5.5.5 Aliasing: For aliasing we have to open the aliases file (vi /etc/aliases). Then at the last line we make a list of the people on the network who will receive mail addressed to root.
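An aliases entry of the kind described above might look like this; the user names are taken from elsewhere in the report and are illustrative:

```
# last line of /etc/aliases: deliver root's mail to these users
root:   khalid, ruhul
```

The newaliases command is normally run after editing the file for the change to take effect.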


Step-3: We have restarted the service dovecot for MDA service


5.6 Postfix Postfix is a Sendmail-compatible MTA that is designed to be secure, fast, and easy to configure. To improve security, Postfix uses a modular design, where small processes with limited privileges are launched by a master daemon. The smaller, less privileged processes perform very specific tasks related to the various stages of mail delivery and run in a change-rooted (chroot) environment to limit the effects of attacks. Configuring Postfix to accept network connections from hosts other than the local computer takes only a few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a variety of configuration options, as well as third-party add-ons that make it a very versatile and full-featured MTA. The configuration files for Postfix are human readable and support upwards of 250 directives. Unlike Sendmail, no macro processing is required for changes to take effect, and the majority of the most commonly used options are described in the heavily commented files. (www.postfix.org)


5.6.1 The Default Postfix Installation The Postfix executable is /usr/sbin/postfix. This daemon launches all related processes needed to handle mail delivery. Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files:

• access — Used for access control, this file specifies which hosts are allowed to connect to Postfix.

• aliases — A configurable list required by the mail protocol.

• main.cf — The global Postfix configuration file. The majority of configuration options are specified in this file.

• master.cf — Specifies how Postfix interacts with various processes to accomplish mail delivery.

• transport — Maps email addresses to relay hosts.

5.6.2 Basic Postfix Configuration By default, Postfix does not accept network connections from any host other than the local host. Perform the following steps as root to enable mail delivery for other hosts on the network:

• Edit the /etc/postfix/main.cf file with a text editor, such as vi.

• Uncomment the mydomain line by removing the hash mark (#), and replace domain.tld with the domain the mail server is servicing, such as example.com.

• Uncomment the myorigin = $mydomain line.

• Uncomment the myhostname line, and replace host.domain.tld with the hostname for the machine.

• Uncomment the mydestination = $myhostname, localhost.$mydomain line.

• Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server.

• Uncomment the inet_interfaces = all line.

• Restart the postfix service.

Once these steps are complete, the host accepts outside emails for delivery.
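After the steps above, the uncommented lines in /etc/postfix/main.cf might read as follows; the domain, hostname and the 168.100.189.0/28 network are the example values from the steps, not this report's actual settings:

```
# Hypothetical /etc/postfix/main.cf fragment after uncommenting
mydomain = example.com
myorigin = $mydomain
myhostname = host.example.com
mydestination = $myhostname, localhost.$mydomain
mynetworks = 168.100.189.0/28, 127.0.0.0/8
inet_interfaces = all
```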



• Setting the domain name as daffodil.com


• List of networks showing which network will get access permission for mail and which will not


5.6.3 Output of the mail server:
• First of all we use the mutt command to compose a mail. Then we write the destination address, subject and body of the message. After sending the mail we get a confirmation

• If we log in as the user neo we will find the unread message there


5.7 Samba: Samba is a powerful and versatile server application. Even seasoned system administrators must know its abilities and limitations before attempting installation and configuration.

5.7.1 What Samba can do:

• Serve directory trees and printers to Linux, UNIX, and Windows clients
• Assist in network browsing (with or without NetBIOS)
• Authenticate Windows domain logins
• Provide Windows Internet Name Service (WINS) name server resolution
• Act as a Windows NT®-style Primary Domain Controller (PDC)
• Act as a Backup Domain Controller (BDC) for a Samba-based PDC
• Act as an Active Directory domain member server
• Join a Windows NT/2000/2003 PDC

(www.samba.org)

5.7.2 Samba configuration:

• At first we have to install the samba package to start the Samba service in Linux.
• Then we open the file smb.conf (vi /etc/samba/smb.conf) for our desired configuration.
• At line 74 we set the same workgroup (DAFFODIL) as our Windows client.
• At line 80 we set the Windows client networks, so that clients belonging to those networks get access.
• We also have to set the folder or directory that the Windows client PCs will be able to access; this happens at line 291. The valid Windows client users are also set here.


Step-1: First we install the package samba (yum install samba* -y). Then we open the file smb.conf (vi /etc/samba/smb.conf).

Step-2: We set the same workgroup (DAFFODIL) at line 77 and set the networks allowed Windows client access at line 80 (networks 192.168.0 and 192.168.1 are given access).

Step-3: We create the directory to which the Windows clients will get access permission, configured at line 289. Some valid Windows client users are also set here so that only they can access it.

Step-4: We restart the samba service to activate the new Samba configuration.

We have added some users, such as sourov, sofiq, and imran, so that they can get access remotely from a Windows computer.
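The settings described in these steps correspond to smb.conf entries roughly like the following sketch. The share name and path are assumptions chosen for illustration; the workgroup, networks, and user names follow the report's example values:

```
[global]
        workgroup = DAFFODIL
        hosts allow = 192.168.0. 192.168.1. 127.

[shared]
        comment = Share for Windows clients
        path = /samba/shared
        valid users = sourov sofiq imran
        writable = yes
```

Restarting the samba service, as in Step-4, then activates this configuration.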


5.7.3 Samba Client:

At the Windows client end we set an IP address in the network that we configured on the Samba server, and the workgroup is set to the same value, DAFFODIL. The client computer is then able to access the Samba shared folder.

• At the first stage we open the Run box and type the IP address of the server, which is 192.168.0.35.
• Then we give a valid Samba username and password for accessing the shared folder.
• After entering a valid username and password, the Samba client gains access and sees the shared directory.
• If the client user clicks on that directory, they get the files created at the Samba server.


5.8 Network File System (NFS):

A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.

5.8.1 How It Works:

Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and is widely supported. NFS version 3 (NFSv3) has more features, including 64-bit file handles, safe asynchronous writes, and more robust error handling. NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires portmapper, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux supports NFSv2, NFSv3, and NFSv4 clients, and when mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv3 by default, if the server supports it.

All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server. When using NFSv2 or NFSv3 with UDP, the stateless UDP connection under normal conditions has less protocol overhead than TCP, which can translate into better performance on very clean, non-congested networks.

The NFS server sends the client a file handle after the client is authorized to access the shared volume. This file handle is an opaque object stored on the server's side and is passed along with RPC requests from the client. The NFS server can be restarted without affecting the clients, and the file handle (cookie) remains intact. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. For this reason, TCP is the preferred protocol when connecting to an NFS server.

(www.redhat.com)


5.8.2 Steps for Configuring NFS:

• In the first step we open the file exports (vi /etc/exports). There we set the desired directory for the desired networks. We can give read-only (ro) permission or read-write (rw) permission, with synchronization (sync), to the other Linux computers on the network. The star (*) sign indicates that the directory is shared with all users of the network.
• We have to restart the portmap service to start the configured NFS service. The service will come up automatically at boot if we give the command

#chkconfig portmap on

If we want to see which directories are shared, we give the command

#showmount -e
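An /etc/exports line matching this description might look like the following; the directory and network address here are illustrative, not the report's actual values:

```
/shared    192.168.0.0/24(rw,sync)    *(ro,sync)
```

A client on the permitted network could then mount the export with mount -t nfs server:/shared /mnt, and showmount -e on the server lists all exported directories.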


5.9 Firewalls

Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable. Red Hat Enterprise Linux includes several tools to assist administrators and security engineers with network-level access control issues.

Firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be stand-alone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. Vendors such as Checkpoint, McAfee, and Symantec have also developed proprietary software firewall solutions for home and business markets. Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another.

Table 5.1: Advantages and disadvantages of NAT, packet filter, and proxy

Method: NAT

Description: Network Address Translation (NAT) places private IP subnetworks behind one or a small pool of public IP addresses, masquerading all requests to one source rather than several. The Linux kernel has built-in NAT functionality through the Netfilter kernel subsystem.

Advantages:
• Can be configured transparently to machines on a LAN
• Protection of many machines and services behind one or more external IP addresses simplifies administration duties
• Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway

Disadvantages:
• Cannot prevent malicious activity once users connect to a service outside of the firewall

Method: Packet Filter

Description: A packet filtering firewall reads each data packet that passes through a LAN. It can read and process packets by header information, and filters the packet based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the Netfilter kernel subsystem.

Advantages:
• Customizable through the iptables front-end utility
• Does not require any customization on the client side, as all network activity is filtered at the router level rather than the application level
• Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host

Disadvantages:
• Cannot filter packets for content like proxy firewalls
• Processes packets at the protocol layer, but cannot filter packets at an application layer
• Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local DMZ networks

Method: Proxy

Description: Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines.

Advantages:
• Gives administrators control over what applications and protocols function outside of the LAN
• Some proxy servers can cache frequently-accessed data locally rather than having to use the Internet connection to request it, which helps to reduce bandwidth consumption
• Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network

Disadvantages:
• Proxies are often application-specific (HTTP, Telnet, etc.) or protocol-restricted (most work with TCP-connected services only)
• Application services cannot run behind a proxy, so your application servers must use a separate form of network security
• Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than directly from a client to a remote service
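The masquerading idea in the NAT row can be illustrated with a toy translation table. This is a pure-Python sketch of the concept, not how the kernel's Netfilter actually implements NAT; the public IP is a documentation placeholder:

```python
# Toy NAT: rewrite private source addresses to one shared public IP,
# remembering each mapping so replies can be translated back.
PUBLIC_IP = "203.0.113.1"  # placeholder public address (assumption)

class ToyNat:
    def __init__(self):
        self.table = {}        # public port -> (private ip, private port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite a LAN source to the shared public IP, remembering the mapping."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (src_ip, src_port)
        return PUBLIC_IP, public_port

    def inbound(self, dst_port):
        """Translate a reply back to the LAN host that originated it."""
        return self.table[dst_port]

nat = ToyNat()
public_src = nat.outbound("192.168.0.10", 5555)
print(public_src)                  # ('203.0.113.1', 40000)
print(nat.inbound(public_src[1]))  # ('192.168.0.10', 5555)
```

All outbound traffic appears to come from the one public address, which is exactly why NAT "cannot prevent malicious activity once users connect to a service outside of the firewall": the connection, once mapped, is simply relayed.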

(www.redhat.com)

5.9.1 IPTables

Included with Red Hat Enterprise Linux are advanced tools for network packet filtering — the process of controlling network packets as they enter, move through, and exit the network stack within the kernel. Kernel versions prior to 2.4 relied on ipchains for packet filtering and used lists of rules applied to packets at each step of the filtering process. The 2.4 kernel introduced iptables (also called netfilter), which is similar to ipchains but greatly expands the scope and control available for filtering network packets. This chapter focuses on packet filtering basics, defines the differences between ipchains and iptables, explains various options available with iptables commands, and explains how filtering rules can be preserved between system reboots.

5.9.2 Packet Filtering

The Linux kernel uses the Netfilter facility to filter packets, allowing some of them to be received by or pass through the system while stopping others. This facility is built in to the Linux kernel, and has three built-in tables or rules lists, as follows:

• filter — The default table for handling network packets.
• nat — Used to alter packets that create a new connection and used for Network Address Translation (NAT).
• mangle — Used for specific types of packet alteration.

Each table has a group of built-in chains, which correspond to the actions performed on the packet by netfilter. The built-in chains for the filter table are as follows:

• INPUT — Applies to network packets that are targeted for the host.
• OUTPUT — Applies to locally-generated network packets.
• FORWARD — Applies to network packets routed through the host.

The built-in chains for the nat table are as follows:

• PREROUTING — Alters network packets when they arrive.
• OUTPUT — Alters locally-generated network packets before they are sent out.
• POSTROUTING — Alters network packets before they are sent out.

The built-in chains for the mangle table are as follows:

• INPUT — Alters network packets targeted for the host.
• OUTPUT — Alters locally-generated network packets before they are sent out.
• FORWARD — Alters network packets routed through the host.
• PREROUTING — Alters incoming network packets before they are routed.
• POSTROUTING — Alters network packets before they are sent out.

Every network packet received by or sent from a Linux system is subject to at least one table. However, a packet may be subjected to multiple rules within each table before emerging at the end of the chain. The structure and purpose of these rules may vary, but they usually seek to identify a packet coming from or going to a particular IP address, or set of addresses, when using a particular protocol and network service.

Regardless of their destination, when packets match a particular rule in one of the tables, a target or action is applied to them. If the rule specifies an ACCEPT target for a matching packet, the packet skips the rest of the rule checks and is allowed to continue to its destination. If a rule specifies a DROP target, that packet is refused access to the system and nothing is sent back to the host that sent the packet. If a rule specifies a QUEUE target, the packet is passed to user-space. If a rule specifies the optional REJECT target, the packet is dropped, but an error packet is sent to the packet's originator.

Every chain has a default policy to ACCEPT, DROP, REJECT, or QUEUE. If none of the rules in the chain apply to the packet, then the packet is dealt with in accordance with the default policy. The iptables command configures these tables, as well as sets up new tables if necessary.

5.9.3 Differences Between IPTables and IPChains

Both ipchains and iptables use chains of rules that operate within the Linux kernel to filter packets based on matches with specified rules or rule sets. However, iptables offers a more extensible way of filtering packets, giving the administrator greater control without building undue complexity into the system. You should be aware of the following significant differences between ipchains and iptables:

Using iptables, each filtered packet is processed using rules from only one chain rather than multiple chains. For example, a FORWARD packet coming into a system using ipchains would have to go through the INPUT, FORWARD, and OUTPUT chains to continue to its destination. However, iptables only sends packets to the INPUT chain if they are destined for the local system, and only sends them to the OUTPUT chain if the local system generated the packets. It is therefore important to place the rule designed to catch a particular packet within the chain that actually handles the packet.

The DENY target has been changed to DROP. In ipchains, packets that matched a rule in a chain could be directed to the DENY target. This target must be changed to DROP in iptables.

Order matters when placing options in a rule. In ipchains, the order of the rule options does not matter. The iptables command has a stricter syntax: it requires that the protocol (ICMP, TCP, or UDP) be specified before the source or destination ports.

Network interfaces must be associated with the correct chains in firewall rules. For example, incoming interfaces (-i option) can only be used in INPUT or FORWARD chains. Similarly, outgoing interfaces (-o option) can only be used in FORWARD or OUTPUT chains. In other words, INPUT chains and incoming interfaces work together; OUTPUT chains and outgoing interfaces work together. FORWARD chains work with both incoming and outgoing interfaces. OUTPUT chains are no longer used by incoming interfaces, and INPUT chains are not seen by packets moving through outgoing interfaces.

Table 5.2: Port list in the network:


Service     Port No
DNS         53, TCP/UDP
DHCP        67, UDP
NFS         2049, TCP
FTP         20, 21, TCP
SAMBA       139, TCP
WEB         80, TCP
DOVECOT     110, TCP
SQUID       3128, TCP
SSH         22, TCP
TELNET      23, TCP

• If we want to block all ports for a specific network, we write the command

# iptables -I INPUT -s 192.168.1.0/24 -p tcp -j REJECT

Here all TCP ports are blocked for the 192.168.1.0/24 network.

• If we want to block all ports in all networks but give permission to a specific network, we write the command

# iptables -I INPUT -s ! 192.168.1.0/24 -p tcp -j REJECT

Here the (!) sign indicates that all TCP ports are blocked in all networks except the 192.168.1.0/24 network.
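The first-match behaviour of these chains can be sketched as a toy model in Python. This is an illustration of the rule-matching idea only, not real netfilter code; the packet representation is invented for the example:

```python
# Toy model of iptables-style first-match evaluation: a chain is a
# list of (match predicate, target) pairs, checked in order.
def evaluate(chain, packet, default_policy="ACCEPT"):
    for match, target in chain:
        if match(packet):
            return target      # first matching rule decides the packet
    return default_policy      # no rule matched: fall back to chain policy

# Mirror the report's example: reject all TCP from 192.168.1.0/24.
input_chain = [
    (lambda p: p["src"].startswith("192.168.1.") and p["proto"] == "tcp",
     "REJECT"),
]

print(evaluate(input_chain, {"src": "192.168.1.7", "proto": "tcp"}))  # REJECT
print(evaluate(input_chain, {"src": "192.168.0.5", "proto": "tcp"}))  # ACCEPT
```

Inserting a rule with -I, as in the commands above, corresponds to putting it at the front of the list, which is why rule order matters in iptables.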

CONCLUSION:

Red Hat Enterprise Linux (RHEL) is a command-based operating system. It is open source software, which makes Red Hat Enterprise reliable and flexible to use. Its comprehensive paravirtualization and full virtualization capabilities enable multiple operating systems, Red Hat and third party, to run on the same system, and are included with every RHEL server subscription. In this report we have added users and groups, given permissions on a file, and also given special permission to a user to read, write, or execute a file. Using LVM we have extended and reduced logical volumes. By means of quota we have fixed the disk space of a user on the server. We have used the RAID service for data backup on the server. We have configured the DHCP server so that clients get IP addresses automatically from the server. We have configured a proxy server to give Internet connection to a network or user, or to block a site or network from Internet access; we have also shown how to stop the Internet connection at specific times. We have configured a mail server so that users of a specific network can send or receive mail through that server address. We have configured a Samba service so that Windows client computers can share files from the Linux server, and we have also configured the NFS service so that Linux computers can share specific files on their network. At last we have configured the Linux firewall; as a result the server can restrict specific ports on the network, the ports used by the different services of the network. Finally, we can say that a Red Hat Enterprise Linux based server is more secure than the other operating systems.

References:
1. www.wikipedia.org
2. www.redhat.com
3. www.cert.org/tech_tips/home_networks.html
4. www.yolinux.com
5. www.webopedia.com/proxy_server.html
6. www.hypexr.org/linux_mail_server.php
7. www.postfix.org
8. www.samba.org
9. www.brennan.id.au
10. www.ehow.com
11. www.cyberciti.biz
12. www.askdavetaylor.com/configuring_squid_as_a_linux_proxy_server.html
13. www.puschitz.com/SecuringLinux.shtml
14. www.theregister.co.uk

