iSCSI stands for Internet Small Computer Systems Interface. It is an IP-based storage protocol that works on top of the Internet Protocol by carrying SCSI commands over an IP network. iSCSI transports block-level data between an iSCSI initiator (on the client machine) and an iSCSI target on a storage device (the server).
Before getting into the configuration of an iSCSI target and initiator, let's talk a little about iSCSI terminology, as familiarity with the definitions will light up the path to understanding the subject in depth.
iSCSI Terminology
iSCSI Target: you already know about the iSCSI target; it is the server that shares the block device and to which you log in. The disk is configured as an iSCSI target through the targetcli utility and becomes available to the clients.
iSCSI Initiator: the shared device is requested by the initiator, which resides on the client. The initiator itself is installed through the iscsi-initiator-utils package on the client machine.
IQN (WWN): Internet Qualified Name, or World Wide Name, a unique name by which the initiator's identity and privileges are recognized.
ACL: an Access Control List, which is based on the node's IQN.
LUN: a Logical Unit, which is the block device shared through the target.
TPG: Target Portal Group, the collection of IP addresses on which a specific iSCSI target listens.
Environment
Now let's get to work. We should have two CentOS 7 machines as below:
Server: kserver.lab
IP Address: 192.168.1.200
Client: client.kserver
IP Address: 192.168.1.201
Preparing an LVM Partition on the Target
Before starting the configuration, we should prepare a new disk to be shared. I intend to prepare the disk with LVM (Logical Volume Manager), because that is the best practice: on the target you will be able to extend or shrink the partitions easily. I have added a new disk, /dev/sdc, and I am going to make an LVM partition on it using the fdisk utility.
Let's approach it step by step :)
#1- Making sure that the disk is recognized by the system:
#cat /proc/partitions
As you can see, the new disk is recognized by the system.
#2- Use fdisk to create the partition:
#fdisk /dev/sdc
At this point, type m for help to see the options available for creating the partition. To create a new partition, press n; to mark it as Linux LVM, change its type to 8e with t; and write the changes to disk with w.
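For reference, the same fdisk dialogue can be fed non-interactively. This is only a sketch of the keystroke sequence, assuming a blank /dev/sdc as in this setup; it is destructive, so run it only against the disk you intend to repartition:

```shell
# n = new partition, p = primary, 1 = partition number,
# two blank answers accept the default first/last sectors,
# t + 8e sets the type to Linux LVM, w writes the table.
printf '%s\n' n p 1 '' '' t 8e w | fdisk /dev/sdc
```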
Target Configuration
We are going to configure the backend device for sharing. First, we need to install the targetcli package:
# yum install targetcli
targetcli is a command-line interface through which you can create block backend devices as well as Logical Units (LUNs) and Access Control Lists (ACLs). According to its man page, targetcli is a shell for viewing, editing, and saving the configuration of the kernel's target subsystem, also known as LIO. It enables the administrator to assign local storage resources backed by files, volumes, local SCSI devices, or ramdisk, and export them to remote systems via network fabrics such as iSCSI or FCoE. Once the installation is done, you can simply type targetcli to enter its interface, and this is what you will encounter.
targetcli is an intuitive utility, although that is not obvious at first. You can see in the picture that it starts with a prompt and provides a help command. If you type ls, my previous configuration appears; but since I intend to create a fresh configuration, let's clear the current one.
At the targetcli prompt, type the following command to clear the existing configuration:
/> clearconfig confirm=true
So we have a cleared configuration at the moment; let's start exploring the items in the figure above. The first item to address is backstores, which are the different kinds of local storage resources that the kernel target uses to "back" the SCSI devices it exports. The mappings to local storage resources that each backstore creates are called storage objects, and since we are going to share a disk partition, we should create a block object.
/>cd backstores
/backstores> block/ create dev=/dev/sdc name=sdc
We specify that we are going to create a block device named sdc on /dev/sdc; in this way our disk backend is prepared. The next step is to create an iSCSI target to share it.
/backstores> cd ../iscsi
/iscsi> create wwn=iqn.2018-02.lab.kserver:disk
The format for the WWN should be iqn.<year>-<month>.<reverse DNS>:<an optional name>
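As a quick sanity check, the general shape of that format can be verified with a small shell helper; this is only a rough pattern I am sketching here, not a full validator of the IQN specification:

```shell
# Rough shape of an IQN: iqn.<year>-<month>.<reverse DNS>[:<optional name>]
iqn_ok() {
  printf '%s' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._-]+)?$'
}

iqn_ok "iqn.2018-02.lab.kserver:disk" && echo valid     # matches the format
iqn_ok "kserver:disk"                 || echo invalid   # missing the iqn.<date> prefix
```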
After creating the iSCSI target, the result of ls would be:
/iscsi> cd iqn.2018-02.lab.kserver:disk/tpg1/luns
/iscsi/iqn.20...isk/tpg1/luns> create /backstores/block/sdc
Now we have created a LUN on the block device, and the next step is to create an access control list. In the picture you can see that the target is going to listen on port 3260 for all incoming connections; with an ACL we define which initiators are allowed to connect. The next step is setting up the ACL, but first we should set up the initiator so we can get its IQN for the ACL. Let's continue on the second machine and run the following command:
[root@client ~]# yum install iscsi-initiator-utils
This will install a set of utilities for connecting to the target. Now let's get the IQN of the initiator by running the following command:
[root@client ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:26bdc0c78251
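If you want to capture just the IQN value for reuse (for example, to paste into the ACL command on the target), a one-line sed does the job. The sketch below works on a temporary copy of the file so it does not depend on a real /etc/iscsi being present:

```shell
# Hypothetical copy of the initiatorname file, with the value from this setup.
printf 'InitiatorName=iqn.1994-05.com.redhat:26bdc0c78251\n' > /tmp/initiatorname.iscsi

# Strip the "InitiatorName=" prefix to get the bare IQN.
iqn=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.iscsi)
echo "$iqn"
```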
Now that we have the IQN of the initiator, let's get back to the target and create an ACL for it:
/iscsi/iqn.20...isk/tpg1/luns> cd ../acls
/iscsi/iqn.20...isk/tpg1/acls> create wwn=iqn.1994-05.com.redhat:26bdc0c78251
Now we are done. Let's run cd / to get to the root of our configuration and then run ls:
Wonderful, isn't it?
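As a recap, the whole target-side configuration above can also be issued from a normal shell using targetcli's one-shot command mode; here is a sketch using the same names as above. The last two lines are my addition: they persist the configuration and enable the target service so the setup is restored after a reboot.

```shell
# Target-side recap (run as root). Same backstore name, IQNs, and ACL as above.
targetcli /backstores/block create name=sdc dev=/dev/sdc
targetcli /iscsi create wwn=iqn.2018-02.lab.kserver:disk
targetcli /iscsi/iqn.2018-02.lab.kserver:disk/tpg1/luns create /backstores/block/sdc
targetcli /iscsi/iqn.2018-02.lab.kserver:disk/tpg1/acls create wwn=iqn.1994-05.com.redhat:26bdc0c78251

# Save the configuration and restore it automatically at boot.
targetcli saveconfig
systemctl enable target
```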
Now let's get back to the initiator and do a connectivity test. But first we need to add the iscsi-target service to the firewall; just run the following commands on the target:
# firewall-cmd --add-service=iscsi-target --permanent
# firewall-cmd --reload
iscsiadm (the open-iscsi administration utility) is part of the iscsi-initiator-utils package. It has excellent options for connecting to an iSCSI target.
# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.200 --discover
This command discovers the target's shared device; if everything has been set up correctly, it returns the following message as output:
192.168.1.200:3260,1 iqn.2018-02.lab.kserver:disk
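Each discovery line has the form `<ip>:<port>,<tpg tag> <target IQN>`. The two interesting pieces can be pulled apart with plain parameter expansion; this is just an illustrative sketch over the line shown above:

```shell
# Sample discovery line from the output above.
line='192.168.1.200:3260,1 iqn.2018-02.lab.kserver:disk'

portal=${line%%,*}   # everything before the comma: the portal (ip:port)
target=${line#* }    # everything after the first space: the target IQN
echo "portal=$portal target=$target"
```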
Now that the target has been discovered, we can log in to it:
# iscsiadm --mode node --targetname iqn.2018-02.lab.kserver:disk --portal 192.168.1.200:3260 --login
Mounting and Using the Shared Device
Now we are going to configure the initiator to mount and use the shared block device. To do so, we must make a filesystem on top of an LVM volume. Let's first see what we have! Run the following command to see what SCSI devices are available:
#lsscsi
[1:0:0:0] cd/dvd  VBOX    CD-ROM        1.0  /dev/sr0
[2:0:0:0] disk    ATA     VBOX HARDDISK 1.0  /dev/sda
[3:0:0:0] disk    LIO-ORG sdc           4.0  /dev/sdb
The output shows that /dev/sdb, with the backstore name sdc, is currently available as a SCSI device. The next step is to create a physical volume; let's do it:
#pvcreate /dev/sdb
Now we must create a volume group to which our physical volume will be added. We name the volume group vgsan:
#vgcreate vgsan /dev/sdb
After creating the volume group comes the logical volume; we name our logical volume lvsan:
#lvcreate -l 100%FREE -n lvsan vgsan
Now all we have to do is make a filesystem, create a mount point, and add an entry to /etc/fstab:
#mkfs.ext4 /dev/vgsan/lvsan
#mkdir /san
The /san directory is our mount point, and to make mounting of the new partition persistent, we shall add an entry to /etc/fstab:
#nano /etc/fstab
Add the following line:
/dev/vgsan/lvsan /san ext4 _netdev 0 0
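The six fstab fields are device, mount point, filesystem type, mount options, dump flag, and fsck order. The _netdev option in the fourth field is what delays the mount until the network (and thus the iSCSI session) is up; without it, boot can hang trying to mount a device that does not exist yet. A small sketch checking that the entry carries it:

```shell
# The fstab entry from above, as a string for inspection.
entry='/dev/vgsan/lvsan /san ext4 _netdev 0 0'

# Field 4 holds the mount options.
opts=$(echo "$entry" | awk '{print $4}')
case ",$opts," in
  *,_netdev,*) echo "network-dependent mount" ;;
  *)           echo "WARNING: missing _netdev" ;;
esac
```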
#mount -a
#reboot
And if your configuration on the target survives the reboot, you will have a fully operational iSCSI device shared, mounted, and ready to be used.