NVMe on RHEL7
Published: 2019-06-06


Original article: https://www.dell.com/support/article/cn/zh/cnbsd1/sln312382/nvme-on-rhel7?lang=en

 

Posted on behalf of Lakshmi Narayanan Durairajan (Lakshmi_Narayanan_Du@dell.com)

What is NVMe?

NVM Express (NVMe), or Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives (SSDs) attached through the PCI Express (PCIe) bus. NVM is an acronym for non-volatile memory, as used in SSDs.

NVMe defines an optimized register interface, command set, and feature set for PCIe SSDs. It focuses on standardizing PCIe SSDs and improving their performance.
PCIe SSD devices designed to the NVMe specification are NVMe-based PCIe SSDs.
For more details, refer to the NVMe specification. The NVMe devices currently in use are NVMe 1.0c compliant.
In this blog we will look at RHEL 7 support for NVMe devices.
Currently Dell supports NVMe devices with the RHEL 7 out-of-box [vendor-based] driver.
The following is the list of topics we will cover:

  • NVMe - Features Supported
  • NVMe Device: Listing the device and its Capabilities
  • Checking MaxPayload
  • NVMe Driver: List the driver information
  • NVMe Device Node and Naming conventions
  • Formatting with xfs and mounting the device
  • Using the ledmon utility to manage backplane LEDs for NVMe devices

NVMe - Features Supported

The NVMe driver exposes the following features:

  • Basic IO operations
  • Hot Plug
  • Boot Support [UEFI and Legacy]

The following table lists the RHEL 7 [out-of-box] driver supported features for NVMe on 12G and 13G machines.

Generation   Basic IO   Hot Plug   UEFI Boot   Legacy Boot
13G          Yes        Yes        Yes         No
12G          Yes        Yes        No          No

Table 1: RHEL 7 Driver Support
NVMe Device: Listing the device and its Capabilities
1) List the RHEL 7 OS information
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux 
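As a quick cross-check, the installed release can also be read from /etc/redhat-release; the output shown here is illustrative for a RHEL 7.0 system matching the kernel above:
[root@localhost ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)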
2) Get the device details by using the lspci utility
a) We support Samsung-based NVMe drives. First get the PCI slot ID by using the following command:
[root@localhost ~]# lspci | grep -i Samsung
45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)
47:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)
b) The slot IDs are listed as shown below [Fig 1]. Here "45:00.0" and "47:00.0" are the slots to which the drives are connected.
Figure 1: lspci listing the slot IDs
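If the vendor string differs on your system, an alternative is to match on the PCI device class rather than the vendor name. The output below is illustrative; the [144d:a820] vendor/device ID corresponds to the alias shown later in the modinfo output:
[root@localhost ~]# lspci -nn | grep -i 'Non-Volatile memory controller'
45:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 171X [144d:a820] (rev 03)
47:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 171X [144d:a820] (rev 03)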
c) Use the slot ID with the following lspci options to get the device details, capabilities, and the corresponding driver:
[root@localhost ~]# lspci -s 45:00.0 -v
45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) (prog-if 02)
Subsystem: Dell Express Flash NVMe XS1715 SSD 800GB
Physical Slot: 25
Flags: bus master, fast devsel, latency 0, IRQ 76
Memory at d47fc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [c0] Power Management version 3
Capabilities: [c8] MSI: Enable- Count=1/32 Maskable+ 64bit+
Capabilities: [e0] MSI-X: Enable+ Count=129 Masked-
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [40] Vendor Specific Information: Len=24 <?>
Capabilities: [100] Advanced Error Reporting
Capabilities: [180] #19
Capabilities: [150] Vendor Specific Information: ID=0001 Rev=1 Len=02c <?>
Kernel driver in use: nvme
The output above and [Fig 2] below show the Samsung NVMe device and its details, including the name of the driver in use, 'nvme' in this case.
Figure 2: lspci listing NVMe device details
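The driver binding can also be confirmed through sysfs; the path below assumes PCI domain 0000 and the slot ID from Figure 1:
[root@localhost ~]# basename "$(readlink /sys/bus/pci/devices/0000:45:00.0/driver)"
nvme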

Checking MaxPayload

Check the MaxPayload value by executing the following commands. It should be set to 256 bytes [Fig. 3].
[root@localhost home]# lspci | grep -i Samsung
45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) 
[root@localhost home]# lspci -vvv -s 45:00.0
Figure 3: MaxPayload set to 256 bytes
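Since Figure 3 is a screenshot, the same value can be read directly from the text output by filtering for MaxPayload; look for "MaxPayload 256 bytes" in the DevCap and DevCtl lines (the slot ID here assumes the device from Figure 1):
[root@localhost ~]# lspci -vvv -s 45:00.0 | grep -i MaxPayload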

NVMe Driver: List the driver information

1) Use the modinfo command to list the driver details:
[root@localhost ~]# modinfo nvme
filename: /lib/modules/3.10.0-123.el7.x86_64/extra/nvme/nvme.ko
version: 0.8-dell1.17
license: GPL
author: Samsung Electronics Corporation
srcversion: AB81DD9D63DD5DADDED9253
alias: pci:v0000144Dd0000A820sv*sd*bc*sc*i*
depends: 
vermagic: 3.10.0-123.el7.x86_64 SMP mod_unload modversions
parm: nvme_major:int
parm: use_threaded_interrupts:int 
The output above and [Fig 4] below show the details of the NVMe driver nvme.ko.
Figure 4: modinfo listing driver information
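To confirm that the module is actually loaded, and not just present on disk, a quick check is the following; the module size and reference count shown are illustrative:
[root@localhost ~]# lsmod | grep -w nvme
nvme                   65536  2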

NVMe Device Node and Naming conventions

1) cat /proc/partitions displays the NVMe device nodes.
a) The following command lists the NVMe devices as nvme0n1 and nvme1n1:
[root@localhost ~]# cat /proc/partitions
major minor #blocks name 
259 0 781412184 nvme0n1
8 0 1952448512 sda
8 1 512000 sda1
8 2 1951935488 sda2
11 0 1048575 sr0
253 0 52428800 dm-0
253 1 16523264 dm-1
253 2 1882980352 dm-2
259 3 390711384 nvme1n1 
Partition the device using any partitioning tool (fdisk, parted).
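As a minimal sketch, the parted commands below split the first device into two equal GPT partitions, roughly matching the layout in the next listing; device names and sizes are illustrative:
[root@localhost ~]# parted -s /dev/nvme0n1 mklabel gpt
[root@localhost ~]# parted -s /dev/nvme0n1 mkpart primary 0% 50%
[root@localhost ~]# parted -s /dev/nvme0n1 mkpart primary 50% 100%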
b) Executing the same command again lists the NVMe devices along with their partitions:
[root@localhost ~]# cat /proc/partitions
major minor #blocks name 
259 0 781412184 nvme0n1
259 1 390705068 nvme0n1p1
259 2 390706008 nvme0n1p2
8 0 1952448512 sda
8 1 512000 sda1
8 2 1951935488 sda2
11 0 1048575 sr0
253 0 52428800 dm-0
253 1 16523264 dm-1
253 2 1882980352 dm-2
259 3 390711384 nvme1n1
259 4 195354668 nvme1n1p1
259 5 195354712 nvme1n1p2 

Naming conventions:

The following explains the naming convention of the device nodes [Fig 5].

The number immediately after the string "nvme" is the device (controller) number, and the number after "n" is the namespace number.

Example:

nvme0n1 – here the device number is 0

Partitions are appended after the device name with the prefix 'p'.

Example:

nvme0n1p1 – partition 1 of device 0

nvme0n1p2 – partition 2 of device 0

nvme1n1p1 – partition 1 of device 1

nvme1n1p2 – partition 2 of device 1

Figure 5: Device node naming conventions
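The device nodes can also be listed directly under /dev. Note that /dev/nvme0 and /dev/nvme1 are the controller character devices, while the "n1" block devices and their partitions are what get formatted and mounted; the listing below is illustrative:
[root@localhost ~]# ls /dev/nvme*
/dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme0n1p2
/dev/nvme1  /dev/nvme1n1  /dev/nvme1n1p1  /dev/nvme1n1p2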

Formatting with xfs and mounting the device

1) The following command formats partition 1 on NVMe device 1 with xfs:
[root@localhost ~]# mkfs.xfs /dev/nvme1n1p1
meta-data=/dev/nvme1n1p1         isize=256    agcount=4, agsize=12209667 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=48838667, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=23847, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
2) Mount the device to a mount point and verify the mount:
[root@localhost ~]# mount /dev/nvme1n1p1 /mnt/
[root@localhost ~]# mount | grep -i nvme
/dev/nvme1n1p1 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota) 
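To make the mount persistent across reboots, one approach is to reference the filesystem by UUID in /etc/fstab; the UUID below is illustrative and should be taken from the blkid output for your partition:
[root@localhost ~]# blkid /dev/nvme1n1p1
/dev/nvme1n1p1: UUID="2f6d3c8e-0000-1111-2222-333344445555" TYPE="xfs"
[root@localhost ~]# echo 'UUID=2f6d3c8e-0000-1111-2222-333344445555 /mnt xfs defaults 0 0' >> /etc/fstab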
Using the ledmon utility to manage backplane LEDs for NVMe devices
Ledmon and ledctl are two utilities for Linux that can be used to control LED status on drive backplanes. Normally drive backplane LEDs are controlled by a hardware RAID controller (PERC), but when using Software RAID on Linux (mdadm) for NVMe PCIE SSD, the ledmon daemon will monitor the status of the drive array and update the status of drive LEDs.
For additional background, refer to the ledmon documentation.
The following are the steps to install and use the ledmon/ledctl utilities.
1) Installing OpenIPMI and ledmon/ledctl utilities:
Execute the following commands to install OpenIPMI and ledmon
[root@localhost ~]# yum install OpenIPMI
[root@localhost ~]# yum install ledmon-0.79-3.el7.x86_64.rpm 
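To verify that both packages were installed, a quick check is the following; version strings will vary by system:
[root@localhost ~]# rpm -q OpenIPMI ledmon
OpenIPMI-2.0.19-11.el7.x86_64
ledmon-0.79-3.el7.x86_64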
2) Using the ledmon/ledctl utilities
If ledctl and ledmon are run concurrently, ledmon will eventually override the ledctl settings.
a) Start ipmi and check its status as shown in [Fig. 6] using the following command:
[root@localhost ~]# systemctl start ipmi
Figure 6: IPMI start and status
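Since Figure 6 is a screenshot, the status can also be checked from the command line; the output below is abbreviated and illustrative:
[root@localhost ~]# systemctl status ipmi
ipmi.service - IPMI Driver
   Loaded: loaded (/usr/lib/systemd/system/ipmi.service; disabled)
   Active: active (exited)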

b) Start ledmon:

[root@localhost ~]# ledmon

c) [Fig 7] shows the LED status after running ledmon while the device is in a working state.

Figure 7: LED status after ledmon run for working state of the device (green)
d) The following command blinks the drive LED [on the device node /dev/nvme0n1]:
[root@localhost ~]# ledctl locate=/dev/nvme0n1
The following command blinks both drive LEDs [on the device nodes /dev/nvme0n1 and /dev/nvme1n1]:
[root@localhost ~]# ledctl locate={ /dev/nvme0n1 /dev/nvme1n1 }
The following command turns off the locate LED:
[root@localhost ~]# ledctl locate_off=/dev/nvme0n1
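Two additional usage notes: both locate LEDs can be cleared in a single call, and ledmon's scan interval can be tuned with its --interval option (the 10-second value here is only an example):
[root@localhost ~]# ledctl locate_off={ /dev/nvme0n1 /dev/nvme1n1 }
[root@localhost ~]# ledmon --interval=10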

Reposted from: https://www.cnblogs.com/tcicy/p/10010359.html
