
Necessity is the mother of invention

Most administrators know that to save their time, their nerves and their users' data, RAID arrays are a must. However, RAID only pays off when it is backed by adequate monitoring.
On a full-fledged operating system this is rarely a problem: vendors ship drivers and utilities that expose the RAID controller's status.

But on an ESXi host without the extra layer of vCenter, getting information out of the controller is not entirely trivial.

Firstly, ESXi is not a full-fledged operating system, so we are limited in what software we can install on it. In theory it is possible to put additional libraries on the host and run the controller's CLI there, but this approach is poor: it means tampering with the system itself, with poorly predictable consequences, and it still does not solve the main task, which is providing an interface for automated monitoring.

To solve the problem we can use the CIM server that ships with ESXi. For that we need a driver with CIM support, a CIM provider for VMware, and software that can talk to this CIM server.
So, what we have is: Nagios, a Nagios client on the Windows systems, and ESXi hosts.

Installing the driver on ESXi is usually not a problem: upload the VIB package to the host and install it with esxcli or esxupdate, like any other VIB. Of course, before doing so you have to shut down all guest systems and put the host into maintenance mode.
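
A rough sketch of the sequence, assuming the VIB has already been copied to a datastore (the paths and file names below are placeholders, not the ones from the original setup):
# put the host into maintenance mode (guests must already be powered off or migrated)
vim-cmd hostsvc/maintenance_mode_enter
# ESXi 5.x
esxcli software vib install -v /vmfs/volumes/datastore1/driver.vib
# ESXi 4.1
esxupdate --bundle=/vmfs/volumes/datastore1/driver-offline-bundle.zip update
# reboot so the driver takes effect
reboot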

The most interesting part begins later: once the host boots, it turns out that our datastore is no longer attached to it. In ESXi 5 it is enough to re-add the datastore through the vSphere Client, keeping the existing signature (UUID); in ESXi 4.1 you will have to work a bit in the console.
First, list the available VMFS volumes:
esxcfg-volume -l
Then mount the volume we need; the -M flag mounts it persistently, so it survives reboots:
esxcfg-volume -M <VMFS-UUID|label>
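
For example, with the datastore UUID used later in this article (substitute the UUID or label reported by esxcfg-volume -l on your host):
esxcfg-volume -M 4ef49f1a-b2abe2ec-32c6-001b213a19e2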

The CIM provider is installed in the same way, as a VIB package (it has to match the RAID controller that is actually in the host). Once the host is back up, take it out of maintenance mode.

The Adaptec CIM provider constantly writes to /var/log/arcconf.log and the file keeps growing, so we simply delete it on a schedule with cron. Add the following line to /var/spool/cron/crontabs/root:
*/05 * * * * /vmfs/volumes/4ef49f1a-b2abe2ec-32c6-001b213a19e2/arcconf_del.sh
Since changes to the root crontab on ESXi do not survive a reboot, also add the following to /etc/rc.local, so that the entry is recreated and crond is restarted at boot:
/bin/kill $(cat /var/run/crond.pid)
/bin/echo '*/05 * * * * /vmfs/volumes/4ef49f1a-b2abe2ec-32c6-001b213a19e2/arcconf_del.sh' >> /var/spool/cron/crontabs/root
/bin/busybox crond
The script itself, /vmfs/volumes/4ef49f1a-b2abe2ec-32c6-001b213a19e2/arcconf_del.sh:
#!/bin/sh
rm -f /var/log/arcconf.log
Don't forget to make it executable: chmod +x /vmfs/volumes/4ef49f1a-b2abe2ec-32c6-001b213a19e2/arcconf_del.sh. After that the log file is removed every 5 minutes.
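
A quick sanity check from the ESXi shell, using only standard tools, might be to confirm the crontab entry and run the cleanup script once by hand:
cat /var/spool/cron/crontabs/root
/vmfs/volumes/4ef49f1a-b2abe2ec-32c6-001b213a19e2/arcconf_del.sh
ls -l /var/log/arcconf.log   # should now report "No such file or directory"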

That is all that has to be done over SSH; from this point on the controller data is available through the CIM interface.

Now the state of the RAID array can be polled from Nagios over CIM. Keep in mind that the data returned depends on the Adaptec arcconf / provider version (we had build 18856), so other versions may report slightly differently.
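
The article does not show the exact Nagios configuration for the ESXi side. One widely used option (not necessarily what the author used) is the check_esxi_hardware.py plugin, which queries the host's CIM interface; a minimal sketch of command and service definitions, with host name and credentials as placeholders:
define command{
    command_name    check_esxi_hardware
    command_line    $USER1$/check_esxi_hardware.py -H $HOSTADDRESS$ -U root -P 'secret'
}
define service{
    use                  generic-service
    host_name            esxi01
    service_description  RAID status
    check_command        check_esxi_hardware
}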

That takes care of the ESXi hosts; the Windows servers are next.

On the Windows servers we use NSClient++ and a ready-made plugin from Nagios Exchange: exchange.nagios.org/directory/Plugins/Hardware/Storage-Systems/RAID-Controllers/Windows-nrpe-3A-Check-Raid-adaptec-AAC/details

Since the ready-made script did not quite fit our setup, we wrote our own check in PowerShell:

# Paths to the MegaCLI report file and the CLI itself.
# NOTE: MegaCLI writes its report to MegaSAS.log in the current working
# directory, which here is assumed to be the Nagios scripts folder.
$outputPath = 'C:\Program Files (x86)\Nagios\scripts\MegaSAS.log'
$raidCLI    = 'C:\Program Files (x86)\CLI_Win_8.04.07\megacli64.exe'
$raidArgs   = '-LDInfo -Lall -aALL'
# The "State" line is expected this many lines below each "Virtual Drive:" line
$shiftString   = 5
$searchPattern = 'Virtual Drive:'

# Clear the previous report and ask MegaCLI for the logical drive info
Set-Content -Path $outputPath -Value ''
Start-Process -FilePath $raidCLI -ArgumentList $raidArgs -Wait

$cliOutput      = Get-Content -Path $outputPath
$logicalDevices = @(Select-String -Path $outputPath -Pattern $searchPattern)
$healthyCount   = 0

foreach ($logicalDevice in $logicalDevices) {
    # LineNumber is 1-based while the array is 0-based, so this offset lands
    # a few lines below the "Virtual Drive:" match, on the "State" line
    $stateLine = $cliOutput[$logicalDevice.LineNumber + $shiftString]
    if ($stateLine -match 'Optimal') {
        $healthyCount++
    }
}

# Nagios reads both the message and the exit code (0 = OK, 2 = CRITICAL, 3 = UNKNOWN)
if ($logicalDevices.Count -eq 0) {
    Write-Host 'UNKNOWN: no logical devices found in MegaCLI output'
    exit 3
} elseif ($healthyCount -lt $logicalDevices.Count) {
    Write-Host 'CRITICAL: logical device state failed'
    exit 2
} else {
    Write-Host 'OK: all logical devices work fine'
    exit 0
}
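
The article does not show how the script is wired into NSClient++. A minimal sketch for an NRPE-based setup, assuming the external scripts and NRPE server modules are enabled and the script is saved as, say, scripts\check_raid.ps1 (the alias name is also an assumption):
[/settings/external scripts/scripts]
check_raid = cmd /c echo scripts\check_raid.ps1; exit($lastexitcode) | powershell.exe -command -
On the Nagios side the corresponding service then simply calls check_nrpe!check_raid.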

The script above is written for LSI controllers (MegaCLI); for Adaptec the same approach works with the arcconf utility, only the command and the search patterns change.
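
For reference, a hedged sketch of the Adaptec side: the logical device status can be read with arcconf and matched on its "Status of logical device" lines (controller number 1 below is an assumption):
arcconf getconfig 1 ld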

As a result, we get automatic monitoring of the RAID arrays on all our hosts, both ESXi and Windows, without installing anything extra inside ESXi.

Source: https://habr.com/ru/post/186950/

