
Migrating from UFS to ZFS: dangerous operations on the ROOT partition

Today I decided to look into an interesting topic: the migration to ZFS.
To set the stage, here is what we have for the experiment: a SunFire T2000 server with 4 SAS disks.



View available drives:
  root@T2000 # format
 Searching for disks...done

 AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> main
           /pci@780/pci@0/pci@9/scsi@0/sd@0,0
        1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
           /pci@780/pci@0/pci@9/scsi@0/sd@1,0
        2. c0t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> Filename
           /pci@780/pci@0/pci@9/scsi@0/sd@2,0
        3. c0t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
           /pci@780/pci@0/pci@9/scsi@0/sd@3,0

The system is installed on disk 0:

  root@T2000 # uname -a
 SunOS T2000 5.10 Generic_142909-17 sun4v sparc SUNW,Sun-Fire-T200
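For the record, the current root file system is UFS on c0t0d0s0; if you want to double-check this on your own machine, the file system type behind the root mount point can be confirmed with something like:

  root@T2000 # df -n /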


The system is patched to current Oracle standards using the EIS-DVD.
The ultimate goal is a working system on ZFS, with the root pool mirrored by ZFS itself.
We prepare the hard disks that will receive the root file system, in our case disks 2 and 3, using the format utility:

  format> disk

 AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> main
           /pci@780/pci@0/pci@9/scsi@0/sd@0,0
        1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
           /pci@780/pci@0/pci@9/scsi@0/sd@1,0
        2. c0t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> Filename
           /pci@780/pci@0/pci@9/scsi@0/sd@2,0
        3. c0t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
           /pci@780/pci@0/pci@9/scsi@0/sd@3,0
 Specify disk (enter its number) [0]: 2
 selecting c0t2d0: filename
 [disk formatted]
 format> p
 partition> 0
 Part      Tag    Flag     Cylinders        Size            Blocks
   0 unassigned    wm       0               0         (0/0/0)          0
 Enter partition id tag [unassigned]: root
 Enter partition permission flags [wm]: 
 Enter new starting cyl [0]: 
 Enter partition size [28665792b, 2817c, 2816e, 13996.97mb, 13.67gb]: 7850c
 partition> p 


Let me explain why the size is 7850 cylinders: it leaves enough room to copy over, without any trouble, all the data from the existing root file system and the slices related to it.
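A quick back-of-the-envelope check, using the SUN72G geometry reported by format above (24 heads, 424 sectors per track, 512-byte sectors):

  7850 cyl x 24 heads x 424 sectors/track = 79,881,600 blocks
  79,881,600 blocks x 512 bytes           ≈ 38 GiB

which matches the pool size that zpool list reports further down.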

  partition> label
 Ready to label disk, continue?  y
 partition> name
 Enter table name (remember quotes): ZFS
 partition> q
 format> save
 Saving new disk and partition definitions
 Enter file name ["./format.dat"]: 


We do exactly the same operation for disk number 3 (either by hand, or with the shortcut sketched below).
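A possible shortcut, if both disks are identical SUN72G drives and the existing label on c0t3d0 may be overwritten, is to copy the slice table with the standard prtvtoc/fmthard pair instead of repeating the interactive format session:

  root@T2000 # prtvtoc /dev/rdsk/c0t2d0s2 | fmthard -s - /dev/rdsk/c0t3d0s2

With slice 0 laid out identically on both disks, we create our mirrored pool from those slices: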

  root@T2000 # zpool create -f mainpool mirror c0t2d0s0 c0t3d0s0
 root@T2000 # zpool list
 NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
 mainpool    38G  1.69G  36.3G     4%  ONLINE  -
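Before building on top of the pool it does not hurt to confirm that the mirror is healthy; zpool status prints the vdev layout and the state of each disk (output omitted here):

  root@T2000 # zpool status mainpool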


Next we create a new boot environment (BE). There is a dedicated command for this, which we run with the options we need:

  root@T2000 # lucreate -c ufsBE -n zfsBE -p mainpool
 Analyzing system configuration.
 Comparing source boot environment <ufsBE> file systems with the file 
 system(s) you specified for the new boot environment.  Determining which 
 file systems should not be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device </dev/dsk/c0t2d0s0> is not a root device for any boot environment;  cannot get BE ID.
 Creating configuration for boot environment <zfsBE>.
 Source boot environment is <ufsBE>.
 Creating boot environment <zfsBE>.
 Creating file systems on boot environment <zfsBE>.
 Creating <zfs> file system for </> in zone <global> on <mainpool/ROOT/zfsBE>.
 Populating file systems on boot environment <zfsBE>.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point </>.
 Copying.
 Creating shared file system mount points.
 Creating databases for boot environment <zfsBE>.
 Creating compare database for file system </var>.
 Creating compare database for file system </>.
 Updating compare databases on boot environment <zfsBE>.
 Making boot environment <zfsBE> bootable.
 Creating boot_archive for /.alt.tmp.b-tDb.mnt
 updating /.alt.tmp.b-tDb.mnt/platform/sun4v/boot_archive
 Population of boot environment <zfsBE> successful.
 Creation of boot environment <zfsBE> successful. 


Let's check what we have ended up with:

  root@T2000 # lustatus
 Boot Environment           Is       Active Active    Can    Copy
 Name                       Complete Now    On Reboot Delete Status
 -------------------------- -------- ------ --------- ------ ----------
 ufsBE                      yes      yes    yes       no     -
 zfsBE                      yes      no     no        yes    -
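Besides lustatus, we can also look at the ZFS side and list the datasets that lucreate created under the pool (the names follow the mainpool/ROOT/zfsBE scheme visible in the lucreate output above):

  root@T2000 # zfs list -r mainpool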


Having observed that everything went well, we can activate the new zfsBE boot environment.

  root@T2000 # luactivate zfsBE
 A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.


 **********************************************************************

 The target boot environment has been activated.  It will be used when you 
 reboot.  NOTE: You MUST NOT USE the reboot, halt, or uadmin commands.  You 
 MUST USE either the init or the shutdown command when you reboot.  If you 
 do not use either init or shutdown, the system will not boot using the 
 target BE.

 **********************************************************************

 In case of a failure while booting to the target BE, the following process 
 needs to be followed to fallback to the currently working boot environment:

 1. Enter the PROM monitor (ok prompt).

 2. Change the boot device back to the original boot environment by typing:

      setenv boot-device disk:a

 3. Boot to original boot environment by typing:

      boot

 **********************************************************************

 Modifying boot archive service
 Activation of boot environment <zfsBE> successful. 


This message tells us not to worry: if something goes wrong, we can still boot from the old environment. So, naturally, we give it a try and reboot:

  root@T2000 # init 6
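As the luactivate message warned, only init or shutdown perform the Live Upgrade sync on the way down; an equivalent invocation via shutdown would be:

  root@T2000 # shutdown -y -g0 -i6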

After the system comes back up, we check that it booted into the right environment.

  root@T2000 # lustatus
 Boot Environment           Is       Active Active    Can    Copy
 Name                       Complete Now    On Reboot Delete Status
 -------------------------- -------- ------ --------- ------ ----------
 ufsBE                      yes      no     no        yes    -
 zfsBE                      yes      yes    yes       no     -


Here we see that the active boot environment is now zfsBE, the one we just created. Now we can safely remove ufsBE (a sketch of that cleanup is just below).
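A minimal sketch of that cleanup, assuming we are sure we will never need to fall back to the old UFS environment:

  root@T2000 # ludelete ufsBE

After that, the old UFS slices on disk 0 can be repurposed, or kept around for a while as an extra safety net.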
As a result, we have a system with all the benefits of ZFS, obtained without any problems and with minimal downtime.
Thanks for your attention.

Source: https://habr.com/ru/post/126508/

