fix: Fix getting PVs from raid_disks for RAID LVs (#536)

richm merged 1 commit into linux-system-roles:main
Conversation
Currently the code expects that the disks specified in raid_disks are parent devices of the PVs we are going to use to allocate the RAID LVs. This might not be true in several cases: for example, if the VG is encrypted (the LUKS device introduces a new layer into the storage stack) or if partitions are not used (the disk itself is then the PV and doesn't have a parent). This fix also allows users to specify the PV partition instead of the underlying disk.
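For context, the scenario this fix enables can be sketched as a hypothetical playbook fragment. The variable names follow the storage role's documented interface (storage_pools, raid_level, raid_disks), but the pool, volume, and disk names are illustrative placeholders, not taken from this PR:

```yaml
# Hypothetical usage sketch: with this fix, raid_disks may name the PV
# partitions themselves rather than their parent disks, and matching still
# works when LUKS encryption adds a layer between the disk and the PV.
- hosts: all
  roles:
    - linux-system-roles.storage
  vars:
    storage_pools:
      - name: vg1
        type: lvm
        disks: [sda, sdb]
        encryption: true              # LUKS layer between partition and PV
        volumes:
          - name: lv1
            size: 2g
            raid_level: raid1
            raid_disks: [sda1, sdb1]  # PV partitions, not the parent disks
```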
Reviewer's Guide

Enhance LVM RAID disk resolution to correctly handle partitions and encrypted VGs by matching PVs via direct or ancestor relationships, add validation errors, and introduce integration tests covering encrypted VG RAID creation and removal with PV-level disk specification.

Class Diagram: Storage Entities for PV Matching in LVM RAID

```mermaid
classDiagram
    class Device {
        +name: string
    }
    class Disk {
        <<Represents resolved raid_disk entry>>
        +path: string
    }
    class PV {
        <<Physical Volume>>
        +ancestors: List~Device~
    }
    class ParentDevice {
        <<e.g. Volume Group>>
        +name: string
        +pvs: List~PV~
    }
    Disk --|> Device : is a
    PV --|> Device : is a
    ParentDevice "1" o-- "*" PV : contains
    PV "1" -- "*" Device : has_ancestor
```
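The matching relationship in the diagram can be illustrated with a minimal, self-contained Python sketch. The `Device` class and `match_pvs` helper here are hypothetical stand-ins for the role's real blivet-backed objects; the point is the fixed rule: a PV matches a raid_disks entry either directly (the disk *is* the PV, e.g. a partition) or when the disk appears anywhere among the PV's ancestors (e.g. beneath a LUKS layer):

```python
class Device:
    """Hypothetical stand-in for a storage-stack device node."""

    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

    @property
    def ancestors(self):
        # All transitive parents, not just direct ones -- this is why
        # ancestors (unlike parents) sees through an intermediate LUKS layer.
        result = []
        for parent in self.parents:
            result.append(parent)
            result.extend(parent.ancestors)
        return result


def match_pvs(pvs, disks):
    """Return PVs matching any raid_disks entry, directly or via ancestry."""
    matched = []
    for disk in disks:
        for pv in pvs:
            if pv is disk or disk in pv.ancestors:
                matched.append(pv)
    return matched


# Encrypted VG: the LUKS device sits between the partition and the PV,
# so the disk is an ancestor (not a direct parent) of the PV.
sda = Device("sda")
sda1 = Device("sda1", parents=[sda])
luks = Device("luks-sda1", parents=[sda1])  # the PV sits on top of LUKS

print([pv.name for pv in match_pvs([luks], [sda])])   # -> ['luks-sda1']
print([pv.name for pv in match_pvs([sda1], [sda1])])  # -> ['sda1']
```

The second call shows the new direct-match case: when partitions are used as PVs, the raid_disks entry and the PV are the same device, so no parent lookup is needed.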
File-Level Changes
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##            main     #536      +/-  ##
=========================================
- Coverage  16.54%   10.64%    -5.91%
=========================================
  Files          2        8        +6
  Lines        284     1964     +1680
  Branches      79        0       -79
=========================================
+ Hits          47      209      +162
- Misses       237     1755     +1518
```

Flags with carried forward coverage won't be shown.
Hey @vojtechtrefny - I've reviewed your changes - here's some feedback:
- The new test is titled “Create a RAID1 lvm raid device on encrypted VG” but uses raid_level: raid0—please update the test title or the raid_level to match.
- It would be good to add a test case where raid_disks points to a partition (not the whole disk) to verify the new pv == disk and disk in pv.ancestors logic handles that scenario.
- Consider adding a brief code comment or docstring explaining why we switched from pv.parents to pv.ancestors so future readers understand the intended device hierarchy resolution.
Here's what I looked at during the review
- 🟡 General issues: 1 issue found
- 🟢 Security: all looks good
- 🟢 Review instructions: all looks good
- 🟢 Testing: all looks good
- 🟢 Documentation: all looks good
```python
for pv in parent_device.pvs:
    if disk in pv.parents:
        pvs.append(pv)
if disk:
```
suggestion (bug_risk): Use an explicit None check for the resolved device

Use `if disk is not None:` to prevent issues if `disk` has a custom `__bool__` method.
```diff
-if disk:
+if disk is not None:
```
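The reviewer's concern can be demonstrated with a short self-contained example. `LazyDevice` is a hypothetical class (not from blivet or this role) whose custom `__bool__` makes a perfectly real object evaluate as falsy, so `if disk:` would silently skip it while `if disk is not None:` behaves correctly:

```python
class LazyDevice:
    """Hypothetical device wrapper that is falsy until metadata is loaded."""

    def __init__(self, name):
        self.name = name
        self.loaded = False

    def __bool__(self):
        # Truthiness reflects load state, not existence of the object.
        return self.loaded


disk = LazyDevice("sda")
print(bool(disk))        # False, even though disk is a real object
print(disk is not None)  # True -- the explicit check does the right thing
```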
[citest] |
[citest_bad] |
Summary by Sourcery
Improve PV resolution in LVM RAID to support encrypted and partitioned devices and add corresponding tests