Storage Management (LVM & RAID)
Overview
FreeWorld OS includes comprehensive storage management capabilities through Logical Volume Manager (LVM) and RAID support. These systems provide flexible storage allocation, redundancy, and performance optimization at the kernel level.
Logical Volume Manager (LVM)
LVM provides flexible disk management by abstracting physical storage into logical volumes that can be resized, moved, and managed independently of physical hardware.
Components
- Physical Volumes (PVs) - Physical storage devices (disks, partitions)
- Volume Groups (VGs) - Collections of physical volumes
- Logical Volumes (LVs) - Virtual partitions created from volume groups
- Extents - Small allocation units (default 4MB)
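The arithmetic linking these pieces is simple: a logical volume's byte size is its extent count times the volume group's extent size. A minimal C sketch (the helper name is illustrative, not the kernel API):

```c
#include <stdint.h>

/* Default extent size in FreeWorld OS: 4 MB. */
#define DEFAULT_EXTENT_SIZE (4ULL * 1024 * 1024)

/* An LV's size in bytes is its extent count times the VG's extent size. */
static uint64_t lv_size_bytes(uint64_t extents, uint64_t extent_size)
{
    return extents * extent_size;
}
```

For example, the 1000-extent logical volume created in the usage example below occupies 1000 × 4 MB = 4000 MB.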
Features
- ✅ Physical Volume management (create, remove, scan)
- ✅ Volume Group management (create, extend, reduce, remove)
- ✅ Logical Volume management (create, extend, reduce, remove)
- ✅ Metadata management (read, write, scan)
- ✅ Linear and striped logical volumes
- ✅ Dynamic resizing
- ✅ UUID-based identification
API Functions
; Physical Volumes
pv_create(device_path, name) -> PV pointer
pv_remove(PV pointer) -> status
pv_find(name) -> PV pointer
; Volume Groups
vg_create(name, extent_size, PV_array, PV_count) -> VG pointer
vg_extend(VG pointer, PV_array, PV_count) -> status
vg_reduce(VG pointer, PV_array, PV_count) -> status
vg_remove(VG pointer) -> status
vg_find(name) -> VG pointer
; Logical Volumes
lv_create(VG pointer, name, extents, stripe_count, stripe_size) -> LV pointer
lv_extend(LV pointer, additional_extents) -> status
lv_reduce(LV pointer, extents_to_remove) -> status
lv_remove(LV pointer) -> status
lv_find(name) -> LV pointer
lv_open(LV pointer) -> file descriptor
lv_close(file descriptor) -> status
lv_read(file descriptor, buffer, count) -> bytes read
lv_write(file descriptor, buffer, count) -> bytes written
Usage Example
; Create Physical Volume
mov rdi, device_path
mov rsi, pv_name
call pv_create
mov [pv1], rax
; Create Volume Group
mov rdi, vg_name
mov rsi, 0 ; Default extent size
mov rdx, pv_array
mov rcx, 1 ; One PV
call vg_create
mov [vg1], rax
; Create Logical Volume
mov rdi, [vg1]
mov rsi, lv_name
mov rdx, 1000 ; 1000 extents
mov rcx, 0 ; Linear (no striping)
mov r8, 0 ; Default stripe size
call lv_create
mov [lv1], rax
; Open and use Logical Volume
mov rdi, [lv1]
call lv_open
mov [lv_fd], rax
RAID Support
RAID (Redundant Array of Independent Disks) provides data redundancy, performance improvement, or both through various RAID levels.
Supported RAID Levels
- RAID 0 (Striping) - Data striped across multiple disks for performance. No redundancy. Minimum 2 disks.
- RAID 1 (Mirroring) - Data mirrored across disks for redundancy. Minimum 2 disks, even number.
- RAID 5 (Parity) - Data striped with distributed parity. Can tolerate 1 disk failure. Minimum 3 disks.
- RAID 6 (Dual Parity) - Data striped with dual distributed parity. Can tolerate 2 disk failures. Minimum 4 disks.
- RAID 10 (Striped Mirrors) - Combination of RAID 1 and RAID 0. Minimum 4 disks, even number.
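The usable capacity each level yields can be sketched as a small calculation. The helper below is illustrative C, not the kernel's API; the RAID 1 case assumes every member disk holds one full mirrored copy:

```c
#include <stdint.h>

enum raid_level { RAID0, RAID1, RAID5, RAID6, RAID10 };

/* Usable capacity for n identical disks of disk_size bytes each; returns
 * 0 when n violates the level's minimum. Illustrative, not kernel code. */
static uint64_t raid_usable_size(enum raid_level level, uint64_t n,
                                 uint64_t disk_size)
{
    switch (level) {
    case RAID0:  return n >= 2 ? n * disk_size : 0;            /* no redundancy */
    case RAID1:  return (n >= 2 && n % 2 == 0) ? disk_size : 0; /* one mirrored copy (assumption) */
    case RAID5:  return n >= 3 ? (n - 1) * disk_size : 0;      /* one disk's worth of parity */
    case RAID6:  return n >= 4 ? (n - 2) * disk_size : 0;      /* two disks' worth of parity */
    case RAID10: return (n >= 4 && n % 2 == 0) ? (n / 2) * disk_size : 0;
    default:     return 0;
    }
}
```

So five 1 TB disks in RAID 5 yield 4 TB usable, while the same disks in RAID 6 (were a sixth level-4 minimum met with four disks) sacrifice two disks' worth of capacity to dual parity.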
Features
- ✅ RAID array creation and management
- ✅ Disk addition and removal
- ✅ Automatic failure detection
- ✅ Hot spare support (up to 8 hot spares)
- ✅ Automatic rebuild on failure
- ✅ Rebuild progress tracking
- ✅ Status monitoring
- ✅ Degraded mode operation
- ✅ Configurable chunk size (default 64KB)
API Functions
; RAID Array Management
raid_create(name, level, device_array, device_count, chunk_size, hot_spare_array, hot_spare_count) -> RAID pointer
raid_add_disk(RAID pointer, device_pointer, position) -> status
raid_remove_disk(RAID pointer, device_index) -> status
raid_fail_disk(RAID pointer, device_index) -> status
raid_rebuild(RAID pointer, device_index) -> status
raid_get_status(RAID pointer, status_buffer) -> status
raid_find(name) -> RAID pointer
; I/O Operations
raid_read(RAID pointer, buffer, offset, count) -> bytes read
raid_write(RAID pointer, buffer, offset, count) -> bytes written
; Monitoring
raid_monitor_init() -> status
raid_monitor_check() -> issue_count
raid_monitor_stop() -> status
raid_add_hot_spare(RAID pointer, device_pointer) -> status
raid_remove_hot_spare(RAID pointer, device_index) -> status
raid_get_rebuild_progress(RAID pointer, device_index) -> progress_percentage
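How raid_read and raid_write locate data on member disks is easiest to see for RAID 0. The sketch below is illustrative C, assuming data is striped round-robin across devices in chunk_size units (64KB by default); it is not the kernel's internal code:

```c
#include <stdint.h>

struct chunk_loc {
    uint32_t disk;         /* which member disk holds the byte */
    uint64_t disk_offset;  /* byte offset on that disk */
};

/* Map a linear array offset to a (disk, offset) pair for RAID 0. */
static struct chunk_loc raid0_map(uint64_t offset, uint32_t chunk_size,
                                  uint32_t device_count)
{
    uint64_t chunk  = offset / chunk_size;   /* global chunk index */
    uint64_t stripe = chunk / device_count;  /* stripe (row) index */
    struct chunk_loc loc;
    loc.disk        = (uint32_t)(chunk % device_count);
    loc.disk_offset = stripe * chunk_size + offset % chunk_size;
    return loc;
}
```

With 4 disks and 64KB chunks, offset 0 lands on disk 0, offset 64KB on disk 1, and offset 256KB wraps back to disk 0 at disk offset 64KB.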
RAID Status Flags
- RAID_STATUS_ACTIVE - Array is active and operational
- RAID_STATUS_DEGRADED - Array is degraded but operational
- RAID_STATUS_REBUILDING - Array is rebuilding a failed disk
- RAID_STATUS_FAILED - Array has failed and is unusable
- RAID_STATUS_OFFLINE - Array is offline
Usage Example
; Create RAID 5 array
mov rdi, raid_name
mov rsi, RAID_LEVEL_5
mov rdx, device_array
mov rcx, 5 ; 5 disks
mov r8, 0 ; Default chunk size
mov r9, hot_spare_array
push 2 ; 2 hot spares (7th argument, passed on the stack)
call raid_create
add rsp, 8 ; remove the stack-passed argument
mov [raid1], rax
; Monitor RAID arrays
call raid_monitor_init
; Check for issues
call raid_monitor_check
; Returns number of issues found
; Get rebuild progress
mov rdi, [raid1]
mov rsi, 2 ; Device index
call raid_get_rebuild_progress
; Returns 0-100 percentage
Hot Spares
Hot spares are standby disks that automatically replace failed disks in a RAID array, enabling automatic recovery without manual intervention.
Features
- ✅ Up to 8 hot spares per RAID array
- ✅ Automatic activation on disk failure
- ✅ Automatic rebuild initiation
- ✅ Hot spare management (add, remove)
Hot Spare Activation
When a disk fails in a RAID array:
1. The system detects the failure
2. Checks for available hot spares
3. Automatically replaces the failed disk with a hot spare
4. Initiates the rebuild process
5. Updates the array status
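The activation sequence can be sketched with simplified mock structures (illustrative C, not kernel code; field names are assumptions):

```c
enum { RAID_ACTIVE, RAID_DEGRADED, RAID_REBUILDING };

struct mock_array {
    int device_failed[32];  /* 1 if the device at this slot has failed */
    int spare_count;        /* available hot spares */
    int status;
    int rebuilding_index;   /* device slot being rebuilt, -1 if none */
};

/* Called when device `idx` fails: consume a hot spare if one exists and
 * start a rebuild, otherwise drop to degraded mode. Returns 0 on
 * success, -1 when no spare is available. */
static int activate_hot_spare(struct mock_array *a, int idx)
{
    a->device_failed[idx] = 1;    /* 1. failure detected */
    if (a->spare_count == 0) {    /* 2. any spare available? */
        a->status = RAID_DEGRADED;
        return -1;
    }
    a->spare_count--;             /* 3. spare replaces the failed disk */
    a->device_failed[idx] = 0;
    a->status = RAID_REBUILDING;  /* 4. rebuild starts */
    a->rebuilding_index = idx;    /* 5. array status updated */
    return 0;
}
```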
Monitoring and Maintenance
The RAID monitoring system continuously checks all RAID arrays for issues and automatically handles recovery.
Monitoring Features
- ✅ Periodic health checks (every 60 seconds)
- ✅ Automatic failure detection
- ✅ Automatic rebuild initiation
- ✅ Rebuild progress tracking
- ✅ Status reporting
Monitoring Process
1. The monitor checks each RAID array
2. Detects failed devices
3. Checks whether a rebuild is needed
4. Initiates a rebuild if a hot spare is available
5. Updates rebuild progress
6. Reports the issues found
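A single pass in the spirit of raid_monitor_check() might look like the mock sketch below (the structure fields and the 10%-per-pass rebuild rate are illustrative assumptions, not the kernel's values):

```c
struct monitored_array {
    int failed_devices;  /* devices currently failed */
    int has_spare;       /* a hot spare is available for rebuild */
    int rebuild_pct;     /* progress of the active rebuild, 0-100 */
};

/* One monitoring pass; returns the number of arrays with issues. */
static int monitor_check(struct monitored_array *arrays, int count)
{
    int issues = 0;
    for (int i = 0; i < count; i++) {
        struct monitored_array *a = &arrays[i];
        if (a->failed_devices == 0)
            continue;                 /* healthy: nothing to do */
        issues++;                     /* degraded or rebuilding */
        if (!a->has_spare)
            continue;                 /* no spare: stays degraded */
        a->rebuild_pct += 10;         /* advance the rebuild */
        if (a->rebuild_pct >= 100) {  /* rebuild finished */
            a->failed_devices--;
            a->rebuild_pct = 0;
        }
    }
    return issues;
}
```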
Data Structures
Physical Volume Structure (512 bytes)
- Magic number (PV_MAGIC)
- Version information
- Name (255 bytes)
- Device major/minor numbers
- Size and free space
- Extent size and counts
- VG UUID (if in volume group)
- Status flags
Volume Group Structure (1024 bytes)
- Magic number (VG_MAGIC)
- Version information
- Name (127 bytes)
- UUID (16 bytes)
- Extent size
- Total and free extents
- PV and LV counts
- Status flags
Logical Volume Structure (1024 bytes)
- Magic number (LV_MAGIC)
- Version information
- Name (127 bytes)
- UUID (16 bytes)
- VG UUID (16 bytes)
- Extent count
- Stripe parameters
- Segment count
- Status flags
RAID Array Structure (2048 bytes)
- Magic number (RAID_MAGIC)
- Version information
- RAID level
- Device and hot spare counts
- Chunk size
- Status flags
- Failed device count
- Total and usable size
- Name (256 bytes)
- Device pointers (32 devices)
- Hot spare pointers (8 hot spares)
- Failed device indices
- Rebuild progress per device
- Device status per device
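One field layout consistent with the documented 2048-byte size is sketched below. Only the overall size and the field list come from this document; the field order, widths, and padding are assumptions for illustration:

```c
#include <stdint.h>

struct raid_array {
    uint32_t magic;             /* offset    0: RAID_MAGIC */
    uint32_t version;           /* offset    4 */
    uint32_t level;             /* offset    8 */
    uint32_t device_count;      /* offset   12 */
    uint32_t hot_spare_count;   /* offset   16 */
    uint32_t chunk_size;        /* offset   20 */
    uint32_t status;            /* offset   24 */
    uint32_t failed_count;      /* offset   28 */
    uint64_t total_size;        /* offset   32 */
    uint64_t usable_size;       /* offset   40 */
    char     name[256];         /* offset   48 */
    uint64_t devices[32];       /* offset  304: device pointers */
    uint64_t hot_spares[8];     /* offset  560: hot spare pointers */
    uint8_t  failed_index[32];  /* offset  624: failed device indices */
    uint8_t  rebuild_pct[32];   /* offset  656: rebuild progress per device */
    uint8_t  device_status[32]; /* offset  688: status per device */
    uint8_t  reserved[1328];    /* offset  720: pad to 2048 bytes */
};

_Static_assert(sizeof(struct raid_array) == 2048,
               "RAID array structure must be 2048 bytes");
```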
Integration
Storage management integrates with:
- Storage Drivers - Uses AHCI, NVMe, USB storage, SCSI drivers
- Filesystem Drivers - Logical volumes can be formatted with any supported filesystem
- Device Management - Integrates with device enumeration and management
- Kernel Services - Uses kernel services for monitoring and maintenance
Build System
Storage management components are built using the Makefile in kernel/storage/:
cd kernel/storage
make
This produces object files that are linked into the kernel:
- lvm_core.o - LVM core functionality
- lvm_metadata.o - LVM metadata management
- raid_core.o - RAID core functionality
- raid_monitoring.o - RAID monitoring and hot spares
Limits and Constraints
- Maximum Physical Volumes: 256
- Maximum Volume Groups: 64
- Maximum Logical Volumes: 1024
- Maximum RAID Arrays: 64
- Maximum Devices per RAID: 32
- Maximum Hot Spares per RAID: 8
- Default Extent Size: 4MB
- Default Chunk Size: 64KB
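Expressed as compile-time constants (identifier names are illustrative; the kernel's actual headers may differ):

```c
/* Documented limits and defaults for FreeWorld OS storage management. */
#define LVM_MAX_PVS             256
#define LVM_MAX_VGS             64
#define LVM_MAX_LVS             1024
#define RAID_MAX_ARRAYS         64
#define RAID_MAX_DEVICES        32
#define RAID_MAX_HOT_SPARES     8
#define LVM_DEFAULT_EXTENT_SIZE (4u * 1024 * 1024) /* 4 MB */
#define RAID_DEFAULT_CHUNK_SIZE (64u * 1024)       /* 64 KB */
```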