Table of Contents
- Prerequisites
- Understanding Linux Storage Concepts
- Ansible Basics for Storage Automation
- Step-by-Step Automation Examples
- Best Practices for Storage Automation with Ansible
- Troubleshooting Common Issues
- References
Prerequisites
Before diving in, ensure you have the following:
- Ansible Control Node: A machine with Ansible installed (v2.10+ recommended). Install with `pip install ansible` or your distribution's package manager (e.g., `sudo apt install ansible` on Debian/Ubuntu).
- Target Linux Nodes: One or more Linux servers (e.g., Ubuntu, RHEL, CentOS) where storage will be managed.
- SSH Access: Ansible requires passwordless SSH access to target nodes (configure with `ssh-keygen` and `ssh-copy-id`).
- Root Privileges: Ansible tasks for storage management need `sudo` access on target nodes (configure `sudoers` or use `become: yes` in playbooks).
- Storage Hardware: A spare disk (e.g., `/dev/sdb`) on target nodes for testing (warning: use a non-production disk to avoid data loss!).
Understanding Linux Storage Concepts
To automate storage, you first need to grasp key Linux storage components. Here’s a quick overview:
- Disk Partitions: Disks (e.g., `/dev/sda`) are divided into partitions (e.g., `/dev/sda1`) to organize space. Tools like `parted` or `fdisk` manage partitions.
- LVM (Logical Volume Manager): A flexible storage system that abstracts physical disks into:
  - Physical Volumes (PVs): Partitions or entire disks (e.g., `/dev/sdb1`) allocated to LVM.
  - Volume Groups (VGs): Pools of PVs (e.g., `vg_data`).
  - Logical Volumes (LVs): Virtual “disks” carved from VGs (e.g., `lv_appdata`), which can be resized dynamically.
- Filesystems: LVs or partitions are formatted with filesystems (e.g., `ext4`, `xfs`) to store data.
- Mounting: Filesystems are attached to a directory (mount point, e.g., `/mnt/appdata`) using `mount`; `/etc/fstab` ensures persistence across reboots.
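Putting these together, the layers stack as follows; the concrete names are the illustrative ones this guide's examples use:

```text
/dev/sdb                          physical disk
└── /dev/sdb1                     partition (parted/fdisk)
    └── PV: /dev/sdb1             LVM Physical Volume
        └── VG: vg_data           Volume Group (pool of PVs)
            └── LV: lv_appdata    Logical Volume (resizable)
                └── ext4          filesystem
                    └── /mnt/appdata   mount point (persisted in /etc/fstab)
```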
Ansible Basics for Storage Automation
Ansible uses playbooks (YAML files) to define tasks. For storage, key modules include:
| Module | Purpose | Example Use Case |
|---|---|---|
| `parted` | Manage disk partitions | Create a 10GB partition on `/dev/sdb`. |
| `lvg` | Manage LVM volume groups | Create a VG named `vg_data` from `/dev/sdb1`. |
| `lvol` | Manage LVM logical volumes | Create an LV named `lv_appdata` in `vg_data`. |
| `filesystem` | Format filesystems | Format `lv_appdata` with `ext4`. |
| `mount` | Mount filesystems and update `/etc/fstab` | Mount `lv_appdata` to `/mnt/appdata`. |
| `file` | Create directories (mount points) | Create `/mnt/appdata`. |
Ansible’s idempotency (tasks run only if needed) ensures playbooks can be safely re-run without unintended changes.
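Idempotency can also be previewed before anything is applied: `ansible-playbook --check` (optionally with `--diff`) does a dry run and reports what would change. Raw `command` tasks are the exception, since Ansible cannot introspect arbitrary commands; one way to make such a task idempotent is the `creates` argument (a sketch; the script and marker file paths are hypothetical):

```yaml
# command tasks report "changed" on every run unless you tell Ansible otherwise.
# 'creates' skips the task entirely once the named path exists:
- name: One-time storage initialization
  ansible.builtin.command: /usr/local/bin/init-storage.sh   # hypothetical script
  args:
    creates: /var/lib/app/.initialized   # hypothetical marker file written by the script
```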
Step-by-Step Automation Examples
Let’s walk through practical examples to automate storage tasks. All playbooks below can be saved as storage_automation.yml and run with:
```bash
ansible-playbook -i inventory.ini storage_automation.yml
```
4.1 Setting Up the Ansible Inventory
First, define your target nodes in an inventory file (e.g., inventory.ini):
```ini
[storage_nodes]
# Replace with your target node's IP/hostname
server1.example.com
```
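Connection settings can live in the inventory too; a sketch with group-level variables (the second host and the remote user name are assumptions for illustration):

```ini
[storage_nodes]
server1.example.com
server2.example.com

# Hypothetical remote user with passwordless sudo on the targets
[storage_nodes:vars]
ansible_user=ansible
ansible_become=true
```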
4.2 Creating Disk Partitions with parted
Use the parted module to create a partition on a spare disk (e.g., /dev/sdb). This example creates a 20GB primary partition:
```yaml
- name: Create a disk partition with parted
  hosts: storage_nodes
  become: yes  # Run with sudo
  tasks:
    - name: Create a 20GB primary partition on /dev/sdb
      community.general.parted:
        device: /dev/sdb    # Target disk
        number: 1           # Partition number (1st partition)
        state: present      # Ensure partition exists
        part_end: 20GB      # Partition ends at the 20GB mark
        label: gpt          # Use GPT partition table (supports >2TB disks)
        part_type: primary  # Partition type

    - name: Verify partition creation
      ansible.builtin.command: lsblk /dev/sdb
      register: lsblk_output
      changed_when: false   # Read-only check; never report "changed"

    - name: Print partition details
      ansible.builtin.debug:
        var: lsblk_output.stdout_lines
```

Explanation:
- `device: /dev/sdb`: Target disk (replace with your disk!).
- `part_end: 20GB`: Ends the partition at 20GB (adjust as needed).
- `label: gpt`: Uses GPT (instead of MBR) for modern systems.
4.3 Configuring LVM (PV, VG, LV)
Next, convert the partition to a Physical Volume (PV), create a Volume Group (VG), and carve a Logical Volume (LV):
```yaml
- name: Configure LVM (PV, VG, LV)
  hosts: storage_nodes
  become: yes
  tasks:
    # Step 1: Create Physical Volume (PV) from the partition
    - name: Check whether /dev/sdb1 is already a PV
      ansible.builtin.command: pvs /dev/sdb1
      register: pv_check
      changed_when: false
      failed_when: false   # Non-zero exit just means "not a PV yet"

    - name: Create PV on /dev/sdb1
      ansible.builtin.command: pvcreate /dev/sdb1
      when: pv_check.rc != 0   # Idempotent: only run if not already a PV

    # Step 2: Create Volume Group (VG) named 'vg_data' using /dev/sdb1
    - name: Create VG 'vg_data' with /dev/sdb1
      community.general.lvg:
        vg: vg_data      # VG name
        pvs: /dev/sdb1   # PV to add to VG
        state: present   # Ensure VG exists

    # Step 3: Create Logical Volume (LV) 'lv_appdata' (15GB) in 'vg_data'
    - name: Create LV 'lv_appdata' (15GB) in vg_data
      community.general.lvol:
        vg: vg_data      # Parent VG
        lv: lv_appdata   # LV name
        size: 15g        # LV size (15GB)
        state: present   # Ensure LV exists
```

Explanation:
- `pvcreate /dev/sdb1`: Converts the partition to a PV. The preceding `pvs` check keeps the task idempotent: the device node `/dev/sdb1` exists whether or not it is a PV, so checking for the file alone (e.g., with `creates`) would not work here.
- `lvg` module: Creates `vg_data` using `/dev/sdb1` as the PV.
- `lvol` module: Creates a 15GB LV named `lv_appdata` in `vg_data`.
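Because LVs can be resized dynamically (one of LVM's main draws), growing `lv_appdata` later is a single-task change; a hedged sketch (the new size is arbitrary, and `resizefs: true` asks the `lvol` module to grow the filesystem along with the LV):

```yaml
- name: Grow lv_appdata to 18GB and resize its filesystem
  community.general.lvol:
    vg: vg_data
    lv: lv_appdata
    size: 18g        # New total size (must be larger than the current size)
    resizefs: true   # Also grow the filesystem on the LV in place
```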
4.4 Formatting Filesystems
Use the `filesystem` module to format the LV with `ext4` (or `xfs`, `btrfs`, etc.):

```yaml
- name: Format LV with ext4 filesystem
  hosts: storage_nodes
  become: yes
  tasks:
    - name: Format /dev/vg_data/lv_appdata with ext4
      community.general.filesystem:
        fstype: ext4                  # Filesystem type
        dev: /dev/vg_data/lv_appdata  # Target LV
        force: no                     # Do NOT overwrite an existing filesystem (safety!)
```

Warning: Set `force: yes` only if you intend to reformat (this destroys data!).
4.5 Mounting Filesystems Persistently
Finally, mount the formatted LV to a directory (e.g., /mnt/appdata) and update /etc/fstab to ensure it mounts on reboot:
```yaml
- name: Mount LV and update fstab
  hosts: storage_nodes
  become: yes
  tasks:
    - name: Create mount point directory /mnt/appdata
      ansible.builtin.file:
        path: /mnt/appdata   # Mount point path
        state: directory     # Ensure directory exists
        mode: '0755'         # Permissions

    - name: Mount /dev/vg_data/lv_appdata to /mnt/appdata
      ansible.posix.mount:             # mount lives in the ansible.posix collection
        path: /mnt/appdata             # Mount point
        src: /dev/vg_data/lv_appdata   # Source LV
        fstype: ext4                   # Filesystem type
        state: mounted                 # Mount now and update /etc/fstab
```

Verification: After running the playbook, check the mount with `df -h /mnt/appdata` on the target node.
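The same check can be done from Ansible itself using the `ansible_mounts` fact (a sketch; assumes fact gathering is enabled, which is the default):

```yaml
- name: Assert that /mnt/appdata is mounted
  ansible.builtin.assert:
    that:
      - ansible_mounts | selectattr('mount', 'equalto', '/mnt/appdata') | list | length > 0
    fail_msg: "/mnt/appdata is not mounted"
```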
4.6 Managing LVM Snapshots (for Backups)
LVM snapshots create point-in-time copies of LVs, ideal for backups. Use the lvol module to create a snapshot:
```yaml
- name: Create LVM snapshot of lv_appdata
  hosts: storage_nodes
  become: yes
  tasks:
    - name: Create a 5GB snapshot of lv_appdata
      community.general.lvol:
        vg: vg_data
        lv: lv_appdata              # Original LV to snapshot
        snapshot: lv_appdata_snap   # Snapshot name
        size: 5g                    # Snapshot size (must be large enough to hold changes)
        state: present
```

Note that in the `lvol` module, `lv` names the origin LV and `snapshot` names the snapshot to create. Usage: Mount the snapshot (e.g., at `/mnt/snap`) to back up data, then delete it with `state: absent`.
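A sketch of that backup-then-discard flow, assuming `/mnt/snap` as a temporary mount point (`force: yes` is required by `lvol` to remove an LV):

```yaml
- name: Mount snapshot read-only for backup
  ansible.posix.mount:
    path: /mnt/snap
    src: /dev/vg_data/lv_appdata_snap
    fstype: ext4
    opts: ro            # Read-only: the snapshot is a point-in-time copy
    state: mounted

# ... run your backup tool against /mnt/snap here ...

- name: Unmount snapshot and remove its fstab entry
  ansible.posix.mount:
    path: /mnt/snap
    state: absent

- name: Delete snapshot
  community.general.lvol:
    vg: vg_data
    lv: lv_appdata_snap
    state: absent
    force: yes          # lvol requires force to remove an LV
```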
4.7 Cleanup: Removing Storage Components
To reverse the setup (e.g., for testing), use this playbook to unmount, remove the LV/VG/PV, and delete the partition:
```yaml
- name: Cleanup storage components
  hosts: storage_nodes
  become: yes
  tasks:
    - name: Unmount /mnt/appdata and remove its fstab entry
      ansible.posix.mount:
        path: /mnt/appdata
        state: absent    # Unmount now and drop the /etc/fstab entry

    - name: Remove LV lv_appdata
      community.general.lvol:
        vg: vg_data
        lv: lv_appdata
        state: absent
        force: yes       # Required by lvol to remove an LV (destroys its data)

    - name: Remove VG vg_data
      community.general.lvg:
        vg: vg_data
        state: absent

    - name: Remove PV label from /dev/sdb1
      ansible.builtin.command: pvremove /dev/sdb1
      args:
        removes: /dev/sdb1   # Skip once the partition itself is gone

    - name: Delete partition /dev/sdb1
      community.general.parted:
        device: /dev/sdb
        number: 1
        state: absent
```
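After cleanup, the disk should be back to an empty state; a quick check can be appended to the same task list, mirroring the verification pattern from section 4.2:

```yaml
- name: Verify /dev/sdb has no partitions left
  ansible.builtin.command: lsblk /dev/sdb
  register: lsblk_after
  changed_when: false   # Read-only check

- name: Show final disk layout
  ansible.builtin.debug:
    var: lsblk_after.stdout_lines
```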
Best Practices for Storage Automation with Ansible
- Idempotency First: Guard `command` tasks with `creates`/`removes` or `when` conditions, and avoid `force: yes` unless necessary.
- Test in Staging: Never run storage playbooks on production systems without testing in a staging environment.
- Backup Data: Take snapshots or backups before modifying storage (e.g., use `lvol` snapshots as shown earlier).
- Use Variables: Define disk names, sizes, and mount points as variables for reusability:
  ```yaml
  vars:
    target_disk: /dev/sdb
    partition_size: 20GB
    vg_name: vg_data
    lv_name: lv_appdata
    mount_point: /mnt/appdata
  ```
- Document Playbooks: Add comments explaining why tasks exist (e.g., “20GB partition for LVM PV”).
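As a sketch of the variables practice, the partition task from earlier can be rewritten to consume such vars (values repeated here so the play is self-contained):

```yaml
- name: Parameterized partition creation
  hosts: storage_nodes
  become: yes
  vars:
    target_disk: /dev/sdb
    partition_size: 20GB
  tasks:
    - name: "Create partition 1 on {{ target_disk }}"
      community.general.parted:
        device: "{{ target_disk }}"
        number: 1
        state: present
        part_end: "{{ partition_size }}"
        label: gpt
```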
Troubleshooting Common Issues
- “Device or resource busy”: The filesystem is mounted. Unmount it first with `state: unmounted`.
- “Partition not found”: Verify the disk name (e.g., `/dev/sdb` vs. `/dev/sdc`) with `lsblk` on the target node.
- LVM Module Errors: Ensure `lvm2` is installed on target nodes (`sudo apt install lvm2` or `sudo yum install lvm2`).
- Permission Denied: Use `become: yes` to run tasks as root.
References
- Ansible `parted` Module Documentation
- Ansible `lvg` Module Documentation
- Ansible `mount` Module Documentation
- LVM Documentation
- Ansible Best Practices Guide
By following this guide, you can automate Linux storage management with Ansible, reducing errors and saving time. Start small with testing environments, then scale to production!