I went through the steps of adding a second partition to the root volume, because the EBS volume is 50GB and the first partition only had 8GB allocated. Here are the steps I took:
1.) Detach the volume and attach it as a secondary volume to another instance.
2.) Use gdisk to create the second partition:
```
Disk /dev/xvdk: 106954752 sectors, 51.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 433FEFB0-04CE-43BD-A1B7-269A18673537
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 106954718
Partitions will be aligned on 2048-sector boundaries
Total free space is 4062 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            4096        16773119    8.0 GiB    8300  Linux filesystem
   2        16773120       106954718   43.0 GiB    EF00  EFI System
```
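As a sanity check, the sector numbers in the gdisk listing above can be verified with plain shell arithmetic (all values are taken directly from that listing):

```shell
# Values from the gdisk listing above
SECTOR_BYTES=512
P1_START=4096;      P1_END=16773119
P2_START=16773120;  P2_END=106954718

# Both partitions start on a 2048-sector boundary, as gdisk reports
echo $(( P1_START % 2048 ))          # 0
echo $(( P2_START % 2048 ))          # 0

# Partition 2 begins in the sector right after partition 1 ends
echo $(( P1_END + 1 == P2_START ))   # 1

# Sizes in MiB, matching the ~8 GiB and ~43 GiB in the table
echo $(( (P1_END - P1_START + 1) * SECTOR_BYTES / 1024 / 1024 ))   # 8188 (~8 GiB)
echo $(( (P2_END - P2_START + 1) * SECTOR_BYTES / 1024 / 1024 ))   # 44033 (~43 GiB)
```

So the table itself is internally consistent; the boot failure is not caused by overlapping or misaligned partitions.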
3.) Change the filesystem of the new partition to ext4.
4.) Modify /etc/fstab on the first partition so the GUID matches.
5.) Reattach the volume to the original instance as the root volume.
Now the EC2 instance does not boot at all! It gets stuck on the boot screen when I view the console screenshot, and it eventually fails the status checks. What am I doing wrong? Can someone tell me where else I need to make changes on the root partition, or whether there is anything else I should be doing? I've looked everywhere and been through this process at least 10 times already! The instance is running Debian 8.
Answer
I would recommend resizing using a snapshot of the original volume. Below are the steps to resize a root volume (using AWS API Tools):
- Stop the EC2 instance
- Detach the root volume from the instance
- Create a snapshot of the root volume
- Create a new volume from the snapshot with the new size (e.g. 50GB), within the same availability zone
- Attach the new volume to the instance
- Start the instance and access it via ssh
- Run `resize2fs` (e.g. `sudo resize2fs /dev/xvda1`) to resize the new root filesystem
- Once you have confirmed everything works, delete the old root volume and the snapshot
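The steps above can be sketched with the AWS CLI (a modern stand-in for the older AWS API Tools; the instance ID, volume ID, availability zone, and device name below are placeholders — substitute your own):

```shell
#!/bin/sh
# Placeholder identifiers -- replace with your own values
INSTANCE_ID=i-0123456789abcdef0
OLD_VOLUME_ID=vol-0123456789abcdef0
AZ=us-east-1a

# 1. Stop the instance and detach its root volume
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
aws ec2 detach-volume --volume-id "$OLD_VOLUME_ID"

# 2. Snapshot the old root volume
SNAP_ID=$(aws ec2 create-snapshot --volume-id "$OLD_VOLUME_ID" \
    --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"

# 3. Create a 50GB volume from the snapshot in the same availability zone
NEW_VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$SNAP_ID" \
    --size 50 --availability-zone "$AZ" \
    --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$NEW_VOLUME_ID"

# 4. Attach the new volume as the root device and boot
aws ec2 attach-volume --volume-id "$NEW_VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/xvda
aws ec2 start-instances --instance-ids "$INSTANCE_ID"

# 5. On the instance, grow the filesystem into the larger volume
# ssh in and run:  sudo resize2fs /dev/xvda1
```

Because the new volume is a block-for-block copy of the old one, the partition table, GRUB, and /etc/fstab are untouched — only the filesystem is grown — which avoids the boot problems that come from editing the partition layout by hand.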