Filesystem on a partition goes missing after EC2 reboot

I created a d2.xlarge EC2 instance on AWS, which shows the following block devices:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk 
`-xvda1 202:1    0    8G  0 part /
xvdb    202:16   0  1.8T  0 disk 
xvdc    202:32   0  1.8T  0 disk 
xvdd    202:48   0  1.8T  0 disk 

The default /etc/fstab looks like this:

LABEL=cloudimg-rootfs   /        ext4   defaults,discard        0 0
/dev/xvdb       /mnt    auto    defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig       0       2

Now I make an ext4 filesystem on xvdc:

$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000, 214990848

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:            
done

blkid returns a UUID for the filesystem:

$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"

Then I mount it at /mnt5:

$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
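
To confirm the mount took effect, standard tools can be used (a sketch using the mount point from this question):

```shell
# Show size and usage of the newly mounted filesystem
df -h /mnt5

# Confirm which device is mounted there and with what options
findmnt /mnt5
```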

It mounts successfully. Up to this point, everything works fine.

Now I restart the machine (first stop it, then start it) and SSH back into it.

I run:

$ sudo blkid /dev/xvdc

It returns nothing. Where did the filesystem I created before the reboot go? I assumed a filesystem, once created, would persist across a reboot cycle. Am I missing something about mounting a partition on an AWS EC2 instance?

I followed http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html, but it does not work as described above.


Answer

You need to read up on EC2 ephemeral instance store volumes. On a d2.xlarge, the three 1.8T devices (xvdb, xvdc, xvdd) are instance store volumes, and when you stop an instance, the data on them is lost. Data survives an actual reboot/restart operation, but not a stop followed later by a start: a stop/start is not considered a "reboot" on EC2. When you stop an instance it is completely shut down, and when you start it again it is essentially recreated on different backing hardware.
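
One way to see which devices were attached as instance store is the instance metadata service (a sketch; this only works from inside the instance itself):

```shell
# Instance store devices appear as ephemeral0, ephemeral1, ... in the
# block-device mapping; EBS volumes appear under their own device names.
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
```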

In other words, what you describe isn't an issue; it is expected behavior. You need to understand how these volumes work before depending on them.

User contributions licensed under: CC BY-SA