I have access to a server that holds a lot of data. I can't copy all of the data onto my computer.
I can't compile the program I want on the server, because the server doesn't have all the libs I need.
I don't think the server admin would be very happy to see me coming and asking him to install some libs just for me…
So I'm trying to figure out whether there is a way to open a file, as with
FILE *fopen(const char *filename, const char *mode);
or
void std::ifstream::open(const char* filename, ios_base::openmode mode = ios_base::in);
but over an SSH connection, and then read the file as I would in a usual program.
Both my computer and the server are running Linux.
Answer
I assume you are working on your Linux laptop and the remote machine is some supercomputer.
First, some non-technical advice: ask permission before accessing the data remotely. In some workplaces you are not allowed to do that, even if it is technically possible.
You could sort of use libssh for that purpose, but you'll need to write some code and read its documentation; a sketch follows.
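Here is a minimal sketch of that approach using libssh's SFTP API, assuming key-based authentication; the host name, user and remote path are hypothetical placeholders and most error handling is omitted. Link with -lssh.

    // Minimal libssh/SFTP sketch: open a remote file over SSH and read it in chunks.
    #include <libssh/libssh.h>
    #include <libssh/sftp.h>
    #include <fcntl.h>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        ssh_session session = ssh_new();
        if (!session) return EXIT_FAILURE;
        ssh_options_set(session, SSH_OPTIONS_HOST, "supercomputer.example.org");
        ssh_options_set(session, SSH_OPTIONS_USER, "yourlogin");
        if (ssh_connect(session) != SSH_OK
            || ssh_userauth_publickey_auto(session, nullptr, nullptr) != SSH_AUTH_SUCCESS) {
            std::fprintf(stderr, "ssh: %s\n", ssh_get_error(session));
            ssh_free(session);
            return EXIT_FAILURE;
        }
        sftp_session sftp = sftp_new(session);
        sftp_init(sftp);
        sftp_file f = sftp_open(sftp, "/data/foo.bar", O_RDONLY, 0);
        if (f) {
            char buf[4096];
            ssize_t n;
            while ((n = sftp_read(f, buf, sizeof buf)) > 0)
                std::fwrite(buf, 1, (size_t)n, stdout);   // process the bytes as you would after fread()
            sftp_close(f);
        }
        sftp_free(sftp);
        ssh_disconnect(session);
        ssh_free(session);
        return EXIT_SUCCESS;
    }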
You could consider using some FUSE file system (on your laptop), e.g. sshfs; you would then be able to access the supercomputer's files as /sshfilesystem/foo.bar (see the example after this paragraph). It is probably the slowest solution, and probably not a very reliable one, so I don't really recommend it.
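Once such a mount is in place, remote files are read with ordinary I/O, exactly as the question asks; the mount command in the comment and the paths are hypothetical.

    // After something like:
    //   sshfs yourlogin@supercomputer.example.org:/data /sshfilesystem
    // remote files can be opened with a plain std::ifstream (or fopen).
    #include <fstream>
    #include <iostream>
    #include <string>

    int main()
    {
        std::ifstream in("/sshfilesystem/foo.bar");
        if (!in) {
            std::cerr << "cannot open remote file\n";
            return 1;
        }
        std::string line;
        while (std::getline(in, line))
            std::cout << line << '\n';   // each read goes over SSH behind the scenes
    }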
You could ask permission to use NFS mounts.
Maybe you might consider some HTTPS access (if the remote computer provides it for your files) using an HTTP/HTTPS client library like libcurl (or, the other way round, an HTTP/HTTPS server library like libonion); a libcurl sketch is shown below.
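As an illustration, here is a rough libcurl sketch fetching part of a file over HTTPS into memory; the URL is a made-up placeholder and the remote server must actually expose the file. Link with -lcurl.

    // Fetch the first few kilobytes of a remote file over HTTPS with libcurl.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // libcurl calls this for every chunk of the response body it receives
    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, "https://supercomputer.example.org/data/foo.bar");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        curl_easy_setopt(curl, CURLOPT_RANGE, "0-4095");   // only the first 4 KiB, if the server honours ranges
        CURLcode rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            std::cerr << "curl: " << curl_easy_strerror(rc) << '\n';
        else
            std::cout << body;
        curl_easy_cleanup(curl);
        curl_global_cleanup();
    }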
And you might (but ask permission first!) use some TLS connection (e.g. manually start a server-like program on the remote supercomputer), perhaps through OpenSSL or libgnutls; see the client sketch below.
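On the client side that could look roughly like this OpenSSL sketch; the host, port and one-line request protocol are entirely invented, and certificate verification is omitted for brevity. Link with -lssl -lcrypto.

    // Connect to your own hand-written TLS "file server" and read its reply.
    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>
    #include <cstdio>

    int main()
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        // NB: no certificate verification is configured here; a real client should add it.
        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "supercomputer.example.org:4433");
        if (BIO_do_connect(bio) <= 0) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        BIO_puts(bio, "GETFILE /data/foo.bar\n");   // whatever request protocol you define yourself
        char buf[4096];
        int n;
        while ((n = BIO_read(bio, buf, sizeof buf)) > 0)
            std::fwrite(buf, 1, (size_t)n, stdout);
        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }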
Finally, you could consider installing (i.e. politely asking for its installation on the remote supercomputer) or using some database software (e.g. a PostgreSQL, MariaDB, Redis, or MongoDB server) on the remote computer and making your program a database client application; a libpq sketch follows.
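For instance, with a PostgreSQL server your program could use libpq roughly like this; the connection string and the table are invented for the example. Link with -lpq.

    // Query the remote database instead of reading remote files directly.
    #include <libpq-fe.h>
    #include <iostream>

    int main()
    {
        PGconn *conn = PQconnectdb("host=supercomputer.example.org dbname=mydata user=yourlogin");
        if (PQstatus(conn) != CONNECTION_OK) {
            std::cerr << "connection failed: " << PQerrorMessage(conn);
            PQfinish(conn);
            return 1;
        }
        PGresult *res = PQexec(conn, "SELECT name, value FROM measurements LIMIT 10");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            for (int i = 0; i < PQntuples(res); ++i)
                std::cout << PQgetvalue(res, i, 0) << " = " << PQgetvalue(res, i, 1) << '\n';
        PQclear(res);
        PQfinish(conn);
        return 0;
    }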
BTW, things might be different if you access a few dozen terabyte-sized files with random access (each run reading a few kilobytes inside them), or a million files of reasonable size (a few megabytes each), of which a given run reads only a dozen sequentially. In other words, DNA data, video films, HTML documents, source code, … are all different cases!