mark 1 mod 4
Changelog
This is where I tell everybody what I've done since mod3. You can skip this if it's your first time here.
1. Some formatting. Finally got around to adding some boldface and italics.
2. Some discussion of sshfs vs. NFS tunneling for security. I now think it might be better to tunnel NFS, and I explain why under "Who this tutorial is not for." Watch for a "HowTo set up sshfs" coming in mark 2 of this HowTo. I'll also explain HOW to tunnel NFS in mark 2.
3. Mention is made (thank you very much, birdyWA) that nfs4 IS working in squeeze. Expect in-depth coverage of nfs4 in mark 2, mod 2 or mod 3. I'll need a bit of time to set up the squeeze experimental platform and break nfs4 enough times to feel confident giving a howto.
Who this tutorial is for:
Most anybody who wants to share files between *NIX machines by creating common directories. Also, anybody who wants a refresher on the basics of NFS3 and how to implement shares from the CLI.
People who want a quick, easy, way to share folders behind a firewall or gateway.
People who are comfortable with terminal commands. It's OK if you're a novice. So long as you follow these directions, you definitely can make NFS work. There will be no treatment of GUI utilities here. I don't hate the GUI, I just don't know it. Somebody else will have to address that issue. You won't need the GUI to set up sane, robust NFS shares. Read on, if you like.
Who this tutorial is not for:
People looking to network Windows platforms with *NIX platforms. NFS can be made to do this, but it's a pain and I will not touch the issue here. You want Samba for that task.
People who need very tight security, or people who want to share folders across a WAN. Again, NFS will play ball if you tunnel it through ssh, and tunneling isn't necessary at all if you use sshfs instead of NFS. These days, I actually think a tunnelled NFS share may be best for long-term WAN connections, because NFS shares set up very much like normal filesystems. For instance, a permanent NFS share in fstab gives you a sane, logical place to check the parameters of your share as seen by the client machine. It sits in the stack with all of your other permanently mounted filesystems.
sshfs doesn't work like that. Setting up an sshfs mount is REALLY REALLY quick and easy (it's so easy I couldn't believe I hadn't messed up the first time I did it). The caveat is that sshfs mounts don't set up "normally": you can see them in mtab, but you can't register them in fstab. I don't like losing the sanity of having everything in front of my nose in fstab. So, as of now, I'm advocating sshfs for any temporary connections, secure or otherwise (you won't believe how quickly it sets up!). For permanent shares, I'd rather set up a normal NFS share and tunnel it through ssh. In case you think I'm teasing and leaving you all in the dark about both sshfs setup and HOW to tunnel your NFS shares, I'm not. I will soon be posting instructions for both of those operations. Bear with me.
As an aside, NFS4 will accept Kerberos security, but it's not currently working in lenny. I have good info that it works in squeeze. When I have worked all of that out, I'll add a section to this howto.
So, What is NFS?
NFS, or Network File System, is an internet protocol that allows sharing of data between a client and server machine. The shared directory looks and acts like a local device to the client machine. The lines above are paraphrased from the manpage.
Basically, you will create a directory on one machine, called the server. This will be shared over some network connection with another machine, called the client. The client machine will think it has a "normal" directory even though it will be talking to a drive on a totally different machine. It will look, act, taste, and smell like any of the other folders on the client machine.
So, if Bob has a directory called "Photos" that he's sharing with Sue, she will also see a directory marked "Photos". Either one of them can look at the photos or add photos (if you give them access, that is). The users don't care whether they are on the client or the server. The directory behaves the same way for either user.
What do we need to get started?
Let's start with two *NIX machines. Use two Linux machines. Better yet, two Debian machines (just for now; that way we won't have ANY compatibility issues for our training session). NFS also works with Apple's OS X, but I won't cover that here. I am not teaching you to connect Windows machines via NFS, so no Windows, please.
You also need a physical network. Any kind will do. Please set it up and work out any physical bugs before we go further. There are other tutorials to help you set up a basic network. You should be able to ping each machine from the other. If you're using iptables as a local firewall, be sure to disable it. If you don't understand what I just said about iptables, you don't need to worry about it.
We want a solid, functioning network to start with, so that we can be sure that any problems come from NFS and not some other process.
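If you want a quick way to run those pings, a little loop does it. Penelope and Maeve are hypothetical names here; substitute your own hostnames or IP addresses:

```shell
# ping each machine once; -W 2 gives up after two seconds
# (Penelope and Maeve are example names -- use your own hosts or IPs)
for h in Penelope Maeve; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
        echo "$h is reachable"
    else
        echo "$h did NOT respond"
    fi
done
```

If any host comes back "did NOT respond," fix that before touching NFS.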
Time to start:
Before we begin, we need to pick our server. If you already have a server machine, it's a no-brainer. If you have not designated a particular machine to be your server, you can use either one. I recommend using the "better" of the two platforms as the server (i.e., faster and/or bigger storage).
Got a server? Good. Fire it up.
You need to install some software from the repository onto your server machine. SO! Pour a cup of coffee, open your terminal, and let's get started.
Code:
~/$ su
~/# apt-get install nfs-common          # you probably already have this, but run it anyway
~/# apt-get install nfs-kernel-server
Next, we'll modify the /etc/hosts file. This file is like a phonebook for your network. You don't NEED to do this, but if you don't, you'll need to set up your share based on IP addresses. I like names better. They make it easier to understand what you're looking at.
For editing config files, I use a terminal program called nano. You don't have to use it if you don't want to. If you use a GUI app, make sure you are opening it as root or you will not be allowed to save your changes.
Code:
~/#nano /etc/hosts
Code:
127.0.0.1 localhost
127.0.1.1 Aimee.Polaris Aimee
12.13.14.15 kiki.Polaris kiki
5.5.5.100 Penelope.Polaris Penelope
5.5.5.100 Lara.Polaris Lara
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Code:
~/$ su
~/# apt-get install nfs-common          # again, you probably have it already; it won't hurt to make sure
~/# ifconfig eth0
Code:
eth0 Link encap:Ethernet HWaddr 00:1a:da:15:60:e5
inet addr:5.5.5.105 Bcast:5.3.4.255 Mask:255.255.255.0
inet6 addr: fe80::21a:70ff:fe15:6078/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42369 errors:0 dropped:0 overruns:0 frame:0
TX packets:28219 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:49232950 (46.9 MiB) TX bytes:3357769 (3.2 MiB)
Interrupt:16 Base address:0xcc00
If all of this is new to you, consider the following model to help you understand hosts and domains:
user@host.domain. So, I could be polaris96@Aimee.Polaris. If you typed that address as a destination in the Polaris internal email system, I would get the message. See, you've been looking at this stuff for years. Now you know how to say it in Geek.
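You can ask your own machine for those pieces. What the second command returns depends on your /etc/hosts entries, so don't panic if it's empty on a fresh box:

```shell
# print the short host name (the "Aimee" part)
hostname

# print the fully qualified name (the "Aimee.Polaris" part),
# if your /etc/hosts maps it; otherwise fall back gracefully
hostname -f 2>/dev/null || echo "(no domain configured yet)"
```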
Back to the task at hand:
Code:
127.0.0.1 localhost
127.0.1.1 Penelope.Polaris Penelope
5.5.5.105 Maeve.Polaris Maeve
Make sure to save your changes on the server machine. In nano, you do this by pressing [ctrl-o]. Now that we have addressed the client, we can use the name “Maeve” to designate that Host (machine). Close out of /etc/hosts. If you're using nano, this is done by pressing [ctrl-x]
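A handy way to confirm the new entry actually took is getent, which looks names up the same way the system resolver does. Maeve is my example client name; substitute the one you just added:

```shell
# getent consults /etc/hosts just like the resolver, so a good
# entry prints its line back at you
getent hosts localhost          # the loopback entry should always resolve
# getent hosts Maeve            # substitute the client name you just added
```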
Next, we'll allow the client to call us. We do that by editing the /etc/hosts.allow file. So:
Code:
~/# nano /etc/hosts.allow
Code:
# /etc/hosts.allow: list of hosts that are allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: LOCAL @some_netgroup
# ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "portmap" for the
# daemon name. Remember that you can only use the keyword "ALL" and IP
# addresses (NOT host or domain names) for the portmapper, as well as for
# rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
# for further information.
#
To allow just our one client:
Code:
mountd: Maeve
Or, to allow any host in our domain:
Code:
mountd: .Polaris
DO NOT FORGET THIS STEP. IF YOU DO, YOUR CLIENT MACHINE WILL “SEE” THE SHARE, BUT YOU WILL NOT BE ABLE TO MOUNT IT.
As an interesting sidenote, I noticed when reviewing this article that /etc/hosts.allow actually says you can't use host or domain names with rpc.mountd (the full name for mountd). Don't worry about it. The directions I'm giving come directly from a long-functioning file service, so I think that comment is probably vestigial: once upon a time you couldn't do it, but these days you can. If it still worries you, look at the manpage for mountd; it actually recommends adding host or domain names when referencing mountd in hosts.allow. You get used to this kind of thing. It's one of the quirks of our home-built distro; they'll sort it out someday.
Next, we need to make a directory on the server that will be shared. You can put it anywhere you want to. I like to place mine in /home at the same level that users get a folder. We do it like this:
Code:
~/#cd /home
~/home#mkdir shared
Code:
~/home#chmod ugo+rwx shared
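It never hurts to confirm the permissions actually took. Here's the same check run against a scratch directory (the /tmp path is just for demonstration; on the server you'd point stat at /home/shared):

```shell
# make a scratch copy of the share and open it up
mkdir -p /tmp/nfs-demo/shared
chmod ugo+rwx /tmp/nfs-demo/shared

# %a prints the octal mode; ugo+rwx should read back as 777
stat -c '%a %n' /tmp/nfs-demo/shared
```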
Next, we need to tell the NFS server to export the directory to our client. We do this using the /etc/exports file:
Code:
~/home#cd /etc
~/etc#nano exports
Code:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/media/jukebox jenny(rw,sync,no_subtree_check) Aimee(rw,sync,no_subtree_check)
/home/Admin jenny(rw,sync,no_subtree_check) Aimee(rw,sync,no_root_squash,no_subtree_check)
First, you enter the path to the server directory that will be shared. Then, after a space (I use a tab) you enter a list of the hosts that are allowed to “see” it with a space between each client. For me, the clients are jenny and Aimee. You can put different export options inside parentheses following each hostname.
Make sure that you never leave spaces between the export options, and never put a space between the hostname and its opening parenthesis: 'Maeve (rw)' means Maeve gets the defaults and the rw options apply to EVERY host. Your syntax should follow this model:
Code:
hostname(option1,option2,option3)
In our case, /etc/exports might look like this:
Code:
/home/shared Maeve(rw,async,no_subtree_check)
'async' has to do with how data is written back. Asynchronous is faster, but synchronous is (very slightly) more robust. I use sync out of nothing but force of habit. Either one is OK.
'no_subtree_check' is the default behaviour, so you can leave it out, but if you do, you get annoying warnings on the command line when you export the share. That's why I put it in.
Notice in my /etc/exports file where it says 'no_root_squash'. NFS normally kills any root access from a client machine. THAT IS SMART. 'no_root_squash' will allow admin level access in the server share from the client.
This is dangerous. Don't do it unless you know what you're all about. I don't actually have it enabled on my network. I stuck it in because, once in a while, you need a sledgehammer. USE WITH CAUTION.
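One last exports gotcha: a stray space between a hostname and its parenthesized options ('Maeve (rw)') silently applies those options to every host, which is almost never what you meant. Here's a little grep to flag it. This is my own hypothetical sanity check, not part of the NFS tools, and it's shown against a sample file; point it at /etc/exports for real use:

```shell
# build a sample exports file with one good line and one bad one
cat > /tmp/exports.sample <<'EOF'
/home/shared Maeve(rw,async,no_subtree_check)
/home/oops   Maeve (rw,async,no_subtree_check)
EOF

# flag any line with whitespace right before "(" -- check those by hand
grep -nE '[[:space:]]\(' /tmp/exports.sample && echo "WARNING: stray space before ( on the lines above"
```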
After we save and exit from /etc/exports, we have one last thing to do:
Code:
~/etc#exportfs -ra
At this point, you can use the NFS share. Everything that follows fits into the category of "polishing the system".
Permanent mounting
Sometimes, you want a share to hook up as soon as you power on. Here's a good way to do that. First, I like to make a directory on the client with the same path as the one I'll be attaching from the server.
In our case, this will be /home/shared. I'm not going over how to make it, again. Look back to the server setup if you need precise directions.
After we have set up a client directory as our “normal mountpoint,” we need to add a line to /etc/fstab, like this (Make sure you are working on the CLIENT):
Code:
~/#nano /etc/fstab
Code:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/md1 / reiserfs defaults 0 1
/dev/sda9 /boot ext2 defaults 0 2
/dev/VG_0/LV_0 /home xfs defaults 0 2
/dev/sda7 none swap sw 0 0
#dvd drives
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/scd1 /media/cdrom1 udf,iso9660 user,noauto 0 0
#NFS drives
Penelope:/media/jukebox /media/jukebox nfs defaults 0 0
Penelope:/home/Admin /home/Admin nfs noauto 0 0
servername:[path on the server] [path on Client] nfs defaults 0 0
Obviously, the server and client paths come from your own network.
'nfs' is the fstype. DO NOT USE nfs4 if you're running lenny. exportfs and mount support it, but I'm pretty sure mountd DOES NOT. I will update this HowTo when that changes.
'defaults' should be fine for almost everybody. 'noauto' means do not automatically mount the fs at startup.
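Once your fstab grows a few entries, it's handy to pull out just the NFS lines to eyeball them. A quick grep on the fstype field does it (shown here against a sample fragment; run it on /etc/fstab for real):

```shell
# sample fstab fragment (hypothetical server and paths)
cat > /tmp/fstab.sample <<'EOF'
proc                    /proc        proc  defaults  0  0
Penelope:/home/Admin    /home/Admin  nfs   noauto    0  0
EOF

# match lines whose fstype field is nfs
grep -E '[[:space:]]nfs[[:space:]]' /tmp/fstab.sample
```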
You can put some descriptive data in fstab with a 'noauto' option to allow an “easier” manual mount. For instance, I can type:
Code:
~/#mount /home/Admin
instead of the whole thing:
Code:
~/#mount -t nfs Penelope:/home/Admin /home/Admin
And there it is. You now have the tools to set up basic file service.
Common Problems during initial setup / configuration
/etc/hosts.allow – BY FAR the most frequent issue is that you forgot to allow access to mountd. If you can "see" the share on the client but not mount it (even as root), this is your issue.
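A one-liner can tell you whether the mountd entry made it into the file. This is shown on a sample file with my hypothetical client name Maeve; point it at /etc/hosts.allow on your server:

```shell
# sample hosts.allow fragment (Maeve is a hypothetical client name)
cat > /tmp/hosts.allow.sample <<'EOF'
mountd: Maeve
EOF

# a line starting with "mountd:" is what lets the client actually mount
grep '^mountd:' /tmp/hosts.allow.sample && echo "mountd entry present" || echo "NO mountd entry -- there's your problem"
```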
Incorrect access levels. Check that the server directory is accessible to the client for mounting. You may need some chmod work to get it right. Also, remember that the client directory must have correctly set permissions, too. If you don't need security, make it all 'chmod ugo+rwx [filename]'.
Syntax errors in your files. All of the work in this HowTo came right off my network (some names and canonicals were changed to protect the guilty, lol), so it should be good as reference. READ THE MANPAGES. I like to do this:
Code:
~/$man exports > export.man
Where next?
NFS is not very secure. The skills you learned here will make it fairly easy to implement sshfs if you want high security.
If you like working on the server machine from the client, learn how to use ssh. It's very easy to get terminal access with ssh; just make sure to put the server's address into the client's /etc/hosts file. You know how, now. If you like remote terminals, I ask that you stay away from the telnet command. Telnet is an insecure nightmare. Stick with ssh.
You might also want to learn how to set up an internal mail server. Sendmail is another security nightmare, but I recommend it as a first step because it's easy. Set it up, play with it, then REMOVE IT and use postfix for any real emailing.
In Conclusion
Everything I have spoken about here is LOW TECH by modern standards. The good news is that it works great and makes a small footprint for big utility. Be careful to stay INSIDE the firewall or gateway when you're setting up this kind of system. I hope this will whet your appetite for using the terminal – it's like grandpa's Nash pickup: archaic, but still useful and built like a brick outhouse.
Good Reference
Manpages, Debian online documentation, How-Tos on this forum, and, of course, other users. Please try to find the answer to your problem in the available documents before starting a thread. I also have no problem answering questions about this material posted on this thread, but I travel a lot on business. Bear with me if I don't get to you right away.