<br><br><div class="gmail_quote">On Sat, Sep 25, 2010 at 12:08 PM, Raymond Wagner <span dir="ltr"><<a href="mailto:raymond@wagnerrp.com">raymond@wagnerrp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="im"> On 9/24/2010 18:56, Tyler T wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
I run about six to ten FE's (depending on how one defines these). The admin<br>
load is a drag, so a central model has some advantages.<br>
</blockquote>
If I had multiple frontend clients and needed to<br>
upgrade I would upgrade the server and only one copy of the frontend netboot<br>
image, then copy that frontend image for all the other frontends.<br>
</blockquote>
I'd softlink all the various FE rootfs-es to accomplish the same thing<br>
while saving 6-10X disk space and improving performance (6-10X more<br>
efficient use of disk cache).<br>
</blockquote>
<br></div>
If you're using NFS as your remote rootfs, that would cause problems. With NFS, symlinks are resolved client-side, meaning you would need the main image mounted in the proper location on each client for the links to resolve. You could hardlink the files if they were on the same physical disk, but then you might run into problems updating the files. Most systems with that setup either alter the environment so those folders can remain read-only, or they use something like UnionFS or AUFS to provide a writable overlay on top of the shared read-only root.<br>
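(To spell out the hardlink update problem: it cuts both ways depending on how the file is rewritten. A quick demonstration in a scratch directory; the file names are made up for illustration:)

```shell
set -e
d=$(mktemp -d)
echo "version=1" > "$d/shared.conf"
ln "$d/shared.conf" "$d/linked.conf"        # hardlink: one inode, two names

echo "version=2" > "$d/shared.conf"         # in-place write through one name...
after_inplace=$(cat "$d/linked.conf")       # ...is visible through the other
echo "after in-place write: $after_inplace"

echo "version=3" > "$d/shared.conf.new"
mv "$d/shared.conf.new" "$d/shared.conf"    # atomic replace, which is how most
after_replace=$(cat "$d/linked.conf")       # package managers install updates:
echo "after atomic replace: $after_replace" # the link is silently broken
```

So an in-place edit leaks into every root sharing the link, while a package upgrade quietly stops propagating at all. Either way you can't trust the roots to stay in sync.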
<br>
I use something similar for my systems. I maintain a single x64 Gentoo disk image that I keep up to date. Whenever I want to update my systems, I clone copies of that image and share the copies over iSCSI. When a system boots for the first time on a fresh image, it logs into NFS, pulls an overlay of various config files plus my themecache, and then reboots into its own differentiated image. Since they're running on copy-on-write cloned images, the only storage each one takes is whatever its overlay consumes. That also means it's fairly cheap to keep around multiple old snapshots, so if an upgrade fails, it only takes me a few minutes to switch back to the old image.<div>
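(For anyone wanting to try this, one way to get copy-on-write clones like these is LVM snapshots; the volume group and LV names below are hypothetical, the commands need root, and the iSCSI export step depends on which target daemon you run:)

```shell
# Assumes an existing volume group "vg0" holding the golden image in a
# logical volume "mythfe-golden". Each clone is a copy-on-write snapshot,
# so it only consumes space for blocks the client actually writes.
lvcreate --snapshot --name mythfe-client1 --size 4G /dev/vg0/mythfe-golden
lvcreate --snapshot --name mythfe-client2 --size 4G /dev/vg0/mythfe-golden
# Export each /dev/vg0/mythfe-clientN over iSCSI as that client's private
# root disk. Rolling back a failed upgrade is just pointing the client at
# an older snapshot volume instead.
```

ZFS or qcow2-backed targets would work the same way; the key property is only that the clone is copy-on-write.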
<div></div><div class="h5"><br></div></div></blockquote><div><br><br>Surely in a netboot+NFS scenario you don't need a full filesystem for each client? I've not set one up in a while, but that certainly wasn't my recollection.<br>
</div></div>