Millions of files in /var/www & inode / out of space issue.

65 messages

Millions of files in /var/www & inode / out of space issue.

Keith-125
Q. How do I make the default web folder /var/www/ capable of holding
millions of files (say 50GB worth of small 2kb-12kb files) so that I
won't get inode issues?

The problem is that my server has the default disk layout, as I didn't
expect to have millions of files (I thought they would be stored in the
DB). When I started the app it generated all the files and I got
out-of-space warnings. I tried moving the folder containing the files
and making a symlink back, but that didn't work because nginx is in a
chroot.

The two options I think I have are:

1. Reinstall the OS and make a dedicated /var/www partition, but I have
no idea how to increase the inode limit.
2. Make a new partition, format it, copy the files from the original
partition, swap them around and restart nginx. (Do I run newfs with
some option to make more inodes?)

Thanks
Keith.
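For a sense of scale, a rough back-of-the-envelope check (assuming an average
file size of ~7KB, the midpoint of the stated 2kb-12kb range) shows why the
default layout runs out of inodes before it runs out of blocks:

```shell
# 50GB of ~7KB files vs. the inodes a default FFS layout provides.
total=$((50 * 1024 * 1024 * 1024))    # 50GB of data
avg=7168                              # assumed average file size, ~7KB
files_needed=$((total / avg))
density=8192                          # newfs default: one inode per 8192 bytes of data
inodes_available=$((total / density))
echo "need ~$files_needed inodes, default layout provides $inodes_available"
```

At these numbers the files outnumber the available inodes by roughly a
million, so the filesystem reports "out of space" while plenty of data
blocks remain free.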


Re: Millions of files in /var/www & inode / out of space issue.

Zé Loff-2
On Tue, Feb 19, 2013 at 12:35:31AM +0000, Keith wrote:

> Q. How do I make the default web folder /var/www/ capable of holding
> millions of files (say 50GB worth of small 2kb-12kb files) so that I
> won't get inode issues ?
> [...]

man newfs

(btw, you're looking for the -i option)
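As a sketch of what -i controls: its argument is the number of bytes of data
per inode, so halving it doubles the inode count. For a hypothetical 50GB
partition:

```shell
# Inode counts at various -i densities on a 50GB filesystem.
size=$((50 * 1024 * 1024 * 1024))
for i in 16384 8192 4096 2048; do
    printf 'newfs -i %-5s -> %s inodes\n' "$i" "$((size / i))"
done
```

Something like `newfs -i 4096 /dev/rsd0k` (partition name hypothetical) would
roughly double the default inode count; check newfs(8) before picking a value.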
--


Re: Millions of files in /var/www & inode / out of space issue.

Ted Unangst-6
In reply to this post by Keith-125
On Tue, Feb 19, 2013 at 00:35, Keith wrote:
> Q. How do I make the default web folder /var/www/ capable of holding
> millions of files (say 50GB worth of small 2kb-12kb files) so that I
> won't get inode issues ?

Yes, newfs -i with a smaller number.  Note that the number of inodes
is highly influential on the amount of memory and time required to run
fsck.  Typically the request is the opposite, reducing the number of
inodes so that large partitions can fsck in a reasonable amount of
time.  You don't want to push it too far.


Re: Millions of files in /var/www & inode / out of space issue.

Janne Johansson-3
In reply to this post by Keith-125
2013/2/19 Keith <[hidden email]>:
> Q. How do I make the default web folder /var/www/ capable of holding
> millions of files (say 50GB worth of small 2kb-12kb files) so that I won't
> get inode issues ?

Since you probably aren't going to have 50G/2k files in a single
directory, you'd be wise to make several filesystems for the
directories you have there, especially for the fsck reasons mentioned
by others in this thread.
Fsck'ing ten 5G filesystems with lots of inodes will be far more fun
than one 50G in size. And chances are good that not all ten will have
issues even on an unclean shutdown, so you would be able to skip over
a few in such an event.

--
May the most significant bit of your life be positive.


Re: Millions of files in /var/www & inode / out of space issue.

Otto Moerbeek
On Tue, Feb 19, 2013 at 08:42:01AM +0100, Janne Johansson wrote:

> 2013/2/19 Keith <[hidden email]>:
> > Q. How do I make the default web folder /var/www/ capable of holding
> > millions of files (say 50GB worth of small 2kb-12kb files) so that I won't
> > get inode issues ?
>
> Since you probably aren't going to have 50G/2k number of files in a
> single dir, then you'd be wise to make several filesystems for the
> directories you have there, especially for the fsck reasons mentioned
> by others in this thread.
> Fsck'ing 10 5G fs:es with lots of inodes will be far more fun than one
> of 50G in size. And chances are quite big that not all of those 10

A 50G filesystem created with defaults has more than 6 million inodes
and on a system without a decent amount of memory checks pretty quickly.

If you run ffs2 with softdep, an optimization kicks in that makes
the number of *used* inodes the driving factor, instead of the total
number of inodes on a fs.

> will have issues even on unclean shutdowns, so you would be able to
> skip over a few in such an event.

Likely all will be unclean and need to be checked.

Anyway, make sure the number of files per directory does not grow
without bound. Use a maximum of a few tens of thousands of files per
directory as a rule of thumb.

        -Otto
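One common way to honor that rule of thumb is to hash file names into a fixed
set of buckets so no single directory grows without bound. A minimal sketch
(the bucket scheme and paths are illustrative, not from the thread; on
OpenBSD, `md5 -q` would take the place of md5sum):

```shell
# Spread files across 256 subdirectories keyed by the first two hex
# digits of a hash of the file name, capping per-directory growth.
name="some-release.nzb.gz"                        # hypothetical file name
bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
dest="/var/www/data/$bucket/$name"                # hypothetical base path
echo "$dest"
```

With 256 buckets, even 7.5 million files average under 30,000 per directory,
within the couple-of-tens-of-thousands guideline above.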


Re: Millions of files in /var/www & inode / out of space issue.

Otto Moerbeek
On Tue, Feb 19, 2013 at 09:09:49AM +0100, Otto Moerbeek wrote:

> A 50G filesystem created with defaults has more than 6 million inodes
> and on a system without a decent amount of memory checks pretty quick.

ehh, *with*


Re: Millions of files in /var/www & inode / out of space issue.

MJ
In reply to this post by Keith-125
Which app are you running that is generating millions of tiny files in a single directory?  Regardless, in this case OpenBSD is not the right tool for the job. You need either FreeBSD or a Solaris variant to handle this problem because you need ZFS.


What limits does ZFS have?
---------------------------------------
The limitations of ZFS are designed to be so large that they will never be encountered in any practical operation. ZFS can store 16 Exabytes in each storage pool, file system, file, or file attribute. ZFS can store billions of names: files or directories in a directory, file systems in a file system, or snapshots of a file system. ZFS can store trillions of items: files in a file system, file systems, volumes, or snapshots in a pool.


I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were then that would pretty much eliminate the need for my one and only FreeBSD box ;-)



On Feb 19, 2013, at 2:35 AM, Keith <[hidden email]> wrote:

> Q. How do I make the default web folder /var/www/ capable of holding millions of files (say 50GB worth of small 2kb-12kb files) so that I won't get inode issues ?
> [...]


Re: Millions of files in /var/www & inode / out of space issue.

Paolo Aglialoro
Or you could just use ZFS, XFS, whateverFS in a separate unix/linux box and
go NFS on it, simulating a true external storage appliance :)


On Tue, Feb 19, 2013 at 11:47 AM, MJ <[hidden email]> wrote:

> Which app are you running that is generating millions of tiny files in a
> single directory?  Regardless, in this case OpenBSD is not the right tool
> for the job. You need either FreeBSD or a Solaris variant to handle this
> problem because you need ZFS.
> [...]


Re: Millions of files in /var/www & inode / out of space issue.

Rafal Bisingier-2
Hi,

Or you could fix your application so it doesn't do stupid things (like
generating millions of files in a single directory) in the first
place... ;-)


On 2013-02-19 at 12:10 CET
Paolo Aglialoro <[hidden email]> wrote:

>Or you could just use ZFS, XFS, whateverFS in a separate unix/linux box and
>go NFS on it, simulating a true external storage appliance :)
>[...]




--
Greetings
Rafal Bisingier


Re: Millions of files in /var/www & inode / out of space issue.

Keith-125
In reply to this post by MJ
On 19/02/2013 10:47, MJ wrote:

> Which app are you running that is generating millions of tiny files in a single directory?  Regardless, in this case OpenBSD is not the right tool for the job. You need either FreeBSD or a Solaris variant to handle this problem because you need ZFS.
> [...]
It's a usenet indexing application called Newznab. It consists of two
parts: some php scripts that do the indexing, which generate the
pesky "nzb.gz" files, and then there's the web front end.

This is running on my home server / firewall, and I think it's almost
working; I just need to get the partitions sorted out and it should be
fine. I don't want to switch to FreeBSD for ZFS or introduce another
machine for an NFS volume.

To be honest I didn't think indexing usenet would be such a big deal,
but it's turning out to be quite a resource hog.

Keith


Re: Millions of files in /var/www & inode / out of space issue.

Wayne Oliver
In reply to this post by Rafal Bisingier-2
On 19 Feb 2013, at 1:40 PM, Rafal Bisingier wrote:

> Hi,
>
> Or you could fix your application, to not do stupid things (like
> generating millions of files in a single directory) in the first
> place... ;-)

+1



Re: Millions of files in /var/www & inode / out of space issue.

Nick Holland
In reply to this post by MJ
On 02/19/13 05:47, MJ wrote:

> Which app are you running that is generating millions of tiny files
> in a single directory?  Regardless, in this case OpenBSD is not the
> right tool for the job. You need either FreeBSD or a Solaris variant
> to handle this problem because you need ZFS.
> [...]

The usual stated reason is "license": it is completely unacceptable to
OpenBSD.

The other reason, usually not given, which I suspect would become obvious
were the license not an instant non-starter, is the nature of ZFS.  As it
is a major memory hog, it works well only on well-loaded 64-bit platforms.
Since most of our 64-bit platforms are older, and Alpha and SGI machines
with many gigabytes of memory are rare, you are probably talking amd64
and maybe some sparc64 systems.

Also...see the number of "ZFS Tuning Guides" out there.  How...1980s.
The OP here has a "special case" use, but virtually all ZFS uses involve
knob twisting and experimentation, which is about as anti-OpenBSD as you
can get.  Granted, there are a lot of people who love knob-twisting, but
that's not what OpenBSD is about.

I use ZFS, and have a few ZFS systems in production, and what it does is
pretty amazing, but mostly in the sense of the gigabytes of RAM it
consumes for basic operation (and unexplained file system wedging).
I've usually seen it used as a way to avoid good system design.  Yes,
huge file systems can be useful, but usually in papering over basic
design flaws.

Nick.


Re: Millions of files in /var/www & inode / out of space issue.

Andres Perera-4
On Tue, Feb 19, 2013 at 8:11 AM, Nick Holland
<[hidden email]> wrote:

> I use ZFS, and have a few ZFS systems in production, and what it does is
> pretty amazing, but mostly in the sense of the gigabytes of RAM it
> consumes for basic operation (and unexplained file system wedging).
> I've usually seen it used as a way to avoid good system design.  Yes,
> huge file systems can be useful, but usually in papering over basic
> design flaws.

funnily enough, that "avoid[ing] good system design" is exactly what
makes it useful for desktop over server. i don't want to spend any
time figuring out how many gigs for /usr/{src,xenocara}. i also don't
want to partition /usr/ports only to find out later on that there's an
"object" or "tmp" sub-directory that i want on a different fs but i
can't because i've hit the 16-partition limit.

if i ever install an application for experimental reasons, because
it's not a production machine, i don't want to rethink everything to
fit inside the disklabel constraints either. "good system design"
doesn't apply because it's a case where, gasp, the admin couldn't
possibly plan ahead.


Re: Millions of files in /var/www & inode / out of space issue.

Eric S Pulley
In reply to this post by Nick Holland
> On 02/19/13 05:47, MJ wrote:
> [...]
>
> I use ZFS, and have a few ZFS systems in production, and what it does is
> pretty amazing, but mostly in the sense of the gigabytes of RAM it
> consumes for basic operation (and unexplained file system wedging).
> I've usually seen it used as a way to avoid good system design.  Yes,
> huge file systems can be useful, but usually in papering over basic
> design flaws.
>
> Nick.

I feel anyone expecting to run any of the recently hatched filesystems on
10+ year old hardware falls into the design-flaw category you mention. As
for needing to turn knobs to get it to work properly, this is not necessary
if you use a modern 64-bit box. Most of the tuning guides are written for
the guys trying to use it on their old hardware, or trying to reach
"performance" numbers for whatever, usually misguided, reason. On a modern
amd64 box it pretty much just works.

As for a port to OpenBSD, I'd love it, or a port of LVM, but the biggest
hurdle IMO is the same one that plagues so many other good potential
OpenBSD ports: getting someone competent and dedicated enough to do the
work.

I'm neither of those two things when it comes to porting, so I can only
blame myself that I'm using FreeBSD on my file server and desktop instead
of OpenBSD as I'd really like. However, I still have deep reservations
about trusting ZFS long term since Oracle closed it off to the community
again. I don't feel FreeBSD will be able to truly maintain the port over
time. I hope I'm wrong, but we will see. So it may be for the best that
OpenBSD doesn't waste too much time on it.

--
ESP


Re: Millions of files in /var/www & inode / out of space issue.

Matthias Appel
Am 19.02.2013 18:01, schrieb Eric S Pulley:

[snip]

> I feel anyone expecting to run any of the recently hatched filesystem on
> 10+ year old hardware falls into the design flaw category you mention. As
> for needing to turn nobs to get it to work properly this is not necessary
> if you use a modern 64bit box. Most of the tuning guides are written for
> the guys trying to use it on their old hardware. Or trying to reach
> "performance" numbers for whatever, usually misguided, reason. On a modern
> amd64 box it pretty much just works.

Maybe I don't see the big picture, but I assume that if ZFS were opt-in
and not the default FS, memory consumption would only hit those who
*really* run ZFS on their boxes.


> As for a port to OpenBSD I'd love it, or port of LVM, but the biggest
> hurdle IMO is the same one that plagues so many other good potential
> OpenBSD ports. Getting someone competent and dedicated enough to do the
> work.
I have to confess /me is neither competent nor dedicated, but I assume
ZFS support for OpenBSD would have to be rewritten from scratch.

And while we're talking of ZFS, why not consider
ext3/4, reiser, xfs, jfs, ntfs, whatever-fs being ported to OpenBSD?

Don't get me wrong, I would *love* to see ZFS in OpenBSD... but done in
an OpenBSD-worthy way!
> I'm neither of those two things when it comes to porting, so I can only
> blame myself that I'm using FreeBSD on my file server and desktop instead
> of Open as I'd really like. However, I still have deep reservations about
> trusting ZFS long term since Oracle closed it off to the community again.
> I don't feel FreeBSD will be able to truly maintain the port over time. I
> hope I'm wrong but we will see. So it may be for the best that Open
> doesn't waste too much time on it.
>


Yup, I think that (besides the CDDL part of ZFS) is the major
turn-off in any kind of production environment.

At the moment I don't know how FreeBSD handles the ZFS development, but
maintaining a not-really-fully-ZFS alongside Oracle's is a no-go, IMHO.
Maybe forking it and calling it whatever-name-you-want-FS would be
better (but that would violate the CDDL, as far as I can see).

If you want to have ZFS, you will have to bite the bullet, throw some
$$$ into Oracle's hive, and get a fully licensed ZFS along with Solaris.

If that's not an option, move along and choose something different.

So, long story short, I do not see any option to use ZFS on a free system.


Re: Millions of files in /var/www & inode / out of space issue.

Jan Stary
In reply to this post by Ted Unangst-6
> On Tue, Feb 19, 2013 at 00:35, Keith wrote:
> > Q. How do I make the default web folder /var/www/ capable of holding
> > millions of files (say 50GB worth of small 2kb-12kb files) so that I
> > won't get inode issues ?

newfs defaults to -f 2k and -b 16k, which is fine if you know in
advance you will hold 2k-12k files. As for inodes, the default for -i
is to create an inode for every 4 frags, that is, every 8192 bytes. So
on a 50G filesystem this should give you over 6.1 million inodes. What
does df -hi say?

But first of all, fix your crappy app to not do that.
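That arithmetic checks out: with the default 2k frag size and one inode per
four frags, the density works out to one inode per 8192 bytes of data:

```shell
# Default FFS geometry: -f 2048, one inode per 4 frags = 8192 bytes.
frag=2048
density=$((4 * frag))                         # bytes of data per inode
echo $((50 * 1024 * 1024 * 1024 / density))   # inodes on a 50G filesystem -> 6553600
```

which is about 6.5 million, comfortably over 6.1 million; df -hi shows the
actual iused/ifree counts per mounted filesystem.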


Re: Millions of files in /var/www & inode / out of space issue.

Jiri B-2
In reply to this post by Matthias Appel
On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:
> And by talking of ZFS, why not consider
> ext3/4,reiser,xfs,jfs,ntfs,whatever-fs to be ported to OpenBSD?

Where are the diffs? For example, a real improvement would be FAT/NTFS
speed on OpenBSD, as it is much, much slower than on Linux.

jirib


Re: Millions of files in /var/www & inode / out of space issue.

Jérémie Courrèges-Anglas-2
Jiri B <[hidden email]> writes:

> On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:
>> And by talking of ZFS, why not consider
>> ext3/4,reiser,xfs,jfs,ntfs,whatever-fs to be ported to OpenBSD?
>
> Where are the diffs? For example real improvement would be FAT/NTFS
> speed on OpenBSD, as it is much much slower than on Linux.

Even with ''mount -o sync ...'' on the Linux side?

--
Jérémie Courrèges-Anglas
GPG Key fingerprint: 61DB D9A0 00A4 67CF 2A90  8961 6191 8FBF 06A1 1494


Re: Millions of files in /var/www & inode / out of space issue.

Luca Ferrari
In reply to this post by Jan Stary
I suspect the application described here should not be using a
filesystem; a database would probably serve the purpose better. However,
assuming it is not possible to fix/change the application's behavior, I
guess using several filesystems/mount points will help. While ZFS (and
many others) would be good at handling this particular case, I guess it
is not worth switching from OpenBSD to something else just to have a
"stupid" application running.

Luca


Re: Millions of files in /var/www & inode / out of space issue.

Matthias Appel
In reply to this post by Jiri B-2
Am 20.02.2013 09:21, schrieb Jiri B:
> On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:
>> And by talking of ZFS, why not consider
>> ext3/4,reiser,xfs,jfs,ntfs,whatever-fs to be ported to OpenBSD?
> Where are the diffs? For example real improvement would be FAT/NTFS
> speed on OpenBSD, as it is much much slower than on Linux.

There are two main differences:

* ZFS was open source (the FSF would say free) until Oracle acquired Sun
(and now nobody knows how it's licensed... but licensing does not matter
as long as there is no code release from Oracle).
AFAIK no other FS has undergone such a dramatic change in both licensing
and information policy.


* IMHO ZFS has to be reverse-engineered, just like NTFS. There has to be
compatibility between Oracle's ZFS and the free versions of it; if there
is none, we are not talking about ZFS on *NIX, this would be ZFS-ish.
And reversing the whole thing is a lot of work. Reversing NTFS pays off
because there are bazillions of NTFS-formatted HDDs out there, so the
potential user base is quite big. But how big is the user base for ZFS?


What I did not know is that Oracle dropped the trademark for ZFS in
late 2011. Why would they do that?




Regards,

Matthias
