Millions of files in /var/www & inode / out of space issue.

Re: Millions of files in /var/www & inode / out of space issue.

Keith-125
On 20/02/2013 07:36, Jan Stary wrote:

>> On Tue, Feb 19, 2013 at 00:35, Keith wrote:
>>> Q. How do I make the default web folder /var/www/ capable of holding
>>> millions of files (say 50GB worth of small 2kb-12kb files) so that I
>>> won't get inode issues ?
> newfs defaults to -f 2k and -b 16k which is fine if you
> know in advance you will hold 2k-12k files. As for inodes,
> the default of -i is to create an inode for every 4 frags,
> that is 8192 bytes. So on a 50G filesystem this should
> give you over 6.1 million inodes. What does df -hi say?
>
> But first of all, fix your crappy app to not do that.
>
Hi, thanks for the info. Yesterday I did a backup, format, restore of
the /var/www partition, although to be honest I wasn't really sure what I
was doing with regards to the newfs command. I tried running "newfs
-i" with different values and settled on "newfs -i 1 /var/www", as it
seemed at the time to make the most inodes, and that was just based on
how much output was generated while newfs was running.
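
For reference, newfs's -i flag takes a density in bytes of data space per
inode, not an inode count. A sketch of a more deliberate rebuild (device
names taken from the df output below; this wipes the filesystem, so the
dump comes first, and the numbers are only an example):

dumpfs /dev/rsd0l | head                 # inspect bsize, fsize and ipg of the current filesystem
umount /var/www
dump -0af /scratch/www.dump /dev/rsd0l   # back it up first: newfs destroys everything
newfs -i 4096 /dev/rsd0l                 # one inode per 4 kB of space, roughly 1.2M inodes on 4.7G
mount /dev/sd0l /var/www
cd /var/www && restore -rf /scratch/www.dump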

# df -hi
Filesystem     Size    Used   Avail Capacity iused   ifree  %iused Mounted on
/dev/sd0a     1005M    135M    819M    14%    3272  152630     2% /
/dev/sd0k     1005M    2.0K    955M     0%       1  155901     0% /home
/dev/sd0n     21.0G    2.0K   20.0G     0%       1 2832253     0% /scratch
/dev/sd0d      3.9G   14.0K    3.7G     0%      21  545641     0% /tmp
/dev/sd0f      2.0G    461M    1.4G    24%   13537  272285     5% /usr
/dev/sd0g     1005M    193M    762M    20%    9547  146355     6% /usr/X11R6
/dev/sd0h      6.8G    2.0G    4.5G    31%   41346  868092     5% /usr/local
/dev/sd0j      2.0G    2.0K    1.9G     0%       1  285821     0% /usr/obj
/dev/sd0i      1.9G    2.0K    1.8G     0%       1  285821     0% /usr/src
/dev/sd0e      6.3G   37.2M    6.0G     1%     740  856730     0% /var
/dev/sd0m     1001M    6.5M    944M     1%      53  155849     0% /var/log
/dev/sd0l      4.7G    1.2G    3.3G    26%  449170 2206316    17% /var/www
/dev/sd1a      1.8T    1.6T    147G    92%  720111 60427023     1% /mnt/Media2TB
/dev/sd2a     55.0G   11.3G   41.0G    22%     208 7353262     0% /var/mysql

The above "df -hi" output was done today after the wiped the app and
started it again from scratch. It had been running for about 12 hours
and there was about 450,000 files. How many files do you think I'll be
able to store with this number of inodes ? I'd never used dump or
restore before and was supprised as how easy it was to backup, format
and restore the files so that will come in handy if I need to move this
partition later to a larger disk. I'll think I will just have to keep an
eye on my inodes until I get a feel for how many I need.
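
One way to keep that eye on it automatically (a rough sketch; the 80%
threshold is arbitrary and the awk field number assumes the df output
layout shown above):

#!/bin/sh
# mail root when /var/www passes 80% inode usage
pct=$(df -i /var/www | awk 'NR == 2 { sub("%", "", $8); print $8 }')
if [ "$pct" -gt 80 ]; then
        echo "/var/www inode usage is at ${pct}%" | mail -s "inode warning" root
fi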

I don't know how to fix the app, or why the developers decided to make so
many files on disk, so I asked in their chat room...

<Keef>: I don't know how many files I had at the time I was getting
issues, probably about 1/2 million, but I have since wiped the partition
and reformatted with more inodes. I ended up asking for help with my
inode problem on an OpenBSD mailing list and they were asking why the
newznab app wrote the files to disk in the first place. So I thought I
should ask here...
<ll>: do you want 20GB of files in your db?
<forkless>: i know i dont
<ll>: nor i
<ll>: and thats the reason realy
<Safra>: lol
<Safra>: Then you will get "why is my nzbfiles table corrupt"?
<Safra>: =p
<Safra>: I cant download anything?
<Safra>: lol
<Safra>: "fix it for me NOW"
<Safra>: =p
<forkless>: then the next step will be why arent the cover in the db either
<forkless>: and before you know it your db is 100GB
<Keef>: So how many files do the typical newznab users end up having and
how much space should I partition up for?
<forkless>: i've only got a 120k releases or so but i dont nearly index all
<forkless>: i guess depends on your needs

I guess they have a good point as they have to support the app.

Cheers
Keith


Re: Millions of files in /var/www & inode / out of space issue.

Jan Stary
On Feb 20 20:58:49, [hidden email] wrote:

> On 20/02/2013 07:36, Jan Stary wrote:
> >>On Tue, Feb 19, 2013 at 00:35, Keith wrote:
> >>>Q. How do I make the default web folder /var/www/ capable of holding
> >>>millions of files (say 50GB worth of small 2kb-12kb files) so that I
> >>>won't get inode issues ?
> >newfs defaults to -f 2k and -b 16k which is fine if you
> >know in advance you will hold 2k-12k files. As for inodes,
> >the default of -i is to create an inode for every 4 frags,
> >that is 8192 bytes. So on a 50G filesystem this should
> >give you over 6.1 million inodes. What does df -hi say?
> >
> >But first of all, fix your crappy app to not do that.
> >
> Hi, thanks for the info. Yesterday I did a backup, format, restore
> of the /var/www partition

You said before you need to store 50G worth of files.
So why did you create a 4.7G partition for it?

> although to be honest I wasn't really sure
> what I was doing with regards to the newfs command.

Apparently:

> I tried running
> "newfs -i" with different values and settled on "newfs -i 1 /var/www",
> as it seemed at the time to make the most inodes, and that was just
> based on how much output was generated while newfs was running.

This is insane. Have you actually _read_ the manpage?
I am a bit surprised newfs even lets you do that.
You are creating one inode for every one byte.

> # df -hi
> Filesystem     Size    Used   Avail Capacity iused   ifree  %iused Mounted on
> /dev/sd0a     1005M    135M    819M    14%    3272  152630     2% /
> /dev/sd0k     1005M    2.0K    955M     0%       1  155901     0% /home
> /dev/sd0n     21.0G    2.0K   20.0G     0%       1 2832253     0% /scratch
> /dev/sd0d      3.9G   14.0K    3.7G     0%      21  545641     0% /tmp
> /dev/sd0f      2.0G    461M    1.4G    24%   13537  272285     5% /usr
> /dev/sd0g     1005M    193M    762M    20%    9547  146355     6% /usr/X11R6
> /dev/sd0h      6.8G    2.0G    4.5G    31%   41346  868092     5% /usr/local
> /dev/sd0j      2.0G    2.0K    1.9G     0%       1  285821     0% /usr/obj
> /dev/sd0i      1.9G    2.0K    1.8G     0%       1  285821     0% /usr/src
> /dev/sd0e      6.3G   37.2M    6.0G     1%     740  856730     0% /var
> /dev/sd0m     1001M    6.5M    944M     1%      53  155849     0% /var/log
> /dev/sd0l      4.7G    1.2G    3.3G    26%  449170 2206316    17% /var/www
> /dev/sd1a      1.8T    1.6T    147G    92%  720111 60427023     1% /mnt/Media2TB
> /dev/sd2a     55.0G   11.3G   41.0G    22%     208 7353262     0% /var/mysql
>
> The above "df -hi" output was taken today, after I wiped the app and
> started it again from scratch. It had been running for about 12
> hours and there were about 450,000 files. How many files do you think
> I'll be able to store with this number of inodes?

You have 449170 used inodes and 2206316 free inodes now
(hint: 'iused', 'ifree'). Read 'man df' in its entirety.
(Do you actually know what an inode is?)
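
To spell that out (an illustrative one-liner, not from the original mail;
the field numbers match the df -hi layout quoted above):

$ df -hi /var/www | awk 'NR == 2 { print "inodes used:", $6, " inodes free:", $7 }'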

> I don't know how to fix the app or why the developers decided to
> make so many files on disk so I asked in their chat room........
>
> <Keef>: I don't know how many files I had at the time I was getting
> issues, probably about 1/2 million

That's why I asked exactly what errors you were getting.
As said before, if you created a 50G partition (but now
I don't even know if you did) with the default newfs,
that would give you over 6M inodes, so it would have
been something else. Was the partition actually big enough?
Was the demented app trying to create all those files
in one directory?
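
One quick way to answer that last question (just a sketch; it walks the
whole tree, so expect it to be slow with half a million files on disk):

find /var/www -type d | while read -r d; do
        echo "$(ls -f "$d" | wc -l) $d"
done | sort -rn | head          # directories with the most entries first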


Re: Millions of files in /var/www & inode / out of space issue.

Bryan Brake
On Wed, Feb 20, 2013 at 4:07 PM, Jan Stary <[hidden email]> wrote:

>
>
> > I tried running
> > "newfs -i" with different values and settled on "newfs -i 1 /var/www",
> > as it seemed at the time to make the most inodes, and that was just
> > based on how much output was generated while newfs was running.
>
> This is insane. Have you actually _read_ the manpage?
> I am a bit surprised newfs even lets you do that.
> You are creating one inode for every one byte.
>

Am I right in thinking that this will take forever to fsck? And you
surely wouldn't want to do the fsck at boot, as it will need a large
amount of RAM?


Re: Millions of files in /var/www & inode / out of space issue.

Stuart Henderson
In reply to this post by Keith-125
On 2013-02-20, Keith <[hidden email]> wrote:
>>
> Hi, thanks for the info. Yesterday I did a backup, format, restore of
> the /var/www partition, although to be honest I wasn't really sure what I
> was doing with regards to the newfs command. I tried running "newfs
> -i" with different values and settled on "newfs -i 1 /var/www", as it
> seemed at the time to make the most inodes, and that was just based on
> how much output was generated while newfs was running.

Those aren't inodes, they're superblock backups; the clue is in the text
printed by newfs.

> # df -hi
> Filesystem     Size    Used   Avail Capacity iused   ifree  %iused Mounted on
> /dev/sd0l      4.7G    1.2G    3.3G    26%  449170 2206316    17% /var/www
>
> The above "df -hi" output was taken today, after I wiped the app and
> started it again from scratch. It had been running for about 12 hours
> and there were about 450,000 files. How many files do you think I'll be
> able to store with this number of inodes?

I would think you'd be able to store 2206316 files purely based on the
number of inodes, but this would be limited by the minimum file size.

$ df -hi /tmp; touch /tmp/bleh; df -hi /tmp | tail -1
Filesystem     Size    Used   Avail Capacity iused   ifree  %iused  Mounted on
mfs:21643      991M    110M    831M    12%   16175  253967     6%   /tmp
mfs:21643      991M    110M    831M    12%   16176  253966     6%   /tmp
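
As a rough sanity check on which limit bites first, shell arithmetic with
the ~3.3G free shown above and the 2-12 kB file sizes from the start of
the thread (a back-of-the-envelope sketch, not a measurement):

$ echo $((3300 * 1024 / 2))     # 2 kB files:  about 1.69 million fit in the free space
$ echo $((3300 * 1024 / 12))    # 12 kB files: about 281 thousand fit in the free space

So for 12 kB files, space on the 4.7G partition runs out well before the
~2.2 million free inodes do.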

><ll>: do you want 20GB of files in your db?
><forkless>: i know i dont
..
><Safra>: Then you will get "why is my nzbfiles table corrupt"?

There is absolutely no reason for a database to corrupt itself just by
having 20GB of data in it.

It's at least as likely that a filesystem would corrupt itself,
and databases often have better recovery mechanisms than many types
of filesystem.

Please at least tell me that these files are split across a number
of directories and not all lumped together in one....


Re: Millions of files in /var/www & inode / out of space issue.

Stuart Henderson
In reply to this post by Matthias Appel
On 2013-02-20, Matthias Appel <[hidden email]> wrote:
> *ZFS was open source (FSF would say free) until Oracle acquired Sun

The source was available, but it relies on Sun/Oracle patents.
The CDDL license it was provided under allows use of those patents,
but only subject to certain conditions, and there are indemnification
clauses that some projects cannot agree to.

> *IMHO ZFS has to be reverse-engineered, just like NTFS. There has to be
> compatibility between Oracle's ZFS and the free versions of it.

Then you don't have a license to use the patents.


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Jeremie Le Hen-2
In reply to this post by Matthias Appel
On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:

>
> Yupp, I think that (besides the CDDL part of ZFS) is the major
> turn-off in any kind of production environment.
>
> At the moment I don't know how FreeBSD handles ZFS development, but
> maintaining a not-really-fully-ZFS alongside Oracle's is a no-go, IMHO.
> Maybe forking it and calling it whatever-name-you-want-FS would be
> better (but that would violate the CDDL, as far as I can see).
>
> If you want ZFS, you will have to bite the bullet, throw some $$$
> into Oracle's hive and get a fully licensed ZFS along with Solaris.
>
> If that's not an option, move along and choose something different.
>
> So, long story short, I do not see any option to use ZFS on a free system.

There are two versions of ZFS: Oracle's ZFS in Solaris 11 and the other
ZFS, which is the open-source evolution of the latest ZFS from
OpenSolaris.  This open-source version is mainly developed within
IllumOS, which can be considered the OpenSolaris heir and is backed
by the Nexenta company.  Two other companies, Joyent and Delphix, also
hired former Sun Solaris developers and are putting some effort into it.

FreeBSD basically pulls the changes from IllumOS regularly.  A handful
of bugfixes did go in the other direction, though not that many.
IIRC, I've also seen one or two bugfixes committed into FreeBSD that
came from ZFS On Linux.

--
Jeremie Le Hen

Scientists say the world is made up of Protons, Neutrons and Electrons.
They forgot to mention Morons.


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Bryan Allen
I apologize this is off-topic, but I'm somewhat close to the illumos project
and would like to correct a few things.

+------------------------------------------------------------------------------
| On 2013-02-21 22:12:45, Jeremie Le Hen wrote:
|
| > So, long story short, I do not see any option to use ZFS on a free system.

This is not correct, as Jeremie notes below. Here's some delicious pudding
proof, though.

https://github.com/illumos/illumos-gate/tree/master/usr/src/uts/common/fs/zfs

There is zero reason not to have ZFS in a free system. Consider its inclusion
in FreeBSD.

(I can't really imagine its inclusion in OpenBSD, though. License arguments are
incredibly boring, but it just doesn't seem at all likely.)
 
| There are two versions of ZFS: Oracle's ZFS in Solaris 11 and the other
| ZFS, which is the open-source evolution of the latest ZFS from
| OpenSolaris.  This open-source version is mainly developed within
| IllumOS, which can be considered the OpenSolaris heir and is backed
| by the Nexenta company.  Two other companies, Joyent and Delphix, also
| hired former Sun Solaris developers and are putting some effort into it.

This is also slightly incorrect. illumos (not IllumOS) is not backed by
Nexenta. illumos is an open source project that Joyent, Delphix and Nexenta all
contribute to. To date:

Joyent's major contributions to illumos include ZFS Write I/O Throttle and a
port of the Linux KVM hypervisor.

Delphix recently upstreamed ZFS feature flags, making ZFS versions more
portable.

Nexenta's contributions tend to come in the form of HBA driver work, as that's
their business model (storage).

All companies provide bug fixes of various sorts as well.

The number of non-employee contributors is small, but exists. There is a lot of
legacy in the build system, so writing code and running builds is somewhat
non-trivial.

illumos is the core OS and utilities, similar to the OS/NET source
distributions if you're familiar with Solaris development.

Or like kernel.org, if you like. (The kernel plus other stuff (like ZFS).)

illumos is what you use to build illumos-based distributions, like SmartOS,
OmniOS, or OpenIndiana.

| FreeBSD basically pulls the changes from IllumOS regularly.  A handful
| of bugfixes did go in the other direction, though not that many.
| IIRC, I've also seen one or two bugfixes committed into FreeBSD that
| came from ZFS On Linux.

illumos has seen some bug fixes from the FreeBSD folks, as you mention, but
they are primarily a consumer still. (Love seeing ZFS and DTrace on FreeBSD!)

zfsonlinux is developed by LLNL, and is core to their supercomputing
infrastructure. My experience with it has been pretty solid over the last year.

Cheers.
--
bdha
cyberpunk is dead. long live cyberpunk.


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Jeremie Le Hen-2
On Thu, Feb 21, 2013 at 05:15:35PM -0500, Bryan Horstmann-Allen wrote:
> I apologize this is off-topic, but I'm somewhat close to the illumos project
> and would like to correct a few things.
>
> [...things corrected...]

Well, thank you very much for correcting me and providing us with
high-quality information!

Regards,
--
Jeremie Le Hen

Scientists say the world is made up of Protons, Neutrons and Electrons.
They forgot to mention Morons.


Re: Precisions on ZFS

Matthias Appel
In reply to this post by Jeremie Le Hen-2
On 21.02.2013 22:12, Jeremie Le Hen wrote:

> On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:
>> Yupp, I think that (besides the CDDL part of ZFS) is the major
>> turn-off in any kind of production environment.
>>
>> At the moment I don't know how FreeBSD handles ZFS development, but
>> maintaining a not-really-fully-ZFS alongside Oracle's is a no-go, IMHO.
>> Maybe forking it and calling it whatever-name-you-want-FS would be
>> better (but that would violate the CDDL, as far as I can see).
>>
>> If you want ZFS, you will have to bite the bullet, throw some $$$
>> into Oracle's hive and get a fully licensed ZFS along with Solaris.
>>
>> If that's not an option, move along and choose something different.
>>
>> So, long story short, I do not see any option to use ZFS on a free system.
> There are two versions of ZFS: Oracle's ZFS in Solaris 11 and the other
> ZFS, which is the open-source evolution of the latest ZFS from
> OpenSolaris.  This open-source version is mainly developed within
> IllumOS, which can be considered the OpenSolaris heir and is backed
> by the Nexenta company.  Two other companies, Joyent and Delphix, also
> hired former Sun Solaris developers and are putting some effort into it.
>
Yes, there are two (or more) versions of ZFS, as you mentioned before.

Whether this is the right thing is another story!


Re: Precisions on ZFS

Matthias Appel
On 22.02.2013 00:40, Matthias Appel wrote:

> On 21.02.2013 22:12, Jeremie Le Hen wrote:
>> On Wed, Feb 20, 2013 at 12:32:02AM +0100, Matthias Appel wrote:
>>> Yupp, I think that (besides the CDDL part of ZFS) is the major
>>> turn-off in any kind of production environment.
>>>
>>> At the moment I don't know how FreeBSD handles ZFS development, but
>>> maintaining a not-really-fully-ZFS alongside Oracle's is a no-go, IMHO.
>>> Maybe forking it and calling it whatever-name-you-want-FS would be
>>> better (but that would violate the CDDL, as far as I can see).
>>>
>>> If you want ZFS, you will have to bite the bullet, throw some $$$
>>> into Oracle's hive and get a fully licensed ZFS along with Solaris.
>>>
>>> If that's not an option, move along and choose something different.
>>>
>>> So, long story short, I do not see any option to use ZFS on a free
>>> system.
>> There are two versions of ZFS: Oracle's ZFS in Solaris 11 and the other
>> ZFS, which is the open-source evolution of the latest ZFS from
>> OpenSolaris.  This open-source version is mainly developed within
>> IllumOS, which can be considered the OpenSolaris heir and is backed
>> by the Nexenta company.  Two other companies, Joyent and Delphix, also
>> hired former Sun Solaris developers and are putting some effort into it.
>>
> Yes, there are two (or more) versions of ZFS, as you mentioned before.

That is what I wanted to say... so if there is ZFS-a and ZFS-b, why call
both of them ZFS?

>
> Whether this is the right thing is another story!

Either do it right, or don't do it... but it's not my effort that goes
into ZFS (which is fine by me, I am a user, not a coder!), so they have
to decide. I only have to decide whether I use it... and I don't!


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Theo de Raadt
In reply to this post by Bryan Allen
> There is zero reason not to have ZFS in a free system. Consider its inclusion
> in FreeBSD.

Just because FreeBSD decided to compromise in regards to ZFS, does not
mean everyone else has to as well.  They could include all sorts of
other code with similar licenses, yet there they often stand firm.

None of that matters here.

As to the rest of what you say about ZFS, I doubt anyone here really
cares about ZFS as regards the subject of this list -- OpenBSD.


Re: Precisions on ZFS

bofh-6
In reply to this post by Matthias Appel
On Feb 21, 2013, at 6:57 PM, Matthias Appel <[hidden email]> wrote:
>
> That is what I wanted to say... so if there is ZFS-a and ZFS-b, why call both of them ZFS?

ZFS has version numbers.  They are backward but not forward compatible, so newer code can mount older ZFS but not the other way round.  As the version increases, capabilities increase: compression, more compression options, dedup and finally, in the version in Solaris 11, encryption as well.

None of the illumos/OpenSolaris versions of ZFS support ZFS-native encryption, sadly.
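
For anyone who wants to check what a given system speaks, the open-source
tooling can show it (pool and dataset names here are only placeholders):

# zpool upgrade -v        # list the pool versions/features this zpool binary knows about
# zpool get version tank  # on-disk version of an existing pool ("tank" is an example name)
# zfs get version tank    # filesystem version, tracked separately from the pool version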


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Juan Francisco Cantero Hurtado
In reply to this post by Bryan Allen
On Thu, Feb 21, 2013 at 05:15:35PM -0500, Bryan Horstmann-Allen wrote:

> I apologize this is off-topic, but I'm somewhat close to the illumos project
> and would like to correct a few things.
>
> +------------------------------------------------------------------------------
> | On 2013-02-21 22:12:45, Jeremie Le Hen wrote:
> |
> | > So, long story short, I do not see any option to use ZFS on a free system.
>
> This is not correct, as Jeremie notes below. Here's some delicious pudding
> proof, though.
>
> https://github.com/illumos/illumos-gate/tree/master/usr/src/uts/common/fs/zfs
>
> There is zero reason not to have ZFS in a free system. Consider its inclusion
> in FreeBSD.
>
> (I can't really imagine its inclusion in OpenBSD, though. License arguments are
> incredibly boring, but it just doesn't seem at all likely.)

The problem with licenses is different between FreeBSD/NetBSD/Linux and
OpenBSD. FreeBSD uses an extra layer for compatibility with OpenSolaris,
and they have support for loadable kernel modules. NetBSD uses a similar
approach.

ZFS on Linux uses FUSE; I don't know if they also use an extra layer for
compatibility with OpenSolaris.

OpenBSD doesn't have support for loadable kernel modules or FUSE, so
OpenBSD would have to include the code inside the kernel. This is a big
difference from FreeBSD/NetBSD/Linux.

Also, FreeBSD had to adapt its kernel to the peculiarities of ZFS. Did
you try the first version of FreeBSD with ZFS? The performance was
horrible.

Here in the BSD world we have HAMMER, a good alternative with a
compatible license and reasonable requirements.

If ZFS had a compatible license, the problem would be the same as with
HAMMER: someone would have to do the work. I think most OpenBSD
developers already have a big enough to-do list.

>  
> | There are two versions of ZFS: Oracle's ZFS in Solaris 11 and the other
> | ZFS, which is the open-source evolution of the latest ZFS from
> | OpenSolaris.  This open-source version is mainly developed within
> | IllumOS, which can be considered the OpenSolaris heir and is backed
> | by the Nexenta company.  Two other companies, Joyent and Delphix, also
> | hired former Sun Solaris developers and are putting some effort into it.
>
> This is also slightly incorrect. illumos (not IllumOS) is not backed by
> Nexenta. illumos is an open source project that Joyent, Delphix and Nexenta all
> contribute to. To date:
>
> Joyent's major contributions to illumos include ZFS Write I/O Throttle and a
> port of the Linux KVM hypervisor.
>
> Delphix recently upstreamed ZFS feature flags, making ZFS versions more
> portable.
>
> Nexenta's contributions tend to come in the form of HBA driver work, as that's
> their business model (storage).
>
> All companies provide bug fixes of various sorts as well.
>
> The number of non-employee contributors is small, but exists. There is a lot of
> legacy in the build system, so writing code and running builds is somewhat
> non-trivial.
>
> illumos is the core OS and utilities, similar to the OS/NET source
> distributions if you're familiar with Solaris development.
>
> Or like kernel.org, if you like. (The kernel plus other stuff (like ZFS).)
>
> illumos is what you use to build illumos-based distributions, like SmartOS,
> OmniOS, or OpenIndiana.
>
> | FreeBSD basically pulls the changes from IllumOS regularly.  A handful
> | of bugfixes did go in the other direction, though not that many.
> | IIRC, I've also seen one or two bugfixes committed into FreeBSD that
> | came from ZFS On Linux.
>
> illumos has seen some bug fixes from the FreeBSD folks, as you mention, but
> they are primarily a consumer still. (Love seeing ZFS and DTrace on FreeBSD!)
>
> zfsonlinux is developed by LLNL, and is core to their supercomputing
> infrastructure. My experience with it has been pretty solid over the last year.
>
> Cheers.
> --
> bdha
> cyberpunk is dead. long live cyberpunk.

--
Juan Francisco Cantero Hurtado http://juanfra.info


Re: Millions of files in /var/www & inode / out of space issue.

Juan Francisco Cantero Hurtado
In reply to this post by Nick Holland
On Tue, Feb 19, 2013 at 07:41:11AM -0500, Nick Holland wrote:

> On 02/19/13 05:47, MJ wrote:
> > Which app are you running that is generating millions of tiny files
> > in a single directory?  Regardless, in this case OpenBSD is not the
> > right tool for the job. You need either FreeBSD or a Solaris variant
> > to handle this problem because you need ZFS.
> >
> >
> > What limits does ZFS have? ---------------------------------------
> > The limitations of ZFS are designed to be so large that they will
> > never be encountered in any practical operation. ZFS can store 16
> > Exabytes in each storage pool, file system, file, or file attribute.
> > ZFS can store billions of names: files or directories in a directory,
> > file systems in a file system, or snapshots of a file system. ZFS can
> > store trillions of items: files in a file system, file systems,
> > volumes, or snapshots in a pool.
> >
> >
> > I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it
> > were then that would pretty much eliminate the need for my one and
> > only FreeBSD box ;-)
>
> The usual stated reason is "license", it is completely unacceptable to
> OpenBSD.
>
> The other reason usually not given which I suspect would become obvious
> were the license not an instant non-starter is the nature of ZFS.  As it
> is a major memory hog, it works well only on loaded 64 bit platforms.
> Since most of our 64 bit platforms are older, and Alpha and SGI machines
> with many gigabytes of memory are rare, you are probably talking an
> amd64 and maybe some sparc64 systems.
>
> Also...see the number of "ZFS Tuning Guides" out there.  How...1980s.
> The OP here has a "special case" use, but virtually all ZFS uses involve
> knob twisting and experimentation, which is about as anti-OpenBSD as you
> can get.  Granted, there are a lot of people who love knob-twisting, but
> that's not what OpenBSD is about.
>
> I use ZFS, and have a few ZFS systems in production, and what it does is
> pretty amazing, but mostly in the sense of the gigabytes of RAM it
> consumes for basic operation (and unexplained file system wedging).
> I've usually seen it used as a way to avoid good system design.  Yes,
> huge file systems can be useful, but usually in papering over basic
> design flaws.

If you don't like the RAM consumption of ZFS for basic operations,
enable deduplication. You will cry like a baby :D
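
Related: zdb can estimate the cost before anyone turns it on (pool and
dataset names are placeholders, and this is the illumos/FreeBSD tooling,
nothing OpenBSD ships):

# zdb -S tank                  # simulate dedup, print the table histogram and estimated ratio
# zfs set dedup=on tank/data   # a per-dataset property; only worth it if the ratio justifies the RAM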

--
Juan Francisco Cantero Hurtado http://juanfra.info


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Andres Perera-4
In reply to this post by Juan Francisco Cantero Hurtado
On Thu, Feb 21, 2013 at 9:59 PM, Juan Francisco Cantero Hurtado
<[hidden email]> wrote:

> OpenBSD doesn't have support for loadable kernel modules or FUSE, so
> OpenBSD would have to include the code inside the kernel. This is a big
> difference from FreeBSD/NetBSD/Linux.

lkm(4) is outdated with wrong information about a feature no longer present?


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Juan Francisco Cantero Hurtado
On Thu, Feb 21, 2013 at 10:54:58PM -0430, Andres Perera wrote:
> On Thu, Feb 21, 2013 at 9:59 PM, Juan Francisco Cantero Hurtado
> <[hidden email]> wrote:
>
> > OpenBSD doesn't have support for loadable kernel modules or FUSE, so
> > OpenBSD would have to include the code inside the kernel. This is a big
> > difference from FreeBSD/NetBSD/Linux.
>
> lkm(4) is outdated with wrong information about a feature no longer present?

My fault :)

--
Juan Francisco Cantero Hurtado http://juanfra.info


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Rod Whitworth-3
In reply to this post by Andres Perera-4
On Thu, 21 Feb 2013 22:54:58 -0430, Andres Perera wrote:

>On Thu, Feb 21, 2013 at 9:59 PM, Juan Francisco Cantero Hurtado
><[hidden email]> wrote:
>
>> OpenBSD doesn't have support for loadable kernel modules or FUSE, so
>> OpenBSD would have to include the code inside the kernel. This is a big
>> difference from FreeBSD/NetBSD/Linux.
>
>lkm(4) is outdated with wrong information about a feature no longer present?
>

From cvsweb:src/lkm/ap/Attic/README

Revision 1.3
Mon Feb 24 22:30:12 2003 UTC (10 years ago) by matthieu
Branches: MAIN
CVS tags: HEAD
FILE REMOVED
Changes since revision 1.2: +1 -1 lines
Bye, unused code.

R/

*** NOTE *** Please DO NOT CC me. I <am> subscribed to the list.
Mail to the sender address that does not originate at the list server is tarpitted. The reply-to: address is provided for those who feel compelled to reply off list. Thankyou.

Rod/
---
This life is not the real thing.
It is not even in Beta.
If it was, then OpenBSD would already have a man page for it.


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Philip Guenther-2
On Thu, Feb 21, 2013 at 8:29 PM, Rod Whitworth <[hidden email]> wrote:
> On Thu, 21 Feb 2013 22:54:58 -0430, Andres Perera wrote:
...

>>lkm(4) is outdated with wrong information about a feature no longer present?
>
> From cvsweb:src/lkm/ap/Attic/README
>
> Revision 1.3
> Mon Feb 24 22:30:12 2003 UTC (10 years ago) by matthieu
> Branches: MAIN
> CVS tags: HEAD
> FILE REMOVED
> Changes since revision 1.2: +1 -1 lines
> Bye, unused code.

This is too subtle for me.  How is that relevant to the question Andres asked?


Philip Guenther


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Eric Furman-3
On Thu, Feb 21, 2013, at 11:43 PM, Philip Guenther wrote:

> On Thu, Feb 21, 2013 at 8:29 PM, Rod Whitworth <[hidden email]>
> wrote:
> > On Thu, 21 Feb 2013 22:54:58 -0430, Andres Perera wrote:
> ...
> >>lkm(4) is outdated with wrong information about a feature no longer present?
> >
> > From cvsweb:src/lkm/ap/Attic/README
> >
> > Revision 1.3
> > Mon Feb 24 22:30:12 2003 UTC (10 years ago) by matthieu
> > Branches: MAIN
> > CVS tags: HEAD
> > FILE REMOVED
> > Changes since revision 1.2: +1 -1 lines
> > Bye, unused code.
>
> This is too subtle for me.  How is that relevant to the question Andres
> asked?

Agreed. So why can I find lkm(4) in the man pages, and why does it
reference OpenBSD 5.0?
This is the first time I was even aware OBSD had anything to do with
lkm.


Re: Precisions on ZFS (was: Millions of files in /var/www & inode / out of space issue.)

Antoine Verheijen
On 2013-02-21, at 11:21 PM, Eric Furman wrote:

> On Thu, Feb 21, 2013, at 11:43 PM, Philip Guenther wrote:
>> On Thu, Feb 21, 2013 at 8:29 PM, Rod Whitworth <[hidden email]>
>> wrote:
>>> On Thu, 21 Feb 2013 22:54:58 -0430, Andres Perera wrote:
>> ...
>>>> lkm(4) is outdated with wrong information about a feature no longer present?
>>>
>>> From cvsweb:src/lkm/ap/Attic/README
>>>
>>> Revision 1.3
>>> Mon Feb 24 22:30:12 2003 UTC (10 years ago) by matthieu
>>> Branches: MAIN
>>> CVS tags: HEAD
>>> FILE REMOVED
>>> Changes since revision 1.2: +1 -1 lines
>>> Bye, unused code.
>>
>> This is too subtle for me.  How is that relevant to the question Andres
>> asked?
>
> Agreed. So why can I find lkm(4) in the man pages and it references
> OpenBSD 5.0??
> This is the first time I was even aware OBSD had anything to do with
> lkm.

Because the lkm interface is used to load dynamic kernel modules in
OpenBSD, like the man page says.

I have been doing this for the OpenAFS client from OpenBSD 3.6 through
to 5.2, inclusive, at least for i386.

I have no idea what src/lkm used to do, but modload works just fine using
the lkm interface.
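
For anyone who never saw that interface, it looked roughly like this (the
module path is hypothetical; lkm was still present in the releases
mentioned above):

# modstat                      # list currently loaded lkm modules
# modload /usr/lkm/openafs.o   # load an object file built as an lkm (example path)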

------------------------------------------------------------------------
Antoine Verheijen                   Email: [hidden email]
AICT (formerly CNS)                 Phone: (780) 492-9312
University of Alberta               Fax:   (780) 492-1729
