overlapping IPv6 fragments in pf


overlapping IPv6 fragments in pf

Alexander Bluhm
Hi,

Drop overlapping IPv6 fragments in the pf reassembly code.  RFC 5722
demands this.  Our stack does that already.

The text "constituent fragments, including those not yet received,
MUST be discarded" does not make clear how long to do that.  The
easiest way for pf is to wait until the packet is complete and drop
it then.

Is that a good idea or should we wait for the reassembly timeout?

ok?

bluhm


Index: net/pf_norm.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/net/pf_norm.c,v
retrieving revision 1.148
diff -u -p -r1.148 pf_norm.c
--- net/pf_norm.c 3 Jan 2012 17:06:38 -0000 1.148
+++ net/pf_norm.c 11 Jan 2012 20:58:58 -0000
@@ -95,9 +95,11 @@ struct pf_fragment {
 
  RB_ENTRY(pf_fragment) fr_entry;
  TAILQ_ENTRY(pf_fragment) frag_next;
+ TAILQ_HEAD(pf_fragq, pf_frent) fr_queue;
  u_int32_t fr_timeout;
  u_int16_t fr_maxlen; /* maximum length of single fragment */
- TAILQ_HEAD(pf_fragq, pf_frent) fr_queue;
+ u_int8_t fr_flags;
+#define FR_OVERLAP 0x01
 };
 
 struct pf_fragment_tag {
@@ -313,9 +315,10 @@ pf_fillup_fragment(struct pf_fragment_cm
  }
 
  *(struct pf_fragment_cmp *)frag = *key;
+ TAILQ_INIT(&frag->fr_queue);
  frag->fr_timeout = time_second;
  frag->fr_maxlen = frent->fe_len;
- TAILQ_INIT(&frag->fr_queue);
+ frag->fr_flags = 0;
 
  RB_INSERT(pf_frag_tree, &pf_frag_tree, frag);
  TAILQ_INSERT_HEAD(&pf_fragqueue, frag, frag_next);
@@ -363,10 +366,13 @@ pf_fillup_fragment(struct pf_fragment_cm
  if (prev != NULL && prev->fe_off + prev->fe_len > frent->fe_off) {
  u_int16_t precut;
 
+ frag->fr_flags |= FR_OVERLAP;
  precut = prev->fe_off + prev->fe_len - frent->fe_off;
- if (precut >= frent->fe_len)
- goto bad_fragment;
- DPFPRINTF(LOG_NOTICE, "overlap -%d", precut);
+ if (precut >= frent->fe_len) {
+ DPFPRINTF(LOG_NOTICE, "new frag overlapped");
+ goto drop_fragment;
+ }
+ DPFPRINTF(LOG_NOTICE, "frag head overlap %d", precut);
  m_adj(frent->fe_m, precut);
  frent->fe_off += precut;
  frent->fe_len -= precut;
@@ -377,9 +383,10 @@ pf_fillup_fragment(struct pf_fragment_cm
  {
  u_int16_t aftercut;
 
+ frag->fr_flags |= FR_OVERLAP;
  aftercut = frent->fe_off + frent->fe_len - after->fe_off;
- DPFPRINTF(LOG_NOTICE, "adjust overlap %d", aftercut);
  if (aftercut < after->fe_len) {
+ DPFPRINTF(LOG_NOTICE, "frag tail overlap %d", aftercut);
  m_adj(after->fe_m, aftercut);
  after->fe_off += aftercut;
  after->fe_len -= aftercut;
@@ -387,6 +394,7 @@ pf_fillup_fragment(struct pf_fragment_cm
  }
 
  /* This fragment is completely overlapped, lose it */
+ DPFPRINTF(LOG_NOTICE, "old frag overlapped");
  next = TAILQ_NEXT(after, fr_next);
  TAILQ_REMOVE(&frag->fr_queue, after, fr_next);
 
@@ -593,6 +601,23 @@ pf_reassemble6(struct mbuf **m0, struct
 
  if (!pf_isfull_fragment(frag))
  return (PF_PASS);  /* drop because *m0 is NULL, no error */
+
+ /*
+ * RFC5722:  When reassembling an IPv6 datagram, if one or more its
+ * constituent fragments is determined to be an overlapping fragment,
+ * the entire datagram (and any constituent fragments, including those
+ * not yet received) MUST be silently discarded.
+ */
+ if (frag->fr_flags & FR_OVERLAP) {
+ pf_free_fragment(frag);
+ REASON_SET(reason, PFRES_FRAG);
+ /*
+ * We cannot return PF_DROP here as this requires a valid
+ * mbuf *m0 in pf_test().  But we can savely return PF_PASS
+ * as with *m0 == NULL the packet will be dropped.
+ */
+ return (PF_PASS);
+ }
 
  /* We have all the data */
  extoff = frent->fe_extoff;
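The approach in this first diff can be condensed into a standalone toy model (the struct and function names below are illustrative, not the real pf data structures): every overlap sets a flag on the reassembly state, and the completion check drops the fully reassembled datagram if the flag is set.

```c
/* Illustrative model of the first approach: queue fragments as usual,
 * but remember (FR_OVERLAP-style flag) that an overlap was seen, and
 * refuse to hand out the reassembled packet at completion time. */
struct frag { int off, len; };

struct fstate {
	struct frag q[16];
	int n;
	int overlap;	/* any overlap seen so far */
};

/* Add a fragment; flag the state if it overlaps a queued one. */
static void fillup_flag(struct fstate *s, int off, int len)
{
	int i;

	for (i = 0; i < s->n; i++)
		if (off < s->q[i].off + s->q[i].len &&
		    s->q[i].off < off + len)
			s->overlap = 1;
	if (s->n < 16) {
		s->q[s->n].off = off;
		s->q[s->n].len = len;
		s->n++;
	}
}

/* At reassembly time: discard the whole datagram on any overlap. */
static int reassemble_ok(const struct fstate *s)
{
	return (!s->overlap);
}
```

The open question above, whether to wait for completion or for the reassembly timeout, only affects how long this flagged state survives.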


Re: overlapping IPv6 fragments in pf

Fernando Gont-2
On 01/12/2012 12:03 AM, Alexander Bluhm wrote:
> Drop overlapping IPv6 fragments in pf reassembly code.  RFC 5722
> is demanding that.  Our stack does that already.
>
> The text "constituent fragments, including those not yet received,
> MUST be discarded" does not make clear, how long to do that.  The
> easiest way for pf is to wait until the packet is complete and drop
> it then.
>
> Is that a good idea or should we wait for the reassembly timeout?

I'd argue that you should drop all the "constituent fragments" as soon
as you receive them.

Since there's no legitimate reason for overlapping fragments, get rid of
them ASAP.  And if there are more fragments (for the same packet)
coming, they will be dropped as a result of the reassembly timeout.

Thanks,
--
Fernando Gont
e-mail: [hidden email] || [hidden email]
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1


Re: overlapping IPv6 fragments in pf

Kenneth Gober
On Thu, Jan 12, 2012 at 3:31 AM, Fernando Gont <[hidden email]> wrote:

> On 01/12/2012 12:03 AM, Alexander Bluhm wrote:
> > Drop overlapping IPv6 fragments in pf reassembly code.  RFC 5722
> > is demanding that.  Our stack does that already.
> >
> > The text "constituent fragments, including those not yet received,
> > MUST be discarded" does not make clear, how long to do that.  The
> > easiest way for pf is to wait until the packet is complete and drop
> > it then.
> >
> > Is that a good idea or should we wait for the reassembly timeout?
>
> I'd argue that you should drop all the "constituent fragments" as soon
> as you receive them.
>
> Since there's no legitimate reason of overlapping fragments, get rid of
> them asap. And if there were more fragments (for the same packet)
> coming, they will be dropped as a result of the reassembly timeout.
>

They will be dropped if the remaining fragments can't be assembled
into a complete packet.  But when overlapping fragments are combined
with the possibility of fragment duplication, I'd argue that you have to
store just enough state to be able to either immediately drop newly
arriving fragments or tag them as "not to be reassembled".

-ken


Re: overlapping IPv6 fragments in pf

Alexander Bluhm
In reply to this post by Fernando Gont-2
On Thu, Jan 12, 2012 at 05:31:00AM -0300, Fernando Gont wrote:
> I'd argue that you should drop all the "constituent fragments" as soon
> as you receive them.
>
> Since there's no legitimate reason of overlapping fragments, get rid of
> them asap. And if there were more fragments (for the same packet)
> coming, they will be dropped as a result of the reassembly timeout.

I have reconsidered it and now drop the fragments immediately.  The
packet to be reassembled will be dropped after the timeout.  Now pf
works like our IPv6 stack.

- Sort pf_fragment fields while there.
- If the fr_queue is empty, we had overlapping fragments; don't add
  new ones.
- If we detect overlapping IPv6 fragments, flush the fr_queue and
  drop all fragments immediately.
- Rearrange the debug output to make clear what happens.
- An IPv4 fragment that is totally overlapped does not increase the
  bad fragment counter.
- Put a KASSERT into pf_isfull_fragment() to make sure the fr_queue
  is never empty there.

ok?

bluhm


Index: net/pf_norm.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/net/pf_norm.c,v
retrieving revision 1.148
diff -u -p -r1.148 pf_norm.c
--- net/pf_norm.c 3 Jan 2012 17:06:38 -0000 1.148
+++ net/pf_norm.c 12 Jan 2012 18:58:05 -0000
@@ -95,9 +95,9 @@ struct pf_fragment {
 
  RB_ENTRY(pf_fragment) fr_entry;
  TAILQ_ENTRY(pf_fragment) frag_next;
+ TAILQ_HEAD(pf_fragq, pf_frent) fr_queue;
  u_int32_t fr_timeout;
  u_int16_t fr_maxlen; /* maximum length of single fragment */
- TAILQ_HEAD(pf_fragq, pf_frent) fr_queue;
 };
 
 struct pf_fragment_tag {
@@ -313,9 +313,9 @@ pf_fillup_fragment(struct pf_fragment_cm
  }
 
  *(struct pf_fragment_cmp *)frag = *key;
+ TAILQ_INIT(&frag->fr_queue);
  frag->fr_timeout = time_second;
  frag->fr_maxlen = frent->fe_len;
- TAILQ_INIT(&frag->fr_queue);
 
  RB_INSERT(pf_frag_tree, &pf_frag_tree, frag);
  TAILQ_INSERT_HEAD(&pf_fragqueue, frag, frag_next);
@@ -326,7 +326,15 @@ pf_fillup_fragment(struct pf_fragment_cm
  return (frag);
  }
 
- KASSERT(!TAILQ_EMPTY(&frag->fr_queue));
+ if (TAILQ_EMPTY(&frag->fr_queue)) {
+ /*
+ * Overlapping IPv6 fragments have been detected.  Do not
+ * reassemble packet but also drop future fragments.
+ * This will be done for this ident/src/dst combination
+ * until fragment queue timeout.
+ */
+ goto drop_fragment;
+ }
 
  /* Remember maximum fragment len for refragmentation */
  if (frent->fe_len > frag->fr_maxlen)
@@ -363,10 +371,15 @@ pf_fillup_fragment(struct pf_fragment_cm
  if (prev != NULL && prev->fe_off + prev->fe_len > frent->fe_off) {
  u_int16_t precut;
 
+ if (frag->fr_af == AF_INET6)
+ goto flush_fragentries;
+
  precut = prev->fe_off + prev->fe_len - frent->fe_off;
- if (precut >= frent->fe_len)
- goto bad_fragment;
- DPFPRINTF(LOG_NOTICE, "overlap -%d", precut);
+ if (precut >= frent->fe_len) {
+ DPFPRINTF(LOG_NOTICE, "new frag overlapped");
+ goto drop_fragment;
+ }
+ DPFPRINTF(LOG_NOTICE, "frag head overlap %d", precut);
  m_adj(frent->fe_m, precut);
  frent->fe_off += precut;
  frent->fe_len -= precut;
@@ -377,9 +390,12 @@ pf_fillup_fragment(struct pf_fragment_cm
  {
  u_int16_t aftercut;
 
+ if (frag->fr_af == AF_INET6)
+ goto flush_fragentries;
+
  aftercut = frent->fe_off + frent->fe_len - after->fe_off;
- DPFPRINTF(LOG_NOTICE, "adjust overlap %d", aftercut);
  if (aftercut < after->fe_len) {
+ DPFPRINTF(LOG_NOTICE, "frag tail overlap %d", aftercut);
  m_adj(after->fe_m, aftercut);
  after->fe_off += aftercut;
  after->fe_len -= aftercut;
@@ -387,6 +403,7 @@ pf_fillup_fragment(struct pf_fragment_cm
  }
 
  /* This fragment is completely overlapped, lose it */
+ DPFPRINTF(LOG_NOTICE, "old frag overlapped");
  next = TAILQ_NEXT(after, fr_next);
  TAILQ_REMOVE(&frag->fr_queue, after, fr_next);
 
@@ -402,6 +419,22 @@ pf_fillup_fragment(struct pf_fragment_cm
 
  return (frag);
 
+ flush_fragentries:
+ /*
+ * RFC5722:  When reassembling an IPv6 datagram, if one or
+ * more its constituent fragments is determined to be an
+ * overlapping fragment, the entire datagram (and any constituent
+ * fragments, including those not yet received) MUST be
+ * silently discarded.
+ */
+ DPFPRINTF(LOG_NOTICE, "flush overlapping fragments");
+ while ((prev = TAILQ_FIRST(&frag->fr_queue)) != NULL) {
+ TAILQ_REMOVE(&frag->fr_queue, prev, fr_next);
+
+ m_freem(prev->fe_m);
+ pool_put(&pf_frent_pl, prev);
+ pf_nfrents--;
+ }
  bad_fragment:
  REASON_SET(reason, PFRES_FRAG);
  drop_fragment:
@@ -415,6 +448,8 @@ pf_isfull_fragment(struct pf_fragment *f
 {
  struct pf_frent *frent, *next;
  u_int16_t off, total;
+
+ KASSERT(!TAILQ_EMPTY(&frag->fr_queue));
 
  /* Check if we are completely reassembled */
  if (TAILQ_LAST(&frag->fr_queue, pf_fragq)->fe_mff)


Re: overlapping IPv6 fragments in pf

Fernando Gont-2
Hi, Alexander,

On 01/12/2012 04:04 PM, Alexander Bluhm wrote:
> I have reconsidered it and drop the fragments immediately.  The
> packet to be reassembled will be dropped after timeout.  

Sorry: immediately, or after a timeout?


> Now pf
> works like our IPv6 stack.
>
> - Sort pf_fragment fields while there.
> - If the fr_queue is empty, we had overlapping fragments, don't add
>   new ones.

Not sure what this means.

P.S.: Will try your patch this weekend.

Thanks!

Best regards,
--
Fernando Gont
e-mail: [hidden email] || [hidden email]
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1


Re: overlapping IPv6 fragments in pf

Fernando Gont-2
In reply to this post by Kenneth Gober
On 01/12/2012 02:45 PM, Kenneth Gober wrote:

>     I'd argue that you should drop all the "constituent fragments" as soon
>     as you receive them.
>
>     Since there's no legitimate reason of overlapping fragments, get rid of
>     them asap. And if there were more fragments (for the same packet)
>     coming, they will be dropped as a result of the reassembly timeout.
>
> They will be dropped if the remaining fragments can't be assembled
> into a complete packet.  But when overlapping fragments are combined
> with the possibility of fragment duplication, I'd argue that you have to
> store just enough state to be able to either immediately drop newly
> arriving fragments or tag them as "not to be reassembled".

??

If you detect duplicate/overlapping fragments, you drop all the
fragments that you have for that packet.

If there were more fragments coming, they will be kept in the fragment
reassembly queue until a reassembly timeout.

Thanks,
--
Fernando Gont
e-mail: [hidden email] || [hidden email]
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1


Re: overlapping IPv6 fragments in pf

Alexander Bluhm
In reply to this post by Fernando Gont-2
On Fri, Jan 13, 2012 at 11:01:43AM -0300, Fernando Gont wrote:
> On 01/12/2012 04:04 PM, Alexander Bluhm wrote:
> > I have reconsidered it and drop the fragments immediately.  The
> > packet to be reassembled will be dropped after timeout.  
>
> Sorry: immediately, or after a timeout?

We have a list of fragments that belong to one packet.  When an
incoming fragment overlaps any fragment in the list, all fragments
are dropped immediately and the list becomes empty.  But the state
with the empty list will be kept until the timeout.

> > - If the fr_queue is empty, we had overlapping fragments, don't add
> >   new ones.
>
> Not sure what this means.

When more fragments arrive and the list is empty, there was an
overlap before, so the new fragment is dropped.  This state will
exist until the timeout.

> P.S.: Will try your patch this weekend.

Thanks.  I have already committed it, so you can use -current.  It
is probably not in the snapshots yet.  More testing is always good.

bluhm


Re: overlapping IPv6 fragments in pf

Fernando Gont-2
On 01/13/2012 12:11 PM, Alexander Bluhm wrote:

> On Fri, Jan 13, 2012 at 11:01:43AM -0300, Fernando Gont wrote:
>> On 01/12/2012 04:04 PM, Alexander Bluhm wrote:
>>> I have reconsidered it and drop the fragments immediately.  The
>>> packet to be reassembled will be dropped after timeout.  
>>
>> Sorry: immediately, or after a timeout?
>
> We have a list of fragments that belong to one packet.  When an
> incoming fragment overlaps any fragment in the list, all fragments
> are dropped immediately and the list becomes empty.  But the state
> with the empty list will be kept until the timeout.

If there was a fragment overlap, there was malicious activity, and
you're certainly not going to get any legitimate fragment reassembled.
Therefore, IMO, it doesn't make sense to tie resources (i.e., keep
state) for that.

If you do not keep state, then:
1) If you don't get any further fragments, you saved memory and other
resources, or,
2) If you do get other fragments, they'll be queued, and since other
fragments were dropped, there won't be any reassembled datagram, and
after a "reassembly timeout" the fragments will get dropped anyway.

Thanks,
--
Fernando Gont
e-mail: [hidden email] || [hidden email]
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1


Re: overlapping IPv6 fragments in pf

Alexander Bluhm
On Fri, Jan 13, 2012 at 02:13:09PM -0300, Fernando Gont wrote:

> If there was a fragment overlap, there was malicious activity, and
> you're certainly not going to get any legitimate fragment reassembled.
> Therefore, IMO, it doesn't make sense to tie resources (i.e., keep
> state) for that.
>
> If you do not keep state, then.
> 1) If you don't get any further fragments, you saved memory and other
> resources, or,
> 2) If you do get other fragments, they'll be queued, and since other
> fragments were dropped, there won't be any reassembled datagram, and
> after a "reassembly timeout" the fragments will get dropped anyway.

When I get 4 fragments overlapping like this:

1. |------|
2.     |------|
3. |------|
4.        |---|

I have to drop them all, "including those not yet received".

After fragment 2 I know there is an overlap, so drop 1 and 2.  If
I did not keep the state, I would not know that there was a problem
and would reassemble 3 and 4 into the complete packet.

My algorithm creates the state when 1 arrives,
drops 1 and 2 when 2 arrives,
drops 3 and 4 immediately when they arrive,
and finally drops the state after the timeout.

An empty state is little memory compared to packet data.
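This algorithm can be sketched as a toy model (illustrative names, not the pf code): on the first overlap the queue is flushed, and the now-empty state keeps rejecting fragments until its timeout.

```c
/* Toy model of the "flush and keep empty state" algorithm: once an
 * overlap is seen, every queued fragment is dropped and the state is
 * poisoned, so fragments 3 and 4 from the example are rejected too. */
struct frag { int off, len; };

struct fstate {
	struct frag q[16];
	int n;
	int poisoned;	/* overlap seen: drop all future fragments */
};

/* Returns 1 if the fragment was queued, 0 if it was dropped. */
static int fillup_poison(struct fstate *s, int off, int len)
{
	int i;

	if (s->poisoned)
		return (0);
	for (i = 0; i < s->n; i++)
		if (off < s->q[i].off + s->q[i].len &&
		    s->q[i].off < off + len) {
			s->n = 0;	/* flush the whole queue */
			s->poisoned = 1;
			return (0);
		}
	if (s->n < 16) {
		s->q[s->n].off = off;
		s->q[s->n].len = len;
		s->n++;
	}
	return (1);
}
```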

bluhm


Re: overlapping IPv6 fragments in pf

Simon Perreault-2
In reply to this post by Fernando Gont-2
On 2012-01-13 12:13, Fernando Gont wrote:
> If you do not keep state, then.
> 1) If you don't get any further fragments, you saved memory and other
> resources, or,
> 2) If you do get other fragments, they'll be queued, and since other
> fragments were dropped, there won't be any reassembled datagram, and
> after a "reassembly timeout" the fragments will get dropped anyway.

I disagree with your proposal.

The amount of memory saved in case 1) is insignificant. It's a few bytes. The
fragments themselves are not kept in memory. And there are no "other resources".

In case 2) you end up having to keep fragments in memory. This is bad.

Also, there would be no guarantee in case 2) that reassembly does not happen
because of possible packet duplication.

Simon


Re: overlapping IPv6 fragments in pf

Theo de Raadt
In reply to this post by Alexander Bluhm
> On Fri, Jan 13, 2012 at 02:13:09PM -0300, Fernando Gont wrote:
> > If there was a fragment overlap, there was malicious activity, and
> > you're certainly not going to get any legitimate fragment reassembled.
> > Therefore, IMO, it doesn't make sense to tie resources (i.e., keep
> > state) for that.
> >
> > If you do not keep state, then.
> > 1) If you don't get any further fragments, you saved memory and other
> > resources, or,
> > 2) If you do get other fragments, they'll be queued, and since other
> > fragments were dropped, there won't be any reassembled datagram, and
> > after a "reassembly timeout" the fragments will get dropped anyway.
>
> When I get 4 fragments overlapping like this:
>
> 1. |------|
> 2.     |------|
> 3. |------|
> 4.        |---|
>
> I have to drop them all, "including those not yet received".

That last bit is crazy.  You cannot maintain state until the potential
packets fall out of the fragment cache.

> After fragment 2 I know there is an overlap, so drop 1 and 2.  If
> I did not keep the state, I would not know that there was a problem
> and would reassemble 3 and 4 into the complete packet.
>
> My algorithm creates the state when 1 arrives,
> drops 1 and 2 when 2 arrives,
> drops 3 and 4 immediately when they arrive,
> and finally drops the state after the timeout.
>
> An empty state is little memory compared to packet data.

Actually, an empty state is a big deal, since it is way more important
and persistent in memory than packet data.  mbufs can be dropped if there
is no room, time, etc. to handle them.  But the state table requires some
pretty nasty code to handle expiration, and it is bad when it fills up.

The state table exists to map legit traffic.  It is not there to
maintain a cache of "the bad guys".  There could be a lot of bad guys.

From remote I could build a botnet of machines that send you
illegal fragments and, if you create state for the bad mappings, you'd
run your state table out.  When the state table gets full, it will
eject other stuff which is more important, like ssh sessions and such.

We allocate resources (ie. states) to create a mapping of the
'good sessions'.  It is impossible to allocate sufficient resources
to "keep a list of the bad traffic/sessions/guys".

If RFC 5722 is truly saying you must drop packets in the future,
then the only answer is to flush the entire IPv6 fragment cache.
Then the new fragments won't map to an existing fragment.  It is a new
fragment set...


Re: overlapping IPv6 fragments in pf

Alexander Bluhm
On Fri, Jan 13, 2012 at 11:44:20AM -0700, Theo de Raadt wrote:
> > I have to drop them all, "including those not yet received".
>
> That last bit is crazy.  You cannot maintain state until the potential
> packets fall out of the fragment cache.

After discussion with deraadt@ it became clear that dropping future
fragments can result in a denial of service.  So free the fragment
reassembly state immediately.

Unfortunately this triggered another bug.  When a fragment could
not be reassembled, it was logged.  During logging the reassembly
code is run again and the freed mbuf ended up in the fragment cache.
The quick fix is not to log after a pf_setup_pdesc() failure.  The
long term solution would be to move the call to pf_normalize() from
pf_setup_pdesc() to pf_test().

bluhm

ok?


Index: net/pf_norm.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/net/pf_norm.c,v
retrieving revision 1.149
diff -u -p -r1.149 pf_norm.c
--- net/pf_norm.c 13 Jan 2012 11:24:35 -0000 1.149
+++ net/pf_norm.c 13 Jan 2012 21:02:07 -0000
@@ -326,15 +326,7 @@ pf_fillup_fragment(struct pf_fragment_cm
  return (frag);
  }
 
- if (TAILQ_EMPTY(&frag->fr_queue)) {
- /*
- * Overlapping IPv6 fragments have been detected.  Do not
- * reassemble packet but also drop future fragments.
- * This will be done for this ident/src/dst combination
- * until fragment queue timeout.
- */
- goto drop_fragment;
- }
+ KASSERT(!TAILQ_EMPTY(&frag->fr_queue));
 
  /* Remember maximum fragment len for refragmentation */
  if (frent->fe_len > frag->fr_maxlen)
@@ -372,7 +364,7 @@ pf_fillup_fragment(struct pf_fragment_cm
  u_int16_t precut;
 
  if (frag->fr_af == AF_INET6)
- goto flush_fragentries;
+ goto free_fragment;
 
  precut = prev->fe_off + prev->fe_len - frent->fe_off;
  if (precut >= frent->fe_len) {
@@ -391,7 +383,7 @@ pf_fillup_fragment(struct pf_fragment_cm
  u_int16_t aftercut;
 
  if (frag->fr_af == AF_INET6)
- goto flush_fragentries;
+ goto free_fragment;
 
  aftercut = frent->fe_off + frent->fe_len - after->fe_off;
  if (aftercut < after->fe_len) {
@@ -419,22 +411,19 @@ pf_fillup_fragment(struct pf_fragment_cm
 
  return (frag);
 
- flush_fragentries:
+ free_fragment:
  /*
  * RFC5722:  When reassembling an IPv6 datagram, if one or
  * more its constituent fragments is determined to be an
  * overlapping fragment, the entire datagram (and any constituent
  * fragments, including those not yet received) MUST be
  * silently discarded.
+ *
+ * We do not drop future fragments, as this could be used
+ * for a denial of service attack.
  */
  DPFPRINTF(LOG_NOTICE, "flush overlapping fragments");
- while ((prev = TAILQ_FIRST(&frag->fr_queue)) != NULL) {
- TAILQ_REMOVE(&frag->fr_queue, prev, fr_next);
-
- m_freem(prev->fe_m);
- pool_put(&pf_frent_pl, prev);
- pf_nfrents--;
- }
+ pf_free_fragment(frag);
  bad_fragment:
  REASON_SET(reason, PFRES_FRAG);
  drop_fragment:
Index: net/pf.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/net/pf.c,v
retrieving revision 1.792
diff -u -p -r1.792 pf.c
--- net/pf.c 21 Dec 2011 23:00:16 -0000 1.792
+++ net/pf.c 13 Jan 2012 21:19:13 -0000
@@ -6650,7 +6650,12 @@ pf_test(sa_family_t af, int fwdir, struc
     == -1) {
  if (action == PF_PASS)
  return (PF_PASS);
- pd.pflog |= PF_LOG_FORCE;
+ /*
+ * We must not log the packet here.  pflog_bpfcopy() calls
+ * pf_setup_pdesc() again and then the packet might end up
+ * in the reassembly queue but the mbuf will be freed.
+ */
+ pd.pflog = 0;
  goto done;
  }
  pd.eh = eh;
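For contrast with the earlier variant, the committed behavior can be modeled like this (again an illustrative sketch, not the pf code): an overlap frees the whole reassembly state at once, so no empty state lingers in the table, and a later fragment simply starts a fresh queue.

```c
/* Toy model of the final behavior: an overlap frees the entire
 * reassembly state immediately.  Nothing lingers, so an attacker
 * cannot fill the state table with empty "poisoned" entries. */
struct frag { int off, len; };

struct fstate {
	struct frag q[16];
	int n;
};

/* Returns 1 if the fragment was queued, 0 if an overlap caused the
 * state to be freed (modeled here as emptying the queue). */
static int fillup_free(struct fstate *s, int off, int len)
{
	int i;

	for (i = 0; i < s->n; i++)
		if (off < s->q[i].off + s->q[i].len &&
		    s->q[i].off < off + len) {
			s->n = 0;	/* free all fragments and the state */
			return (0);
		}
	if (s->n < 16) {
		s->q[s->n].off = off;
		s->q[s->n].len = len;
		s->n++;
	}
	return (1);
}
```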


Re: overlapping IPv6 fragments in pf

Alexander Bluhm
In reply to this post by Theo de Raadt
On Fri, Jan 13, 2012 at 11:44:20AM -0700, Theo de Raadt wrote:
> > I have to drop them all, "including those not yet received".
>
> That last bit is crazy.  You cannot maintain state until the potential
> packets fall out of the fragment cache.

This is also true for the reassembly implementation in the IPv6 stack.
Let's fix that, too.

bluhm

ok?


Index: netinet6/frag6.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet6/frag6.c,v
retrieving revision 1.40
diff -u -p -r1.40 frag6.c
--- netinet6/frag6.c 10 Jan 2012 17:09:02 -0000 1.40
+++ netinet6/frag6.c 13 Jan 2012 21:54:26 -0000
@@ -285,14 +285,6 @@ frag6_input(struct mbuf **mp, int *offp,
  q6->ip6q_dst = ip6->ip6_dst;
  q6->ip6q_unfrglen = -1; /* The 1st fragment has not arrived. */
  q6->ip6q_nfrag = 0;
- } else if (LIST_EMPTY(&q6->ip6q_asfrag)) {
- /*
- * Overlapping fragments have been detected.  Do not
- * reassemble packet but also drop future fragments.
- * This will be done for this ident/src/dst combination
- * until fragment queue timeout.
- */
- goto dropfrag;
  }
 
  /*
@@ -411,8 +403,8 @@ frag6_input(struct mbuf **mp, int *offp,
  /*
  * RFC5722:  When reassembling an IPv6 datagram, if one or more its
  * constituent fragments is determined to be an overlapping fragment,
- * the entire datagram (and any constituent fragments, including those
- * not yet received) MUST be silently discarded.
+ * the entire datagram (and any constituent fragments) MUST be silently
+ * discarded.
  */
  if (paf6 != LIST_END(&q6->ip6q_asfrag)) {
  i = (paf6->ip6af_off + paf6->ip6af_frglen) - ip6af->ip6af_off;
@@ -549,8 +541,10 @@ frag6_input(struct mbuf **mp, int *offp,
  free(af6, M_FTABLE);
  }
  ip6stat.ip6s_fragdropped += q6->ip6q_nfrag;
+ TAILQ_REMOVE(&frag6_queue, q6, ip6q_queue);
  frag6_nfrags -= q6->ip6q_nfrag;
- q6->ip6q_nfrag = 0;
+ free(q6, M_FTABLE);
+ frag6_nfragpackets--;
 
  dropfrag:
  in6_ifstat_inc(dstifp, ifs6_reass_fail);


Re: overlapping IPv6 fragments in pf

Kenneth Gober
In reply to this post by Theo de Raadt
On Fri, Jan 13, 2012 at 1:44 PM, Theo de Raadt <[hidden email]>wrote:

> From remote I could build a bot-net of machines that send you
> illegal fragments and, if you create a state of the bad mappings, you'd
> run your state table out.  When the state table gets full, it will
> eject other stuff which is more important. Like ssh sessions and such.
>
> We allocate resources (ie. states) to create a mapping of the
> 'good sessions'.  It is impossible to allocate sufficient resources
> to "keep a list of the bad traffic/sessions/guys".
>

Couldn't you also build a bot-net of machines that send legal fragments?

I don't see the difference between a denial-of-service attack that fills
up your state table with states that have empty fragment lists, versus a
denial-of-service attack that fills up your state table with states that
have non-empty fragment lists.

To put it another way, an attacker who consumes states by sending
overlapping fragments that cause you to retain state (and discard
fragments) until the reassembly timeout expires is no worse than an
attacker who consumes states by sending non-overlapping fragments that
cause you to retain state (and store fragments) until the reassembly
timeout expires.  Arguably, the latter case is even worse.

-ken


Re: overlapping IPv6 fragments in pf

Theo de Raadt
> > From remote I could build a bot-net of machines that send you
> > illegal fragments and, if you create a state of the bad mappings, you'd
> > run your state table out.  When the state table gets full, it will
> > eject other stuff which is more important. Like ssh sessions and such.
> >
> > We allocate resources (ie. states) to create a mapping of the
> > 'good sessions'.  It is impossible to allocate sufficient resources
> > to "keep a list of the bad traffic/sessions/guys".
> >
>
> Couldn't you also build a bot-net of machines that send legal fragments?

All fragments are equal and legal until they either (a) defragment to
form complete packets, (b) get pushed out because the fragment cache
is full, or (c) time out of the fragment cache because of age.  The
change being talked about is when fragments purposefully overlap or
collide.  In that case, they are discarded.

The problem is there is an RFC which says that fragments _received in
the future -- after that collision event_ should also be discarded.
That requirement is bogus; it cannot be supported without substantial
storage.

> I don't see the difference between a denial-of-service attack that fills
> up your state table with states that have empty fragment lists, versus a
> denial-of-service attack that fills up your state table with states that
> have non-empty fragment lists.

Ah.  I can see you've not read the fragment cache code.

> To put it another way, an attacker who consumes states by sending
> overlapping fragments that cause you to retain state (and discard
> fragments) until the reassembly timeout expires is no worse than an
> attacker who consumes states by sending non-overlapping fragments that
> cause you to retain state (and store fragments) until the reassembly
> timeout expires.  Arguably, the latter case is even worse.

The latter case is not worse, since it is unavoidable.  Packets get lost.

As it happens, the fragment cache is only 1000 entries in size.  What the
RFC was demanding is impossible with only 1000 entries.

When loss or collisions happen, there is no justification for keeping
"state of that event" or "doing more work".