Commit Graph

93 Commits

Author SHA1 Message Date
Rusty Russell 18aabc3596 pytest: use query_gossip in test_gossip_query_channel_range.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-30 07:08:07 +00:00
Rusty Russell 1386fedfb6 pytest: use query_gossip in test_query_short_channel_id.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-30 07:08:07 +00:00
Rusty Russell d534a146d2 pytest: clean up test_gossip_timestamp_filter, use query_gossip.
The test relies on the fact that nodes don't make their own gossip
queries, so use devtools instead.

This revealed that the entire logic was broken!  It just happened to work.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-30 07:08:07 +00:00
Rusty Russell d24c850899 gossipd: restore a flag for fast pruning
I was seeing some accidental pruning under load on Travis; in
particular, we stopped accepting channel_updates because they were 103
seconds old.  But making the interval too long makes the prune test
untenable, so restore a separate flag that the test can use.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-27 00:01:34 +00:00
Rusty Russell 39c9dcbafc ratelimit: adjust based on --dev-fast-gossip, test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-20 06:55:00 +00:00
Rusty Russell 147eaced2e developer: consolidate gossip timing options into one --dev-fast-gossip.
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really variable.

Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.

Do the more obvious thing, and cut the network in half and check that
l1 and l3 time out.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-20 06:55:00 +00:00
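A minimal sketch of the pattern described above (the macro name and the
intervals are assumptions, not the actual gossipd source):

  #if DEVELOPER
  /* One dev flag scales every gossip timer at once. */
  #define GOSSIP_PRUNE_INTERVAL(dev_fast_gossip) \
          ((dev_fast_gossip) ? 120 : 1209600 /* two weeks */)
  #else
  #define GOSSIP_PRUNE_INTERVAL(dev_fast_gossip) 1209600
  #endif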
Rusty Russell a92ead48bf gossipd: ignore redundant channel_update and node_announcement.
If you send a message which simply changes the timestamp and
signature, we drop it.  You shouldn't be doing that anyway, and the
door to ignoring such updates was opened by option_gossip_query_ex,
which allows clients to ignore updates with the same checksum.

This is more aggressive at reducing spam messages, but we allow refreshes
(to be conservative, we allow them even when 1/2 of the way through the
refresh period).

I dropped the now-unnecessary sleep from test_gossip_pruning, too.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-20 06:55:00 +00:00
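A hedged sketch of that suppression rule (the names and the exact
refresh horizon are assumptions):

  #include <stdbool.h>
  #include <stdint.h>

  #define REFRESH_PERIOD (14 * 24 * 3600) /* assumed: ~prune horizon */

  /* Drop an update whose only changes are timestamp and signature,
   * unless we are at least 1/2 of the way to needing a refresh. */
  static bool should_drop_update(uint32_t old_ts, uint32_t new_ts,
                                 bool substance_changed)
  {
          if (substance_changed)
                  return false;            /* Real change: keep it. */
          if (new_ts >= old_ts + REFRESH_PERIOD / 2)
                  return false;            /* Legitimate refresh. */
          return true;                     /* Redundant: drop it. */
  }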
Rusty Russell 0bab2580fc gossipd: clean up local channel updates.
Make update_local_channel use a timer if it's too soon to make another
update.

1. Implement cupdate_different() which compares two updates.
2. make update_local_channel() take a single arg for timer usage.
3. Set timestamp of non-disable update back 5 minutes, so we can
   always generate a disable update if we need to.
4. Make update_local_channel() itself do the "unchanged update" suppression.
5. Keep pointer to the current timer so we override any old updates with
   a new one, to avoid a race.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-20 06:55:00 +00:00
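Points 2 and 5 above roughly take this shape; every name below is an
assumption standing in for lightningd's real timer plumbing:

  #include <stdint.h>

  struct timer;                            /* opaque, assumed */
  struct local_chan {
          struct timer *refresh_timer;     /* current pending timer */
          uint32_t last_update_timestamp;
  };

  extern void send_local_update(struct local_chan *chan);
  extern void free_timer(struct timer *t); /* NULL is a no-op */
  extern struct timer *start_timer(uint32_t delay_sec,
                                   void (*cb)(struct local_chan *),
                                   struct local_chan *chan);

  static void queue_local_update(struct local_chan *chan,
                                 uint32_t now, uint32_t min_interval)
  {
          if (now >= chan->last_update_timestamp + min_interval) {
                  send_local_update(chan); /* not too soon: send now */
                  return;
          }
          /* Too soon: re-arm a single timer, overriding any older
           * one so two queued updates can't race. */
          free_timer(chan->refresh_timer);
          chan->refresh_timer = start_timer(
                  chan->last_update_timestamp + min_interval - now,
                  send_local_update, chan);
  }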
Rusty Russell 70c4ac6d74 gossipd: suppress our own too-close node_announcement messages.
Never make them less than gossip_min_interval apart.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-20 06:55:00 +00:00
trueptolemy 059a6e0e0d pytest: Test excluding nodes in `getroute` 2019-09-16 12:22:06 +08:00
Rusty Russell 1c0d435f5e pytest: remove flaky part of test_gossip's test_gossip_no_empty_announcements
This "wait_for" failed on Travis, but it's unnecessary anyway.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-06 14:35:01 +02:00
Rusty Russell c99906a9a9 per-peer-daemons: tie in gossip filter.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-06 14:35:01 +02:00
Rusty Russell 5292f11818 pytest: test (fail) that we don't repeat gossip back to the node we got it from
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-09-06 14:35:01 +02:00
Christian Decker 8b8538024d bitcoind: Defer initialization of filteredblock_call->result
During sync it is highly likely that we can coalesce multiple calls
and share results among them.  We also report failures for
non-existent blocks early on, so we don't run into issues with blocks
that our bitcoind doesn't have yet.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2019-08-20 00:07:38 +00:00
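The coalescing amounts to consulting an in-flight list before issuing
a new bitcoind request; a sketch with assumed names:

  #include <stddef.h>
  #include <stdint.h>

  struct filteredblock;                      /* result type, assumed */
  struct filteredblock_call {
          uint32_t height;
          struct filteredblock *result;      /* NULL until complete */
          struct filteredblock_call *next;   /* in-flight list */
  };

  static struct filteredblock_call *in_flight;

  /* Reuse an in-flight call for the same height rather than asking
   * bitcoind again; callers then share its result when it arrives. */
  static struct filteredblock_call *find_in_flight(uint32_t height)
  {
          for (struct filteredblock_call *c = in_flight; c; c = c->next)
                  if (c->height == height)
                          return c;
          return NULL;
  }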
Christian Decker 187e493ab8 gossip: Stop backfilling the future
This was caused by us checking against the min_blockheight, which can
be negative for a newly created node, rather than the max_blockheight.
This is still safe, since we check for duplicates anyway in
`wallet_filteredblock_add`.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2019-08-20 00:07:38 +00:00
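The fix reduces to bounding requests by the right field; sketched with
assumed names:

  #include <stdbool.h>
  #include <stdint.h>

  /* Check against the best height actually seen (max_blockheight),
   * not min_blockheight: requests for future blocks fail early. */
  static bool block_may_exist(uint32_t height, uint32_t max_blockheight)
  {
          return height <= max_blockheight;
  }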
Rusty Russell 944439853a pytest: two tests for gossip of channels in as-yet-unknown blocks.
Two tests which crash lightningd in different ways.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-08-20 00:07:38 +00:00
Rusty Russell bf3b77a947 Travis: skip testing VALGRIND=1 DEVELOPER=0, remove the slowest non-developer tests.
I don't remember ever seeing a bug which only showed up in VALGRIND=1
with developer mode disabled, so don't test that, and spread the other
tests out more evenly.

In addition, disable the worst-performing tests in DEVELOPER=0 mode.

Here are timings from my build machine: the worst 6 (-: DEVELOPER=0
VALGRIND=0) alongside the same tests (+: DEVELOPER=1 VALGRIND=1):

-452.42s call     tests/test_pay.py::test_channel_spendable
+87.69s call     tests/test_pay.py::test_channel_spendable
-335.66s call     tests/test_gossip.py::test_gossip_store_compact_on_load
+47.41s call     tests/test_gossip.py::test_gossip_store_compact_on_load
-332.07s call     tests/test_connection.py::test_opening_tiny_channel
+89.71s call     tests/test_connection.py::test_opening_tiny_channel
-331.97s call     tests/test_pay.py::test_channel_spendable_large
+56.23s call     tests/test_pay.py::test_channel_spendable_large
-305.28s call     tests/test_invoices.py::test_invoice_routeboost
+37.57s call     tests/test_invoices.py::test_invoice_routeboost
-284.28s call     tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
+49.12s call     tests/test_plugin.py::test_htlc_accepted_hook_forward_restart

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-08-14 11:14:38 +00:00
ZmnSCPxj 3e74ca4b86 gossipd/routing.c: Correctly handle a duplicated entry in `exclude` of `getroute`. 2019-08-02 16:06:15 +02:00
ZmnSCPxj a5fb37298c tests/test_gossip.py: Add test to check that duplicated exclusions in `getroute` have no lasting effect. 2019-08-02 16:06:15 +02:00
Rusty Russell 54ce4ed1cf pytest: fail tests if we get any LOG_BROKEN level messages, unless flagged.
And clean up some dev ones which actually happen (mainly by calling
channel_fail_permanent which logs UNUSUAL, rather than
channel_internal_error which logs BROKEN).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-07-02 03:26:10 +00:00
Rusty Russell c303d7d534 gossipd: only do (automatic) store compaction at startup.
Rewriting the gossip_store is far simpler when we don't have any
pointers into it, so add some simple offline compaction code and
disable the automatic compaction code.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-21 20:03:10 -05:00
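Offline compaction is conceptually a copy that skips deleted records;
a sketch under an assumed record layout (the real store also has a
version header, per-record CRCs, and big-endian lengths):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define DELETED_BIT 0x80000000U          /* high bit of length */

  static bool compact_store(FILE *old, FILE *new)
  {
          uint32_t len;
          static uint8_t buf[65536];

          while (fread(&len, sizeof(len), 1, old) == 1) {
                  uint32_t reclen = len & ~DELETED_BIT;
                  if (reclen > sizeof(buf))
                          return false;            /* corrupt */
                  if (fread(buf, 1, reclen, old) != reclen)
                          return false;            /* truncated */
                  if (len & DELETED_BIT)
                          continue;                /* skip deleted */
                  if (fwrite(&len, sizeof(len), 1, new) != 1
                      || fwrite(buf, 1, reclen, new) != reclen)
                          return false;            /* write failed */
          }
          return true;
  }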
Rusty Russell c15d9ed37c gossip_store: make copy of corrupt gossip_store on failure.
This should help debugging vastly.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-21 22:03:35 +00:00
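A minimal sketch of the idea, with illustrative paths:

  #include <stdio.h>

  /* Keep the evidence for debugging instead of silently losing it. */
  static void save_corrupt_store(void)
  {
          if (rename("gossip_store", "gossip_store.corrupt") != 0)
                  perror("rename gossip_store");
  }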
Rusty Russell 8928f0b5f9 gossipd: remove gossip entirely if we hit a problem on load.
The crashes in #2750 are mostly caused by us trying to partially truncate
the store.  The simplest fix for release is to discard the whole thing if
we detect a problem.

This is a workaround: it'd be far nicer to try to recover.

Fixes: #2750
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-21 22:03:35 +00:00
Rusty Russell 9bf0467967 pytest: fix test_gossip_store_load_no_channel_update
The store wasn't invalid due to a missing channel_update; it in fact
had a bad checksum caused by a cut & paste bug.  Fix that, and assert
we're not actually truncating.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-21 22:03:35 +00:00
Rusty Russell 47b5f2e837 gossipd: truncate gossip_store.tmp for compaction.
If something went wrong and there was an old one, we were
appending to it!

Reported-by: @SimonVrouwe
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-20 02:53:52 +00:00
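The fix is the classic open-flags change; a sketch:

  #include <fcntl.h>

  /* O_TRUNC discards any stale gossip_store.tmp left over from an
   * earlier failed compaction, instead of appending to it. */
  static int open_tmp_store(void)
  {
          return open("gossip_store.tmp",
                      O_WRONLY | O_CREAT | O_TRUNC, 0600);
  }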
Rusty Russell 5e3690b3c5 gossipd: delete channel_amount from the store when we delete channel_announcement.
Otherwise we slowly build up cruft: compaction simply moves them since
they're not deleted.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-15 10:52:05 +02:00
Rusty Russell 10c503b4b4 gossip_store: clean up a truncated store.
We might have channel_announcements which have no channel_update: normally
these don't get written into the store until there is one, but if the
store was truncated it can happen.  We then get upset on compaction, since
we don't have an in-memory representation of the channel_announcement.

Similarly, we leave the node_announcement pending until after that
channel_announcement, leading to a similar case.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-15 10:52:05 +02:00
Rusty Russell adc52b6ee8 pytest: add test for dangling channel_announcement/node_announcement after gossip_store.
This can happen if the store was truncated.

Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-15 10:52:05 +02:00
Rusty Russell a35ab51a06 pytest: gossip_store test for channel_amount truncated.
We pass, but this test should have been added a while ago with the
original code.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-15 10:52:05 +02:00
Rusty Russell 909f22f117 pytest: gossip_store test for node_announcement before update.
We pass, but this test should have been added a while ago with the fix.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-15 10:52:05 +02:00
Rusty Russell eb5cc47bdd gossipd: count deleted records correctly when loading gossip_store.
The result of an incorrect count was that we failed on the next compaction.

Fixes: #2743
Fixes: #2742
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-14 02:17:32 +00:00
Rusty Russell 12a523f7c5 pytest: add (xfail) test for store load miscount.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-14 02:17:32 +00:00
Rusty Russell 0d2a4830ed ccan: update to faster and correct crc32c implementation.
I decided to try a faster implementation, only to find our crc32c was
not correct!  Ouch.

I removed the crc32c functions from ccan/crc, and added a new crc32c
module which has the Mark Adler x86-64-optimized variants.

We bump gossip_store version again, since csums have changed.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-11 23:40:10 +00:00
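For reference, CRC-32C uses the reflected Castagnoli polynomial
0x82F63B78; the optimized x86-64 variants must agree with the plain
bit-at-a-time form:

  #include <stddef.h>
  #include <stdint.h>

  static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
  {
          const uint8_t *p = buf;

          crc = ~crc;
          while (len--) {
                  crc ^= *p++;
                  for (int k = 0; k < 8; k++)
                          crc = (crc >> 1)
                                ^ (0x82F63B78 & -(crc & 1));
          }
          return ~crc;
  }

A caller starts from zero, e.g. crc32c(0, msg, msglen).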
Rusty Russell 409368e058 pytest: move test_channel_drainage to test_pay.py
This is where payment tests should go.  Also mark it xfail for the moment,
and remove developer-only tag (propagating gossip is only 60 seconds, which
is OK).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-11 23:19:11 +00:00
Michael Schmoock 4a242edc1f test: drains a channel to crash the daemon 2019-06-11 23:19:11 +00:00
Rusty Russell db0a28501b gossip: bump version to remove lingering issues with master.
There were several gossip breakages in master; bumping the version means
all upgrades get a clean store (not just those upgrading from a stable
version).

Fixes: #2719
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-10 21:31:38 +02:00
Michael Schmoock 42d6bf564c test: fix flaky test_gossip_notices_close with wait_for_mempool 2019-06-10 11:11:48 +00:00
Rusty Russell 5161b79bfc gossipd/gossip_store: keep count of deleted entries, don't use bs->count.
We deliberately didn't count some records before, so that the two
counters could be compared.  Keeping an explicit count of deleted
entries is much simpler, and avoids reliance on bs.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
Rusty Russell 728bb4e662 common/gossip_store: handle timestamp filtering.
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself.  The rest of the semantics, however,
are fairly simple.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
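The filter itself is a small predicate; a sketch of the BOLT 7 range
test, written with subtraction to sidestep overflow:

  #include <stdbool.h>
  #include <stdint.h>

  /* A record passes the peer's gossip_timestamp_filter iff
   * first_timestamp <= ts < first_timestamp + timestamp_range. */
  static bool timestamp_in_filter(uint32_t ts, uint32_t first_timestamp,
                                  uint32_t timestamp_range)
  {
          return ts >= first_timestamp
                 && ts - first_timestamp < timestamp_range;
  }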
Rusty Russell 948490ec58 gossipd: add timestamp in gossip store header.
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).

This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.

This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
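A plausible shape for that header, as a sketch (field order and widths
are assumptions; on disk the fields are big-endian):

  #include <stdint.h>

  struct gossip_hdr {
          uint32_t len;        /* high bit flags a deleted record */
          uint32_t crc;        /* covers timestamp + message */
          uint32_t timestamp;  /* lets readers filter without parsing */
  };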
Rusty Russell 5591c0b5d8 gossipd: don't send gossip stream, let per-peer daemons read it themselves.
Keeping the uintmap which orders all the broadcastable messages is
expensive: 130MB for the million-channels project.  But now that we
delete obsolete entries from the store, we can have the per-peer
daemons simply read it sequentially and stream the gossip themselves.

This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.

We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
Rusty Russell df00f20e4a gossipd: erase old entries from the store, don't just append.
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted fields.

Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast.  Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
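Deletion then becomes a single in-place bit flip; a sketch assuming a
big-endian length word at the start of each record:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Set the high bit of the record's length word; the payload (and
   * its checksum) stay intact, so the store remains verifiable. */
  static int delete_record(int fd, off_t off)
  {
          uint32_t belen;

          if (pread(fd, &belen, sizeof(belen), off) != sizeof(belen))
                  return -1;
          belen = htonl(ntohl(belen) | 0x80000000U);
          if (pwrite(fd, &belen, sizeof(belen), off) != sizeof(belen))
                  return -1;
          return 0;
  }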
Rusty Russell 696dc6b597 gossipd: disable gossip_store upgrade.
We're about to bump the version again, and the code to upgrade it was
quite hairy (and buggy!).  It's not worthwhile for such a
poorly-tested path: instead, I will just add code to limit how much
incoming gossip we get, to avoid flooding when we upgrade.

I also use a modern gossip_store version in our test_gossip_store_load
test, instead of relying on the upgrade path.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-04 01:29:39 +00:00
Rusty Russell 21fe518513 gossip_store: fix 'bad node_announcement' by allowing node_announcement on un-updated channel.
When we first receive a channel_update, we write both the
channel_announcement and that channel_update to the store: we need
that first update so we can set the channel_announcement timestamp.

However, the channel_update can be replaced later.  This means we can
have a channel_announcement, a node_announcement which relies on it,
then the channel_update later.

So move the "this applies to a pending announcement" check lower, where
gossip_store can use it too.  Has a nice side-effect of avoiding
one lookup of the node id.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-03 11:04:25 -07:00
Rusty Russell 048a650a6b pytest: more comprehensive tests for test_gossip_store_compact.
First, we should have a channel_update so we actually do some compaction!
(Reported-by @SimonVrouwe).  But we should also handle the cases where:

1. A channel_announcement is *not* directly followed by a
   channel_update (happens when the channel_update is replaced).
2. A node_announcement predates a channel_update for the peer
   (again, can happen once a channel_update is replaced).
3. A local/private channel_creation is not directly followed by an
   update.

In addition, we might as well check that we can *load* such a store,
before compaction.

This checks the corner cases which occur in real gossip stores.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-03 11:04:25 -07:00
Rusty Russell 1147e65602 pytest: make test_gossip_notices_close more reliable.
It's possible that it hasn't got the node_announcement messages;
it will still list the nodes, however (the channel_announcement tells
it the nodes exist).  Check for the alias field instead.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-06-03 11:04:25 -07:00
Rusty Russell 6ee2cd8ce3 openingd: fix hangup when gossipd compacts.
My raspberry pi node hung up on my other node:
   lightning_openingd-... chan #1: Got bad message from gossipd: 0db1

This is because we didn't handle that message in one path.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-05-16 15:54:17 -04:00
Rusty Russell 7ede5aac31 gossip_store: change format so we store raw messages.
This saves some overhead, and gets us ready to give subdaemons direct
store access.  This is the first time we *upgrade* the gossip_store
rather than just discarding it.

The downside is that we need to add an extra message after each
channel_announcement, containing the channel capacity.

After:
  store_load_msec:28337-30288(28975+/-7.4e+02)
  vsz_kb:582304-582316(582306+/-4.8)
  store_rewrite_sec:11.240000-11.800000(11.55+/-0.21)
  listnodes_sec:1.800000-1.880000(1.84+/-0.028)
  listchannels_sec:22.690000-26.260000(23.878+/-1.3)
  routing_sec:2.280000-9.570000(6.842+/-2.8)
  peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)

Differences:
  -vsz_kb:582320
  +vsz_kb:582316
  -listnodes_sec:2.100000-2.170000(2.118+/-0.026)
  +listnodes_sec:1.800000-1.880000(1.84+/-0.028)
  -peer_write_all_sec:51.600000-52.550000(52.188+/-0.34)
  +peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-05-13 05:16:18 +00:00
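The extra message is internal to the store, never sent to peers; a
sketch of its payload (the type value and exact layout here are
assumptions):

  #include <stdint.h>

  #define WIRE_GOSSIP_STORE_CHANNEL_AMOUNT 4101 /* assumed value */

  /* Written immediately after each channel_announcement, since the
   * announcement itself does not carry the channel capacity. */
  struct gossip_store_channel_amount {
          uint16_t type;      /* WIRE_GOSSIP_STORE_CHANNEL_AMOUNT */
          uint64_t satoshis;  /* channel capacity */
  };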
Rusty Russell cccce75e56 patch refine-test_gossip_persistence.patch 2019-04-11 18:31:34 -07:00
Rusty Russell ec50ec6a71 gossipd: make gossip loading stats accurate.
They didn't count the header sizes when reporting bytes, which was
misleading.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2019-04-11 18:31:34 -07:00