Commit Graph

1275 Commits

Rusty Russell f083a699e2 gossipd: separate init and activate.
This means gossipd is live and we can tell it things, but it won't
receive incoming connections.  The split also means that the main daemon
continues (eg. loading peers from db) while gossipd is loading from the store,
potentially speeding startup.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-30 12:01:36 +02:00
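
A minimal sketch of the init/activate split described above, with invented names (not gossipd's real wire API): init loads state while the master keeps working, and only activate starts listening.

    #include <stdbool.h>

    struct gossipd {
            bool store_loaded;
            bool listening;
    };

    /* Phase 1: gossipd is live and can be told things... */
    static void gossipd_init(struct gossipd *g)
    {
            g->store_loaded = true;   /* load gossip from the store */
    }

    /* Phase 2: ...but only now does it accept incoming connections. */
    static void gossipd_activate(struct gossipd *g)
    {
            g->listening = true;      /* bind and listen */
    }
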
Christian Decker 61317859f8 master: Move the gossipd initialization after the other inits
If we start accepting peer connections before we have initialized some of the
other parts (mainly the chaintopology), we could end up asking for things that
aren't ready yet (the blockchain head, for example).

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-30 12:01:36 +02:00
Rusty Russell 91d149b990 lightningd: insert db statement checking in io_loop.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-27 16:20:35 +02:00
practicalswift abf510740d Force the use of the POSIX C locale for all commands and their subprocesses 2018-04-27 14:02:59 +02:00
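
One way to force the POSIX C locale for a command and everything it spawns, as the commit title describes; a standalone sketch, since the actual change may live in the build or test scripts.

    #include <locale.h>
    #include <stdlib.h>

    int main(void)
    {
            setlocale(LC_ALL, "C");   /* this process */
            setenv("LC_ALL", "C", 1); /* inherited by subprocesses */
            return 0;
    }
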
ZmnSCPxj 2e73317a39 invoice: Define specific error codes for duplicate label and preimage. 2018-04-26 11:42:17 +00:00
ZmnSCPxj d5a67ec87a chaintopology: Protect against underflow when computing first_blocknum.
Fixes: #1423

(Hopefully)

Reported-by: @NicolasDorier
2018-04-26 11:40:43 +00:00
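
The guard itself is simple; a hedged sketch of the idea (function name is illustrative): clamp instead of letting a uint32_t subtraction wrap.

    #include <stdint.h>

    static uint32_t first_blocknum(uint32_t height, uint32_t depth)
    {
            if (depth > height)
                    return 0;               /* would underflow: clamp */
            return height - depth;
    }
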
Rusty Russell 83e847575c gossipd: don't handle multiple connect requests, combine them in lightningd.
Christian points out that this is the pattern used elsewhere.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell 435e85a5b2 lightningd: move "tell gossipd peer is no longer important" to drop_to_chain.
Reported-by: @ZmnSCPxj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell 8a16963f22 channeld: get told when announce depth already reached.
If channeld dies for some reason (e.g. reconnect) and we hadn't yet announced
the channel, we can miss doing so.  This is unusual, because if lightningd
restarts it rearms the callback which gives us funding_locked, so it only
happens if channeld alone dies before sending the announcement message.

This problem applies to both the temporary announcement (for gossipd) and
the real one.  For the temporary one, simply re-send on startup, and
remove the error msg gossipd gives if it sees a second one.  For the
real one, we need a flag to tell us the depth is sufficient; the peer
will ignore re-sends anyway.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
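
A sketch of the flag mentioned above (field and helper names are invented): remember that the announcement depth was reached, so a restarted channeld can be told immediately, relying on the peer to ignore duplicate re-sends.

    #include <stdbool.h>

    struct channel_state {
            bool announce_depth_reached;    /* set once depth is sufficient */
    };

    static bool want_announce_resend(const struct channel_state *c)
    {
            /* Safe to (re-)send: the peer ignores duplicates anyway. */
            return c->announce_depth_reached;
    }
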
Rusty Russell e72e54f8d1 json_listpeers: use channel connected flag for JSON.
If a channel is active (ie. not onchaind) and has an owner, this should
be equivalent.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell bc4809aa85 gossipd: make sure master only ever sees one active connection.
When we get a reconnection, kill the current remote peer, and wait for the
master to tell us it's dead.  Then we hand it the new peer.

Previously, we would end up with gossipd holding multiple peers, and
the logging was really hard to interpret; I'm not completely convinced
that we did the right thing when one terminated, either.

Note that this now means we can have peers with neither ->local nor ->remote
populated, so we check that more carefully.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell be1f33b265 gossipd: have master explicitly tell us when peer is disconnected.
Currently we intuit it from the fd being closed, but that may happen out
of order with when the master thinks it's dead.

So now if the gossip fd closes we just ignore it, and we'll get a
notification from the master when the peer is disconnected.

The notification is slightly ugly in that we have to disable it for
a channel when we manually hand the channel back to gossipd.

Note: as stands, this is racy with reconnects.  See the next patch.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell 1e282ecb7a subd: record which ones connect to a peer.
This comes in useful for the next patch.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell ab9d9ef3b8 gossipd: drain fd instead of passing around gossip index.
(This was sitting in my gossip-enhancement patch queue, but it simplifies
this set too, so I moved it here).

In 94711969f we added an explicit gossip_index so when gossipd gets
peers back from other daemons, it knows what gossip it has sent (since
gossipd can send gossip after the other daemon is already complete).

This solution is insufficient for the more general case where gossipd
wants to send other messages reliably, so replace it with the other
solution: have gossipd drain the "gossip fd" which the daemon returns.

This turns out to be quite simple, and is probably how I should have
done it originally :(

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
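
A standalone illustration of what "draining the gossip fd" means: read everything the returning daemon queued until EOF, so gossipd knows exactly what was sent (this is not gossipd's actual non-blocking I/O loop).

    #include <unistd.h>

    static void drain_gossip_fd(int fd)
    {
            char buf[4096];

            while (read(fd, buf, sizeof(buf)) > 0)
                    ;       /* consume queued gossip until EOF */
            close(fd);
    }
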
Rusty Russell 9430a455ff closing: don't go into temporary failure because we completed negotiation.
It only lasts until the next block, but it's weird.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell 72c459dd6c gossipd: keep reaching struct only when we're actively connecting, and don't retry
1. The lifetime of 'struct reaching' is now limited to an active connect attempt.
2. Always free after a single attempt: if it's an important peer, retry
   on a timer.
3. Have a single response message to master, rather than relying on
   peer_connected on success and other msgs on failure.
4. If we are actively connecting and we get another command for the same
   id, just increment the counter.

The result is much simpler in the master daemon, and much nicer for
reconnection: if they say to connect they get an immediate response,
rather than waiting for 10 retries.  Even if it's an important peer,
it fires off another reconnect attempt, unless it's actively
connecting now.

This removes exponential backoff: that's restored in the next patch.  It
also doesn't handle multiple addresses for a single peer.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
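
A sketch of points 1, 2 and 4 above (types and names invented): one 'struct reaching' exists only during an active attempt, repeat requests just bump a counter, and important peers are retried from a timer instead of looping here.

    #include <stdbool.h>
    #include <stdint.h>

    struct reaching {
            uint32_t attempts;      /* connect commands seen while connecting */
    };

    static void try_reach(struct reaching *r, bool already_connecting)
    {
            if (already_connecting) {
                    r->attempts++;  /* don't start a second attempt */
                    return;
            }
            /* Start exactly one connect; on failure free the struct and,
             * for an important peer, arm a retry timer. */
    }
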
Rusty Russell a1f77cab3c lightningd: tell gossipd that peers we load from db are important.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell 8c2c1fe1c2 openingd: tell gossipd that the peer is important once funding tx in place.
And on channel_fail_permanent and closing (the two places we drop to
chain), we tell gossipd it's no longer important.

Fixes: #1316
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
Rusty Russell c9fa9817f6 gossipd: explicitly track which peers are important.
These don't have a maximum number of reconnect attempts, and ensure
that we try to reconnect when the peer dies.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-26 05:47:57 +00:00
ZmnSCPxj 079778e357 invoice: Check duplicate preimage when explicitly specified.
Reported-by: @mcudev
2018-04-26 05:47:09 +00:00
Christian Decker 96352858d6 chaintopology: Simplify rescan offset computation
Simplify the offset calculation to use the rescan parameter, and rename
`wallet_first_blocknum`. We now use either a relative rescan from our last
known location, or an absolute one if a negative rescan was given. It's all
handled in a single location (except the case in which the blockcount is below
our precomputed offset), so this should reduce surprises.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
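
A sketch of the rule as described (names are illustrative): a positive rescan is relative to our last known block, a negative one is an absolute height, and we clamp rather than underflow.

    #include <stdint.h>

    static uint32_t rescan_start(int32_t rescan, uint32_t last_known)
    {
            if (rescan < 0)
                    return (uint32_t)(-rescan);      /* absolute height */
            if ((uint32_t)rescan > last_known)
                    return 0;                        /* don't underflow */
            return last_known - (uint32_t)rescan;    /* relative rescan */
    }
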
Christian Decker 0f191f5d4f opts: Add the --rescan option
This is intended to recover from an inconsistent state involving
`onchaind`. Should we, for some reason, not restore the `onchaind` process
correctly, we can instruct `lightningd` to go back in time and just replay
everything.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
Christian Decker 4b22760cf9 onchaind: Replay stored channeltxs to restore onchaind state
Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
Christian Decker 244d4e49e1 onchaind: Store channeltxs so we can restore later
Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
Christian Decker f44ea9f32e channel: Allow channel lookup by database id
Since we reference the channel ID to allow cascades in the database, we also
need the ability to look up a channel by its database ID.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
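
The shape of such a lookup, as a minimal sketch (the real code walks lightningd's peer and channel lists; these types are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    struct channel {
            uint64_t dbid;          /* database id, used for cascades */
            struct channel *next;
    };

    static struct channel *channel_by_dbid(struct channel *head, uint64_t dbid)
    {
            for (struct channel *c = head; c; c = c->next)
                    if (c->dbid == dbid)
                            return c;
            return NULL;
    }
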
Christian Decker 5e505e9c53 onchaind: Add a level of indirection to txwatches and txowatches
This will allow us in the next commit to store the transactions that triggered
this event in the DB, thus allowing us to replay them later on.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
Christian Decker 4547afba33 onchaind: Move preimage transfer into onchaind startup
We used to queue the preimages to be sent to onchaind only after receiving the
onchaind_init_reply. Once we start replaying we might end up in a situation in
which we queue the tx that onchaind should react to before providing it with
the preimages. This commit just moves the sending of the preimages, making it
atomic with the init, and without changing the order.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 14:33:38 +02:00
Christian Decker c635396766 common: Moving some bech32 related utilities to bech32_util
These were so far only used for bolt11 construction, but we'll need them for the
DNS seed as well, so here we just pull them out into their own unit and prefix
them.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-25 12:34:55 +02:00
ZmnSCPxj eb42804fcc invoice: Support providing preimage when making invoice. 2018-04-24 11:54:02 +02:00
Rusty Russell 16d5015d56 lightningd: fix shutdown with unconfirmed channel.
We free the peers explicitly, but we don't free the unconfirmed channel:
the result is that it gets freed twice.

The workaround is to free the unconfirmed channel explicitly, but really
the peer should be tal_link'ed as it's basically a reference counted
structure.

1.974911451 lightningd(17906):INFO: 03b4bca72572889d4b44cd0f194f73d54972af367e1917579283122ee10fa05f54 chan #1: Owning subdaemon lightning_openingd died (62464)
1.980118094 lightningd(17906):BROKEN: FATAL SIGNAL 6
1.980150447 lightningd(17906):BROKEN: backtrace: common/daemon.c:42 (crashdump) 0x432ba0
1.980161268 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb18ff4af
1.980167045 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb18ff428
1.980171271 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb1901029
1.980175847 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:98 (call_error) 0x47543e
1.980181814 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:170 (check_bounds) 0x4755fb
1.980188065 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:180 (to_tal_hdr) 0x475649
1.980193756 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:504 (tal_free) 0x47600d
1.980199402 lightningd(17906):BROKEN: backtrace: lightningd/peer_control.c:118 (delete_peer) 0x423990
1.980205498 lightningd(17906):BROKEN: backtrace: lightningd/opening_control.c:574 (destroy_uncommitted_channel) 0x419df3
1.980212380 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:240 (notify) 0x4757b0
1.980218052 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:400 (del_tree) 0x475c61
1.980223398 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:511 (tal_free) 0x476093
1.980229174 lightningd(17906):BROKEN: backtrace: lightningd/opening_control.c:549 (opening_channel_errmsg) 0x419d1a
1.980236227 lightningd(17906):BROKEN: backtrace: lightningd/subd.c:590 (destroy_subd) 0x42cf43
1.980242348 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:240 (notify) 0x4757b0
1.980247771 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:400 (del_tree) 0x475c61
1.980252814 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:410 (del_tree) 0x475cb1
1.980258356 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:410 (del_tree) 0x475cb1
1.980263311 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:511 (tal_free) 0x476093
1.980269189 lightningd(17906):BROKEN: backtrace: lightningd/lightningd.c:412 (main) 0x4144ed

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
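
The property tal_link'ing would give, shown as a plain refcount sketch (illustrative only, not ccan/tal's API): the peer is really freed only when the last holder lets go, so two owners can't double-free it.

    #include <stdlib.h>

    struct refcounted_peer {
            int refs;
            /* ...peer fields... */
    };

    static void peer_put(struct refcounted_peer *p)
    {
            if (--p->refs == 0)
                    free(p);        /* freed exactly once, by the last holder */
    }
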
Rusty Russell d2b4e09e27 lightningd: re-allow closing negotiation when CLOSINGD_COMPLETE
d822ba1ee accidentally removed this case, which is important: if the
other side didn't get our final matching closing_signed, it will
reconnect and try again.  We consider the channel no longer "active"
and thus ignore it, and get upset when it sends the
`channel_reestablish` message.

We could just consider CLOSINGD_COMPLETE to be active, but then we'd
have to wait for the closing transaction to be mined before we'd allow
another connection.

We can't special case it when the peer reconnects, because there
could be (in theory) multiple channels for that peer in CLOSINGD_COMPLETE,
and we don't know which one to reestablish.

So, we need to catch this when they send the reestablish, and hand
that msg to closingd to do negotiation again.  We already have code
to note that we're in CLOSINGD_COMPLETE and thus ignore any result
it gives us.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell 5551c161ca gossipd: finish startup before master prints that it's ready.
We're about to remove automatic retrying of connect, and that uncovered
that we actually print out our "Server started" message before we create
the listening socket.

Move the init higher (outside the db transaction) and make it a
request/response, then loop until it's done.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell 8e976150ad json_fundchannel: fix release vs connect/nongossip race.
The new connect code revealed an existing race: we tell gossipd to
release the peer, but at the same time it connects in.  gossipd fails
the release because the peer is remote, and json_fundchannel fails.

Instead, we catch this race when we get peer_connected() and we were
trying to open a channel.  It means keeping a list of fundchannels which
are awaiting a gossipd response though.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell bee795ed68 channeld: don't do explicit state update.
We missed it in some corner cases where we crashed/were killed between
being told of the lockin and sending the channel_normal_operation message.
When we were restarted, we were told both sides were locked in already,
so we never updated the state.

Pull the entire "tell channeld" logic into channel_control.c, and make
it clear that we need to keep watching if we can't tell channeld.  I think
we did get this correct in practice, since funding_announce_cb has the
same test, but it's better to be clear.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell 22fe2c921f lightningd: commit short-channel-id to db when we create it.
We'd usually commit to the db soon, but there's a window where it
could be missed.

Also moves loc into the block where it's used and makes it tmpctx to avoid
an explicit free.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell 7604f27fb8 lightningd: make sure openingd and uncommitted_channel free each other.
Without this, we can get errors on shutdown:

Valgrind error file: valgrind-errors.27444
==27444== Invalid read of size 8
==27444==    at 0x1950E2: secp256k1_pubkey_load (secp256k1.c:127)
==27444==    by 0x19CF87: secp256k1_ec_pubkey_serialize (secp256k1.c:189)
==27444==    by 0x14FED9: towire_pubkey (towire.c:59)
==27444==    by 0x15AAFB: towire_gossipctl_peer_disconnected (gen_gossip_wire.c:969)
==27444==    by 0x1253EF: opening_channel_errmsg (opening_control.c:526)
==27444==    by 0x1386A3: destroy_subd (subd.c:589)
==27444==    by 0x18222C: notify (tal.c:240)
==27444==    by 0x1826E1: del_tree (tal.c:400)
==27444==    by 0x182733: del_tree (tal.c:410)
==27444==    by 0x182733: del_tree (tal.c:410)
==27444==    by 0x182B1F: tal_free (tal.c:511)
==27444==    by 0x11FC53: main (lightningd.c:410)
==27444==  Address 0x6c3af98 is 72 bytes inside a block of size 216 free'd
==27444==    at 0x4C30D3B: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==27444==    by 0x1827BC: del_tree (tal.c:421)
==27444==    by 0x182B1F: tal_free (tal.c:511)
==27444==    by 0x11F3C7: shutdown_subdaemons (lightningd.c:211)
==27444==    by 0x11FC27: main (lightningd.c:406)
==27444==  Block was alloc'd at
==27444==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==27444==    by 0x182296: allocate (tal.c:250)
==27444==    by 0x182863: tal_alloc_ (tal.c:448)
==27444==    by 0x12F2DF: new_peer (peer_control.c:74)
==27444==    by 0x125600: new_uncommitted_channel (opening_control.c:576)
==27444==    by 0x125870: peer_accept_channel (opening_control.c:668)
==27444==    by 0x13032A: peer_sent_nongossip (peer_control.c:427)
==27444==    by 0x116B9E: peer_nongossip (gossip_control.c:60)
==27444==    by 0x116F2B: gossip_msg (gossip_control.c:172)
==27444==    by 0x138323: sd_msg_read (subd.c:503)
==27444==    by 0x137C02: read_fds (subd.c:330)
==27444==    by 0x175550: next_plan (io.c:59)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
Rusty Russell 05ba976a41 lightningd: --dev-no-reconnect needs to always suppress reconnection.
It didn't in the restore-from-db case.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-23 20:18:15 +00:00
ZmnSCPxj 2cee1ab20f peer_control: Make close wait for complete closure, with timeout.
Also report tx and txid, and whether we closed unilaterally or
bilaterally, if we could close the channel.

Also make a manpage.

Fixes: #1207
Fixes: #714
Fixes: #622
2018-04-23 05:24:46 +00:00
conanoc 7170521895 change spaces to tabs, align function parameters 2018-04-21 15:55:00 +02:00
conanoc 0733770559 Adjust indents 2018-04-21 15:55:00 +02:00
ZmnSCPxj 774af5f817 payalgo: Describe `maxdelay` argument of `pay`. 2018-04-17 17:29:36 +02:00
ZmnSCPxj, ZmnSCPxj jxPCSmnZ 11ca729d85 wallet, payalgo: Save detail of payment failures for later reporting. (#1345)
Pointless for remote failures as those are never sent by
the erring node, but for local failures we can give more
detail.
2018-04-16 15:29:40 +02:00
conanoc b2f7e9af4a Support debugging with lldb
Running under lldb causes a SIGINT, which makes waitpid() return an
error with errno set to EINTR. This patch retries waitpid() to ignore
EINTR errors.
2018-04-15 17:42:24 +02:00
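
The standard EINTR-retry idiom the commit describes, as a standalone sketch:

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static pid_t waitpid_retry(pid_t pid, int *status, int options)
    {
            pid_t ret;

            do {
                    ret = waitpid(pid, status, options);
            } while (ret < 0 && errno == EINTR);
            return ret;
    }
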
Rusty Russell 7ca4422d7d closing_control: always prefer lower fee, not closest to ideal.
We had an intermittent test failure, where the fee we negotiated was
further from our ideal than the final commitment transaction.  It worked
fine if the other side sent the mutual close first, but not if we sent
our unilateral close first.

ERROR: test_closing_different_fees (__main__.LightningDTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests/test_lightningd.py", line 1319, in test_closing_different_fees
    wait_for(lambda: p.rpc.listpeers(l1.info['id'])['peers'][0]['channels'][0]['status'][1] == 'ONCHAIN:Tracking mutual close transaction')
  File "tests/test_lightningd.py", line 74, in wait_for
    raise ValueError("Error waiting for {}", success)
ValueError: ('Error waiting for {}', <function LightningDTests.test_closing_different_fees.<locals>.<lambda> at 0x7f4b43e31a60>)

Really, if we're prepared to negotiate it, we should be prepared to
accept it ourselves.  Simply take the cheapest tx which is above our
minimum.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-15 15:32:14 +02:00
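
A sketch of "take the cheapest tx which is above our minimum" (illustrative, not closingd's actual negotiation code): an offer below our minimum is rejected, and otherwise the lower fee wins rather than the one closest to our ideal.

    #include <stdint.h>

    static uint64_t pick_fee(uint64_t offered, uint64_t ours, uint64_t min_fee)
    {
            if (offered < min_fee)
                    return ours;                    /* not acceptable */
            return offered < ours ? offered : ours; /* lower fee wins */
    }
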
Christian Decker f27cd3e43f topo: Remove in-memory txs from the block struct
The only use for these was to compute their txids so we could notify depth
in case of reorgs.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-13 00:04:37 +02:00
Christian Decker 23984ecde4 chaintopology: Use the DB to locate transactions and rebroadcast txs
Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-13 00:04:37 +02:00
Christian Decker 86b6402e5c chaintopology: Refactor get_tx_depth to use the DB backed tx store
We are slowly hollowing out the in-memory blockchain representation to make
restarts easier.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-13 00:04:37 +02:00
Christian Decker aa696370af txwatch: Switch to passing only txid into the depth callbacks
All of the callback functions were only using the tx to generate the txid again,
so we just pass that in directly and save passing the tx itself.

This is a simplification to move to the DB backed depth callbacks. It'd be
rather wasteful to read the rawtx and deserialize just to serialize right away
again to find the txid, when we already searched the DB for exactly that txid.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-13 00:04:37 +02:00
Christian Decker 50600ae241 wallet: Store transactions we are watching, broadcast or own
This will later allow us to determine the transaction confirmation count, and
recover transactions for rebroadcasts.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
2018-04-13 00:04:37 +02:00
Rusty Russell b0c2e3cd5c gossipd: use a separate CSV file for the gossip_store types.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2018-04-11 15:58:18 +02:00