Commit Graph

2321 Commits

Author SHA1 Message Date
Rusty Russell 66d8ce7c8c pytest: clarify test_onchain_multihtlc_our_unilateral / test_onchain_multihtlc_their_unilateral
Use the names, not calculations on an array.  It's simply clearer,
especially when debugging.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-08-04 11:16:01 -05:00
niftynei 4e503f7d0a bkpr/listpeers: add lease_fees back to funds; separate out in listpeers
First, how we record "our_funds" and then apply pushes vs lease_fees
(for liquidity ad buys/sales) was exactly opposite.

For pushes we were reporting the total funded into the channel, with the
push representing how much we'd later move to the peer.

For lease_fees we were reporting the total in the channel, with the
push representing how much was already moved to the peer.

We fix this (from a view perspective) by re-adding lease fees to what's
reported in the channel funding totals. Since this is now new behavior
(for leased channel values), we added new fields so we can take the old
field names through a deprecation cycle.

We also make it possible to differentiate between a push and a lease_fee
(before they were all the same), by adding two new fields to `listpeers`:
`fee_paid_msat` and `fee_rcvd_msat`.

This allows us to avoid math in the bookkeeper; instead we just pick
the numbers out directly and record them.

Fixes #5472

Changelog-Added: JSON-RPC: `listpeers` now has a few new fields for `funding` (`remote_funds_msat`, `local_funds_msat`, `fee_paid_msat`, `fee_rcvd_msat`).
Changelog-Deprecated: JSON-RPC: `listpeers`.`funded` fields `local_msat` and `remote_msat` are now deprecated.
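
For illustration, a hypothetical snippet showing how a consumer might read the new fields (the values and the helper variable are invented for this example, not the bookkeeper's actual code):

```
funding = {
    'local_funds_msat': 20_000_000,   # our contribution, lease fee re-added
    'remote_funds_msat': 0,
    'fee_paid_msat': 2_000,           # lease fee we paid (distinct from a push)
    'fee_rcvd_msat': 0,
}
# No arithmetic needed: the push/lease split is read straight from the fields.
total_funding_msat = funding['local_funds_msat'] + funding['remote_funds_msat']
```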
2022-07-31 21:53:05 +09:30
Rusty Russell 8c9fa457ba pytest: fix flake in test_gossip_timestamp_filter
```
        # 0x0100 = channel_announcement
        # 0x0102 = channel_update
        # (Node announcement may have any timestamp)
        types = Counter([m[0:4] for m in msgs])
        assert types['0100'] == 1
>       assert types['0102'] == 2
E       assert 1 == 2

tests/test_gossip.py:324: AssertionError
```

Examining the logs shows that we ask l4 for timestamps
"first_timestamp=1658892115 timestamp_range=13", and the timestamp on
the missing update is exactly `1658892128`.

So round the end time up by 1 for filtering.
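
A minimal sketch of the adjustment, assuming the filter covers `[first_timestamp, first_timestamp + timestamp_range)` as the log above suggests:

```
def filter_range(first_timestamp, last_timestamp):
    """Round the end up by one second so an update stamped exactly at the
    end (1658892128 in the log above) still falls inside the filter."""
    return last_timestamp - first_timestamp + 1
```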

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-28 15:08:44 +09:30
Rusty Russell 22ff007d64 connectd: control connect backoff from lightningd.
We used to tell connectd to remember our connect delay, and hand it
back (increased if necessary).

Instead, simply record when we last tried to connect.  If it was less
than 10 minutes ago, double delay (up to 5 minutes max), otherwise
reset delay to 1 second.

This covers all scenarios: whether we reconnect then immediately
disconnect, or never successfully connect, it doesn't matter.
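
A minimal Python sketch of that scheme (the real logic lives in lightningd's C code; the names and constants below are illustrative):

```
import time

RECONNECT_MIN = 1          # seconds
RECONNECT_MAX = 5 * 60     # cap the doubled delay at 5 minutes
FORGET_AFTER = 10 * 60     # attempts more than 10 minutes apart start fresh

def next_connect_delay(last_attempt, last_delay, now=None):
    """Double the delay if we tried recently, otherwise reset to 1 second."""
    if now is None:
        now = time.time()
    if now - last_attempt < FORGET_AFTER:
        return min(last_delay * 2, RECONNECT_MAX)
    return RECONNECT_MIN
```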

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5453
2022-07-28 15:08:44 +09:30
Rusty Russell 9cad7d6a6a lightningd: don't consider AWAITING_UNILATERAL to be "active".
It's not active: we don't want to connect.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-28 15:08:44 +09:30
Rusty Russell a259698906 pytest: test that we don't try to reconnect in AWAITING_UNILATERAL.
We would just send an error, and it's annoying.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-28 15:08:44 +09:30
niftynei e048292fdf bkpr-zeroconf: Zeroconfs will emit 'channel_proposed' event
Keep the accounts as an 'append only' log; instead we move the marker
for the 'channel_open' forward when a 'channel_open' event comes out.

We also neatly hide the 'channel_proposed' events in 'inspect' if
there's a 'channel_open' for that same event.

If you call inspect before the 'channel_open' is confirmed, you'll see
the tag as 'channel_proposed', afterwards it shows up as
'channel_open'. However, the event log rolls forward -- listaccountevents
will show the correct history of the proposal then open confirming (plus
any routing that happened before the channel confirmed).
2022-07-28 12:08:18 +09:30
niftynei 30aa1d79fb bkpr: for zeroconfs, we still wanna know you're opening a channel
We need a record of the channel account before you start sending
payments through it. Normally we don't start allowing payments to be
sent until after the channel has locked in, but zeroconf does away with
this assumption.

Instead we push out a "channel_proposed" event, which should only show
up for zeroconfs.
2022-07-28 12:08:18 +09:30
niftynei 3c79a456c0 test-db-provider: if postgres in tests, startup a bookkeeper db
FIXME: Has an edge case where, if you disable the bookkeeper, it'll
blow up because you've got an option that isn't present anywhere...
2022-07-28 12:08:18 +09:30
niftynei e5d3ce3b1f bkpr incomestmt: properly escape things for the CSVs
First off, when we pull data out of JSON, unescape it so we don't end up
with extraneous escapes in our bookkeeping data. I promise, it's worth
it.

Then, when we print descriptions out to the CSVs, we have to wrap
everything in quotes... but we also have to change all the double-quotes
to singles so that adding the quotes doesn't do anything untoward.

We also just pass it through json_escape to get rid of linebreaks etc.
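
A rough Python equivalent of that escaping, as a sketch only (the bookkeeper itself is C, and the json_escape pass is omitted here):

```
def csv_safe(description):
    """Swap double quotes for singles, then wrap the whole field in quotes
    so commas and quotes inside the description can't break the CSV row."""
    return '"' + description.replace('"', "'") + '"'
```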

Note that in the tests we do a byte comparison instead of converting the
CSV dumps to strings because python will escape the strings on
conversion...
2022-07-28 12:08:18 +09:30
niftynei 5146baa00b bkpr csvs: koinly + cointracker only accept fees on the same line
So we print out invoice fees on the same line for those CSVs! This means
we have to do a little bit of gymnastics (but not too bad):
	- we save the fee amount onto the income event now so we can use
it later
	- we ignore every "invoice_fee" event for the koinly/cointracker

Note that since we're now skipping income events in the loops, we also
move the newline character to the start of every `_entry` function so
skipped records don't incur empty lines.

Changelog-Added: bkpr: print out invoice fees on the same line for `koinly` and `cointracker` csv types
2022-07-28 12:08:18 +09:30
niftynei 352b419755 bkpr: save invoice description data to the database and display it
It'll be really nice to be able to read description data about an
invoice, if we've got it!
2022-07-28 12:08:18 +09:30
niftynei ab94c557c7 bkpr: add test for bookkeeper being added after channel has closed
We rescan and pick up the channel's close tx, but can't go back and
get the open info, since we've already deleted the channel from the
database.
2022-07-28 12:08:18 +09:30
niftynei 0617690981 coin_mvt/bkpr: add "stealable" tag to stealable outputs
If we expect further events for an onchain output (because we can steal
it away from the 'external'/rightful owner), we mark it.

This prevents us from marking a channel as 'onchain-resolved' before
all events that we're interested in have actually hit the chain.

Case that this matters:
Peer publishes a (cheating) unilateral close and a timeout htlc (which
we can steal).
We then steal the timeout htlc.

W/o the stealable flag, we'd have marked the channel as resolved when
the peer published the timeout htlc, which is incorrect as we're still
waiting for the resolution of that timeout htlc (b/c we *can* steal it).
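
A sketch of just the extra condition this adds, using a hypothetical event shape (not the bookkeeper's real data model):

```
def no_stealable_outputs_pending(chain_events):
    """Only consider the channel onchain-resolved once nothing we could
    still steal is waiting for a further on-chain event."""
    return not any('stealable' in ev.get('tags', []) for ev in chain_events)
```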
2022-07-28 12:08:18 +09:30
niftynei c1cef773ca bkpr: make sure there's always at least one difference in blockheights
We can't divide by zero, so make sure it's always at least 1 (with a small
loss of precision for other cases, c'est la vie)
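
The guard, as a tiny illustrative helper:

```
def blocks_between(start_height, end_height):
    """Never return zero, so later divisions are safe (at a small cost in
    precision when the heights are equal)."""
    return max(1, end_height - start_height)
```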
2022-07-28 12:08:18 +09:30
niftynei d72033882f bkpr: check for channel resolution for any "originated" event
We were failing to mark channels as resolved because we weren't using later
events to the 'external' account (but originating from this account) as
signals to run the channel resolution check.

This fixes that, and adds a test.
2022-07-28 12:08:18 +09:30
niftynei 563910e667 bkpr: add docs, change names to 'bkpr-*'
Adds schema definitions and manpages for bkpr- commands; also renames
the commands to all start with 'bkpr-', so they're easier to
identify/make runes about.
2022-07-28 12:08:18 +09:30
niftynei e2ef44c043 bkpr: add 'msat' suffix to all msat denominated fields
Makes our json schema parsing work as expected.
2022-07-28 12:08:18 +09:30
niftynei a3d82d5a01 bkpr: exclude non-wallet events in the balance snapshot
Anchor outputs are ignored by the clightning wallet, but we keep track
of them in the bookkeeper. This causes problems when we do the balance
checks on restart w/ the balance_snapshot -- it results in us printing
out a journal_entry to 'get rid of' the anchors that the clightning node
doesn't know about.

Instead, we mark some outputs as 'ignored' and exclude these from our
account balance sums when we're comparing to the clightning snapshot.
2022-07-28 12:08:18 +09:30
niftynei eae1236db7 tests,bkpr: liquid fails all these for different reasons
- external wallet not supported yet for elements
- the close fails to propagate b/c the outputs are dusty (FIXME)
2022-07-28 12:08:18 +09:30
niftynei e7ed196f87 bkpr: separate the invoice_fees from the invoice paid
For income events, break out the amount paid in routing fees vs the
total amount of the *invoice* that is paid.

Also print out these fees, when available, on listaccountevents
2022-07-28 12:08:18 +09:30
niftynei 4326c08927 test nit: wait_for_mempool cleanup
Wait for mempool=1 before making a block
2022-07-28 12:08:18 +09:30
niftynei a7b7ea5d49 bkpr: add a 'consolidate-fees' flag to the income stmt
Defaults to true. This is actually what you want to see -- the fees for
an account's txid collapsed into a single entry.
2022-07-28 12:08:18 +09:30
niftynei 83c6cf25d2 bkpr: 'to_miner' spends are considered terminal
This is a rare case where we RBF the output of a penalty until it no
longer has an output value we can reclaim. We ignore the txid for these
events when closing a channel.
2022-07-28 12:08:18 +09:30
niftynei 1dd52ba003 bkpr: mark external deposits (withdraws) via blockheight when confirmed
We issue events for external deposits (withdrawals) before the tx is
confirmed in a block. To avoid double counting these, we don't count
them as confirmed/included until after they're confirmed. We do this
by keeping the blockheight as zero until the withdraw for the input for
them comes through.

Note that since we don't have any way to note when RBF'd withdraws
aren't eligible for block inclusion anymore, we don't really have a good
heuristic to trim them. Which is fine; they *will* show up in account
events, however.
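
A sketch of that rule over a hypothetical in-memory event list (the real events live in the bookkeeper's database):

```
def confirm_external_deposits(events, txid, blockheight):
    """External deposit events start at blockheight 0; fill it in once the
    withdrawal spending our input confirms at `blockheight`."""
    for ev in events:
        if ev['txid'] == txid and ev['blockheight'] == 0:
            ev['blockheight'] = blockheight
```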
2022-07-28 12:08:18 +09:30
niftynei 25f0c76c9a tests: move 'bookkeeper' centric tests to their own file
Should we pull these out into a separate test suite?
2022-07-28 12:08:18 +09:30
niftynei 5f41d9247e bkpr: properly account for onchain fees for channel closes
onchain fees are weird at channel close because:

- you may be missing a trimmed htlc (which went to fees)
- the balance from close may have been rounded (msats can't land on
chain)
- the close might have been a past state and you've actually
  ended up with more money onchain than you had in the channel. wut

This commit accounts for all of this appropriately, with some tests.

channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).

In the penalty case, channel_close.debit becomes channel_close.debit +
penalty_adj.debit, i.e.

	channel-close.debit + (penalty_adj.debit) =
		onchain_fee.credit
	      + sum(chain_event.credit [wallet/channel_acct])
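
The same identity, restated as a small check (the field names come from the message above; the function itself is only illustrative):

```
def close_accounting_balances(channel_close_debit, penalty_adj_debit,
                              onchain_fee_credit, chain_event_credits):
    """channel_close.debit (+ penalty_adj.debit, if any) should equal
    onchain_fee.credit plus the wallet/channel account chain_event credits."""
    return (channel_close_debit + penalty_adj_debit
            == onchain_fee_credit + sum(chain_event_credits))
```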
2022-07-28 12:08:18 +09:30
Rusty Russell 1c26ebdb31 pytest: fix flake in test_wumbo_channels
We might only have seen one side of the channel, as shown below.  Wait
for both:

```
_____________________________ test_wumbo_channels ______________________________
[gw2] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python3

node_factory = <pyln.testing.utils.NodeFactory object at 0x7f5d51743b10>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f5d51699d10>

    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_wumbo_channels(node_factory, bitcoind):
        l1, l2, l3 = node_factory.get_nodes(3,
                                            opts=[{'large-channels': None},
                                                  {'large-channels': None},
                                                  {}])
        conn = l1.rpc.connect(l2.info['id'], 'localhost', port=l2.port)
    
        expected_features = expected_peer_features(wumbo_channels=True)
        if l1.config('experimental-dual-fund'):
            expected_features = expected_peer_features(wumbo_channels=True,
                                                       extra=[21, 29])
    
        assert conn['features'] == expected_features
        assert only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['features'] == expected_features
    
        # Now, can we open a giant channel?
        l1.fundwallet(1 << 26)
        l1.rpc.fundchannel(l2.info['id'], 1 << 24)
    
        # Get that mined, and announced.
        bitcoind.generate_block(6, wait_for_mempool=1)
    
        # Connect l3, get gossip.
        l3.rpc.connect(l1.info['id'], 'localhost', port=l1.port)
        wait_for(lambda: len(l3.rpc.listnodes(l1.info['id'])['nodes']) == 1)
        wait_for(lambda: 'features' in only_one(l3.rpc.listnodes(l1.info['id'])['nodes']))
    
        # Make sure channel capacity is what we expected.
>       assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
                == [Millisatoshi(str(1 << 24) + "sat")] * 2)
E       assert [16777216000msat] == [16777216000m...777216000msat]
E         Right contains one more item: 16777216000msat
E         Full diff:
E         - [16777216000msat, 16777216000msat]
E         + [16777216000msat]
```
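
A minimal sketch of the fix, reusing the `wait_for` helper from pyln-testing that already appears in these tests:

```
from pyln.testing.utils import wait_for

def wait_for_both_directions(node, expected=2):
    """Block until gossip shows both channel_update directions before
    asserting on channel capacities."""
    wait_for(lambda: len(node.rpc.listchannels()['channels']) == expected)
```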

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-27 19:31:04 +09:30
niftynei 2c2bcc8eb4 flake: permit test_v2_open_sigs_restart_while_dead to succeed/fail
There's a race btw disconnecting and returning a successful RPC call for
openchannel_signed; if we disconnect quickly, we get an RPC error back.

If we're too slow, it returns without an error.

This needs to be cleaned up as a whole, work that I'm planning to get
into as part of a funder re-write. For now, let's just let the test
continue whether this call succeeds or fails.
2022-07-27 19:31:04 +09:30
niftynei f4abc3a661 tests: local flake fix; l1 was waiting too long to reconnect
Impacts local tests, when TIMEOUT is set low...
2022-07-26 15:11:30 -07:00
Rusty Russell 17b9bd5ca3 pytest: fix test_commando_rune flake.
We reset counters every minute, so ratelimit tests can flake since we
might hit that boundary.

Instead, wait for the reset then test explicitly, assuming that takes
less than 60 seconds.

```
        for rune, cmd, params in failures:
            print("{} {}".format(cmd, params))
            with pytest.raises(RpcError, match='Not authorized:') as exc_info:
                l2.rpc.call(method='commando',
                            payload={'peer_id': l1.info['id'],
                                     'rune': rune['rune'],
                                     'method': cmd,
>                                    'params': params})
E               Failed: DID NOT RAISE <class 'pyln.client.lightning.RpcError'>
...
DEBUG:root:Calling commando with payload {'peer_id': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'rune': 'O8Zr-ULTBKO3_pKYz0QKE9xYl1vQ4Xx9PtlHuist9Rk9NCZwbnVtPTAmcmF0ZT0zJnJhdGU9MQ==', 'method': 'getinfo', 'params': {}}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:48:56 -07:00
Rusty Russell 85180dbfee pytest: fix flake in test_feerates
As the comment in set_feerates says: "Technically, this waits until
it's called, not until it's processed."

And the wait_for() line doesn't work, since that condition is already true.

```
    @unittest.skipIf(TEST_NETWORK == 'liquid-regtest', "Fees on elements are different")
    @unittest.skipIf(
        not DEVELOPER or DEPRECATED_APIS, "Without DEVELOPER=1 we snap to "
        "FEERATE_FLOOR on testnets, and we test the new API."
    )
    def test_feerates(node_factory):
        l1 = node_factory.get_node(options={'log-level': 'io',
                                            'dev-no-fake-fees': True}, start=False)
        l1.daemon.rpcproxy.mock_rpc('estimatesmartfee', {
            'error': {"errors": ["Insufficient data or no feerate found"], "blocks": 0}
        })
        l1.start()
    
        # All estimation types
        types = ["opening", "mutual_close", "unilateral_close", "delayed_to_us",
                 "htlc_resolution", "penalty"]
    
        # Try parsing the feerates, won't work because can't estimate
        for t in types:
            with pytest.raises(RpcError, match=r'Cannot estimate fees'):
                feerate = l1.rpc.parsefeerate(t)
    
        # Query feerates (shouldn't give any!)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
        assert feerates['perkw']['max_acceptable'] == 2**32 - 1
        assert feerates['perkw']['min_acceptable'] == 253
        for t in types:
            assert t not in feerates['perkw']
    
        wait_for(lambda: len(l1.rpc.feerates('perkb')['perkb']) == 2)
        feerates = l1.rpc.feerates('perkb')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkw' not in feerates
        assert feerates['perkb']['max_acceptable'] == (2**32 - 1)
        assert feerates['perkb']['min_acceptable'] == 253 * 4
        for t in types:
            assert t not in feerates['perkb']
    
        # Now try setting them, one at a time.
        # Set CONSERVATIVE/2 feerate, for max
        l1.set_feerates((15000, 0, 0, 0), True)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
>       assert feerates['perkw']['max_acceptable'] == 15000 * 10
E       assert 4294967295 == (15000 * 10)

tests/test_misc.py:1392: AssertionError
```
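
One way to avoid that race is to wait for the value that was just set rather than for a length condition that may already hold; a sketch using the same `wait_for` helper:

```
from pyln.testing.utils import wait_for

def wait_for_feerate(node, name, perkw):
    """Wait until the freshly-set estimate is actually visible."""
    wait_for(lambda: node.rpc.feerates('perkw')['perkw'].get(name) == perkw)
```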

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:48:56 -07:00
Rusty Russell 0a9a87ec10 pytest: fix test_commando_stress
On fast machines, we sometimes don't get failures on commando commands.

(*But* we still got "New cmd replacing old" messages, which is how I
realized we weren't freeing them promptly, hence the previous fix).

```
        # Should have exactly one discard msg from each discard
>       nodes[0].daemon.wait_for_logs([r"New cmd from .*, replacing old"] * discards)

tests/test_plugin.py:2839: 
...
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[]" in logs.

```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00
Rusty Russell 8da361b49b pytest: fix flake in test_channel_persistence w/ TEST_CHECK_DBSTMTS
This was weird.  Here is the message (with \n turned into real new lines):

```
2022-07-24T07:20:08.9144998Z Plugin '/home/runner/work/lightning/lightning/tests/plugins/dblog.py' returned an invalid response to the db_write hook: {"jsonrpc": "2.0", "id": 40, "error": {"code": -32600, "message": "Error while processing db_write: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos", "traceback": "Traceback (most recent call last):
  File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 631, in _dispatch_request
    result = self._exec_func(method.func, request)
  File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 616, in _exec_func
    return func(*ba.args, **ba.kwargs)
  File \"/home/runner/work/lightning/lightning/tests/plugins/dblog.py\", line 45, in db_write
    plugin.conn.execute(c)
sqlite3.IntegrityError: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos
"}}
```

Finally, I realized that we *kill* l2: this means it has updated the
plugin db but not the real db.  This is expected: a real backup plugin
would handle this case.

Simply disable the test for this case.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00
Rusty Russell 1480257644 pytest: set dblog-file when adding the dblog plugin (TEST_CHECK_DBSTMTS=1)
As we'll see in the next patch, this wasn't *supposed* to work without dblog-file,
but it did, creating a dblog called "null".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00
Rusty Russell da4e33cd0d decode: fix crash when decoding invalid rune.
If the rune contains invalid UTF-8, offers (which implements decode) would
produce JSON with invalid UTF-8, which causes lightningd to complain
and kill it, and then die itself because it's an important plugin.

So don't decode invalid UTF-8!

Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-25 15:14:01 -07:00
Rusty Russell 8f6afedafe fuzz: fix fuzzing compilation.
It had bitrotted.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-25 08:08:32 -07:00
niftynei 7bbfef5054 tests: flake fix; l1 was waiting too long to reconnect
We were waiting too long for the reconnect to happen (60s default),
which caused this test to time out.

When testing, let's speed up the reconnect.

L2 tried to reconnect but didn't have connection information in its
gossip -- is there a way to ask/save connection data from a node you're
making a channel with that doesn't rely on their node_announcement?
2022-07-25 16:28:09 +09:30
niftynei 4cc0da7432 nit: speedup retry timeout test 2022-07-25 16:28:09 +09:30
niftynei 9adf5f17de tests: redirect output, so test log passes 2022-07-25 16:28:09 +09:30
niftynei bed00754ad test-flake: don't let `l1` send their unilateral tx
`l1` got their tx in before `l2`, but we're waiting for `l2`'s
commitment tx. (l2 sends an error message to l1 when we call dev-fail,
l1 broadcasts their commitment tx when they get the error)

Instead, we let l1 send their commitment tx, except we blackhole it.

```
        l2.rpc.dev_fail(l1.info['id'])
        l2.daemon.wait_for_log('Failing due to dev-fail command')
>       l2.wait_for_channel_onchain(l1.info['id'])
tests/test_connection.py:2275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-testing/pyln/testing/utils.py:1043: in wait_for_channel_onchain
    wait_for(lambda: txid in self.bitcoin.rpc.getrawmempool())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
success = <function LightningNode.wait_for_channel_onchain.<locals>.<lambda> at 0x7f0f5f7577a0>
timeout = 900

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}", success)
E               ValueError: ('Timeout while waiting for {}', <function LightningNode.wait_for_channel_onchain.<locals>.<lambda> at 0x7f0f5f7577a0>)
contrib/pyln-testing/pyln/testing/utils.py:93: ValueError
```
2022-07-23 11:37:35 -05:00
Rusty Russell 53c333a01b pytest: fix flake in test_zeroconf_forward
pay failed (non-DEVELOPER) because one node didn't see blocks in time:

```
        inv = l3.rpc.invoice(42 * 10**6, 'inv1', 'desc')['bolt11']
>       l1.rpc.pay(inv)
tests/test_opening.py:1394:
...
>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: pay, payload: {'bolt11': 'lnbcrt420u1p3dnwv7sp5qnquuwndgz35ywfg3p3dtu07ywmju78r8s0379eaxjxkv5d8jueqpp5kmaxgsye02dzmdlkqkedqvrh2evdl45sz7njrm5dff42dvp4v5qsdq8v3jhxccxqyjw5qcqp9rzjqgkjyd3q5dv6gllh77kygly9c3kfy0d9xwyjyxsq2nq3c83u5vw4n0wkf0y9gwfwhgqqqqqpqqqqqzsqqc9qyysgqad2x2zv0axa3hrfz7nurw4plvspvxlld9wtcg3xxjyxqlzm773a4fkyl09gs8uskj4m7len8r4pf4rh7v9snh3grrpawhk9qsd7vwmcqa9rgxg'}, error: {'code': 210, 'message': 'Ran out of routes to try after 176 attempts: see `paystatus`', 'attempts': [{'status': 'pending', 'partid': 1, 'amount_msat': 42000000msat}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 2, 'amount_msat': 9278783msat, 'parent_partid': 1}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 10, 'amount_msat': 9278783msat, 'parent_partid': 2}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 15, 'amount_msat': 9278783msat, 'parent_partid': 10}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 18, 'amount_msat': 9278783msat, 'parent_partid': 15}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 23, 'amount_msat': 9278783msat, 'parent_partid': 18}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 29, 'amount_msat': 9278783msat, 'parent_partid': 23}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 33, 'amount_msat': 9278783msat, 'parent_partid': 29}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 39, 'amount_msat': 9278783msat, 'parent_partid': 33}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 43, 'amount_msat': 9278783msat, 'parent_partid': 39}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 48, 'amount_msat': 9278783msat, 'parent_partid': 43}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 54, 'amount_msat': 9278783msat, 'parent_partid': 48}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 59, 'amount_msat': 4659837msat, 'parent_partid': 54}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 69, 'amount_msat': 4659837msat, 'parent_partid': 59}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 82, 'amount_msat': 4659837msat, 'parent_partid': 69}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 92, 'amount_msat': 4659837msat, 'parent_partid': 82}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 102, 'amount_msat': 4659837msat, 'parent_partid': 92}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 112, 'amount_msat': 4659837msat, 'parent_partid': 102}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 122, 'amount_msat': 4659837msat, 'parent_partid': 112}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 131, 'amount_msat': 4659837msat, 'parent_partid': 122}, {'status': 'failed', 'failreason': 'failed: 
WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 141, 'amount_msat': 4659837msat, 'parent_partid': 131}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 147, 'amount_msat': 4659837msat, 'parent_partid': 141}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 158, 'amount_msat': 4659837msat, 'parent_partid': 147}, {'status': 'pending', 'partid': 175, 'amount_msat': 2250620msat, 'parent_partid': 158}, {'status': 'pending', 'partid': 176, 'amount_msat': 2409217msat, 'parent_partid': 158}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 60, 'amount_msat': 4618946msat, 'parent_partid': 54}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 70, 'amount_msat': 4618946msat, 'parent_partid': 60}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 81, 'amount_msat': 4618946msat, 'parent_partid': 70}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 91, 'amount_msat': 4618946msat, 'parent_partid': 81}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 101, 'amount_msat': 4618946msat, 'parent_partid': 91}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 111, 'amount_msat': 4618946msat, 'parent_partid': 101}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 121, 'amount_msat': 4618946msat, 'parent_partid': 111}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 132, 'amount_msat': 4618946msat, 'parent_partid': 121}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 142, 'amount_msat': 4618946msat, 'parent_partid': 132}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 148, 'amount_msat': 4618946msat, 'parent_partid': 142}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 157, 'amount_msat': 4618946msat, 'parent_partid': 148}, {'status': 'pending', 'partid': 168, 'amount_msat': 2320055msat, 'parent_partid': 157}, {'status': 'pending', 'partid': 169, 'amount_msat': 2298891msat, 'parent_partid': 157}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 3, 'amount_msat': 9016551msat, 'parent_partid': 1}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 7, 'amount_msat': 9016551msat, 'parent_partid': 3}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 12, 'amount_msat': 9016551msat, 'parent_partid': 7}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 17, 'amount_msat': 9016551msat, 'parent_partid': 12}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 21, 'amount_msat': 9016551msat, 'parent_partid': 17}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 25, 'amount_msat': 9016551msat, 'parent_partid': 21}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 30, 'amount_msat': 9016551msat, 'parent_partid': 25}, {'status': 'failed', 'failreason': 'failed: 
WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 36, 'amount_msat': 9016551msat, 'parent_partid': 30}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 40, 'amount_msat': 9016551msat, 'parent_partid': 36}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 47, 'amount_msat': 9016551msat, 'parent_partid': 40}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 51, 'amount_msat': 9016551msat, 'parent_partid': 47}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 56, 'amount_msat': 4458645msat, 'parent_partid': 51}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 63, 'amount_msat': 4458645msat, 'parent_partid': 56}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 77, 'amount_msat': 4458645msat, 'parent_partid': 63}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 87, 'amount_msat': 4458645msat, 'parent_partid': 77}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 94, 'amount_msat': 4458645msat, 'parent_partid': 87}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 105, 'amount_msat': 4458645msat, 'parent_partid': 94}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 120, 'amount_msat': 4458645msat, 'parent_partid': 105}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 124, 'amount_msat': 4458645msat, 'parent_partid': 120}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 134, 'amount_msat': 4458645msat, 'parent_partid': 124}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 143, 'amount_msat': 4458645msat, 'parent_partid': 134}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 154, 'amount_msat': 4458645msat, 'parent_partid': 143}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 164, 'amount_msat': 2306527msat, 'parent_partid': 154}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 165, 'amount_msat': 2152118msat, 'parent_partid': 154}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 57, 'amount_msat': 4557906msat, 'parent_partid': 51}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 68, 'amount_msat': 4557906msat, 'parent_partid': 57}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 76, 'amount_msat': 4557906msat, 'parent_partid': 68}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 86, 'amount_msat': 4557906msat, 'parent_partid': 76}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 96, 'amount_msat': 4557906msat, 'parent_partid': 86}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 103, 'amount_msat': 4557906msat, 'parent_partid': 96}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 115, 
'amount_msat': 4557906msat, 'parent_partid': 103}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 125, 'amount_msat': 4557906msat, 'parent_partid': 115}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 136, 'amount_msat': 4557906msat, 'parent_partid': 125}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 146, 'amount_msat': 4557906msat, 'parent_partid': 136}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 161, 'amount_msat': 4557906msat, 'parent_partid': 146}, {'status': 'pending', 'partid': 173, 'amount_msat': 2208152msat, 'parent_partid': 161}, {'status': 'pending', 'partid': 174, 'amount_msat': 2349754msat, 'parent_partid': 161}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 4, 'amount_msat': 10655305msat, 'parent_partid': 1}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 9, 'amount_msat': 10655305msat, 'parent_partid': 4}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 13, 'amount_msat': 10655305msat, 'parent_partid': 9}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 19, 'amount_msat': 10655305msat, 'parent_partid': 13}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 24, 'amount_msat': 10655305msat, 'parent_partid': 19}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 28, 'amount_msat': 10655305msat, 'parent_partid': 24}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 34, 'amount_msat': 10655305msat, 'parent_partid': 28}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 38, 'amount_msat': 10655305msat, 'parent_partid': 34}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 44, 'amount_msat': 10655305msat, 'parent_partid': 38}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 49, 'amount_msat': 10655305msat, 'parent_partid': 44}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 53, 'amount_msat': 10655305msat, 'parent_partid': 49}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 61, 'amount_msat': 4872267msat, 'parent_partid': 53}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 71, 'amount_msat': 4872267msat, 'parent_partid': 61}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 79, 'amount_msat': 4872267msat, 'parent_partid': 71}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 89, 'amount_msat': 4872267msat, 'parent_partid': 79}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 100, 'amount_msat': 4872267msat, 'parent_partid': 89}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 109, 'amount_msat': 4872267msat, 'parent_partid': 100}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 114, 'amount_msat': 4872267msat, 
'parent_partid': 109}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 126, 'amount_msat': 4872267msat, 'parent_partid': 114}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 139, 'amount_msat': 4872267msat, 'parent_partid': 126}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 149, 'amount_msat': 4872267msat, 'parent_partid': 139}, {'status': 'failed', 'failreason': 'Cannot split payment any further without exceeding the maximum number of HTLCs allowed by our channels', 'partid': 162, 'amount_msat': 4872267msat, 'parent_partid': 149}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 62, 'amount_msat': 5783038msat, 'parent_partid': 53}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 72, 'amount_msat': 5783038msat, 'parent_partid': 62}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 80, 'amount_msat': 5783038msat, 'parent_partid': 72}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 90, 'amount_msat': 5783038msat, 'parent_partid': 80}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 99, 'amount_msat': 5783038msat, 'parent_partid': 90}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 110, 'amount_msat': 5783038msat, 'parent_partid': 99}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 117, 'amount_msat': 5783038msat, 'parent_partid': 110}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 128, 'amount_msat': 5783038msat, 'parent_partid': 117}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 138, 'amount_msat': 5783038msat, 'parent_partid': 128}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 150, 'amount_msat': 5783038msat, 'parent_partid': 138}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 156, 'amount_msat': 5783038msat, 'parent_partid': 150}, {'status': 'pending', 'partid': 170, 'amount_msat': 2902801msat, 'parent_partid': 156}, {'status': 'pending', 'partid': 171, 'amount_msat': 2880237msat, 'parent_partid': 156}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 5, 'amount_msat': 8942693msat, 'parent_partid': 1}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 8, 'amount_msat': 8942693msat, 'parent_partid': 5}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 16, 'amount_msat': 8942693msat, 'parent_partid': 8}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 20, 'amount_msat': 8942693msat, 'parent_partid': 16}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 27, 'amount_msat': 8942693msat, 'parent_partid': 20}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 32, 'amount_msat': 8942693msat, 'parent_partid': 27}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 35, 'amount_msat': 
8942693msat, 'parent_partid': 32}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 42, 'amount_msat': 8942693msat, 'parent_partid': 35}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 45, 'amount_msat': 8942693msat, 'parent_partid': 42}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 50, 'amount_msat': 8942693msat, 'parent_partid': 45}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 58, 'amount_msat': 8942693msat, 'parent_partid': 50}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 64, 'amount_msat': 4159394msat, 'parent_partid': 58}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 78, 'amount_msat': 4159394msat, 'parent_partid': 64}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 84, 'amount_msat': 4159394msat, 'parent_partid': 78}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 98, 'amount_msat': 4159394msat, 'parent_partid': 84}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 108, 'amount_msat': 4159394msat, 'parent_partid': 98}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 116, 'amount_msat': 4159394msat, 'parent_partid': 108}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 123, 'amount_msat': 4159394msat, 'parent_partid': 116}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 135, 'amount_msat': 4159394msat, 'parent_partid': 123}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 144, 'amount_msat': 4159394msat, 'parent_partid': 135}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 155, 'amount_msat': 4159394msat, 'parent_partid': 144}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 163, 'amount_msat': 4159394msat, 'parent_partid': 155}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 65, 'amount_msat': 4783299msat, 'parent_partid': 58}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 73, 'amount_msat': 4783299msat, 'parent_partid': 65}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 83, 'amount_msat': 4783299msat, 'parent_partid': 73}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 93, 'amount_msat': 4783299msat, 'parent_partid': 83}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 104, 'amount_msat': 4783299msat, 'parent_partid': 93}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 119, 'amount_msat': 4783299msat, 'parent_partid': 104}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 127, 'amount_msat': 4783299msat, 'parent_partid': 119}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 140, 'amount_msat': 4783299msat, 'parent_partid': 127}, {'status': 'failed', 
'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 151, 'amount_msat': 4783299msat, 'parent_partid': 140}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 160, 'amount_msat': 4783299msat, 'parent_partid': 151}, {'status': 'pending', 'partid': 172, 'amount_msat': 4783299msat, 'parent_partid': 160}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 6, 'amount_msat': 4106668msat, 'parent_partid': 1}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 11, 'amount_msat': 4106668msat, 'parent_partid': 6}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 14, 'amount_msat': 4106668msat, 'parent_partid': 11}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 22, 'amount_msat': 4106668msat, 'parent_partid': 14}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 26, 'amount_msat': 4106668msat, 'parent_partid': 22}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 31, 'amount_msat': 4106668msat, 'parent_partid': 26}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 37, 'amount_msat': 4106668msat, 'parent_partid': 31}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 41, 'amount_msat': 4106668msat, 'parent_partid': 37}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 46, 'amount_msat': 4106668msat, 'parent_partid': 41}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 52, 'amount_msat': 4106668msat, 'parent_partid': 46}, {'status': 'pending', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 55, 'amount_msat': 4106668msat, 'parent_partid': 52}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 66, 'amount_msat': 2165538msat, 'parent_partid': 55}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 75, 'amount_msat': 2165538msat, 'parent_partid': 66}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 85, 'amount_msat': 2165538msat, 'parent_partid': 75}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 95, 'amount_msat': 2165538msat, 'parent_partid': 85}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 107, 'amount_msat': 2165538msat, 'parent_partid': 95}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 113, 'amount_msat': 2165538msat, 'parent_partid': 107}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 130, 'amount_msat': 2165538msat, 'parent_partid': 113}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 137, 'amount_msat': 2165538msat, 'parent_partid': 130}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 152, 'amount_msat': 2165538msat, 'parent_partid': 137}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 159, 'amount_msat': 2165538msat, 'parent_partid': 152}, 
{'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 167, 'amount_msat': 2165538msat, 'parent_partid': 159}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 67, 'amount_msat': 1941130msat, 'parent_partid': 55}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 74, 'amount_msat': 1941130msat, 'parent_partid': 67}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 88, 'amount_msat': 1941130msat, 'parent_partid': 74}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 97, 'amount_msat': 1941130msat, 'parent_partid': 88}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 106, 'amount_msat': 1941130msat, 'parent_partid': 97}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 118, 'amount_msat': 1941130msat, 'parent_partid': 106}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 129, 'amount_msat': 1941130msat, 'parent_partid': 118}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 133, 'amount_msat': 1941130msat, 'parent_partid': 129}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 145, 'amount_msat': 1941130msat, 'parent_partid': 133}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 153, 'amount_msat': 1941130msat, 'parent_partid': 145}, {'status': 'failed', 'failreason': 'failed: WIRE_EXPIRY_TOO_SOON (reply from remote)', 'partid': 166, 'amount_msat': 1941130msat, 'parent_partid': 153}]}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-22 17:57:28 +02:00
Rusty Russell 8c38302ab8 hsmtool: implement checkhsm.
This gives a nice way to ensure your secret is the correct one.

Also, we don't need to suppress VALGRIND for this test, now the output
races are fixed.

Changelog-Added: `hsmtool`: new command `checkhsm` to check BIP39 passphrase against hsm_secret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-22 16:57:27 +02:00
Rusty Russell c10e385612 commando: add stress test, fix memleak report.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-21 15:37:05 -05:00
Rusty Russell 4cada557ba pytest: don't redirect stderr by default.
Some tests need to inspect it, but most don't, and I suspect I'm missing some
error messages due to this.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-21 15:37:05 -05:00
Rusty Russell aaf743e438 commando: fix crash when rune is completely bogus.
The error routine returns a string literal in this case, which we can't take().

Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-21 15:37:05 -05:00
Rusty Russell 08bef48d5c pytest: disable autoreconnect in test_rbf_reconnect_tx_construct
It reconnects by itself, so we never see ['connected'] become False.

```
2022-07-20T20:37:06.3808161Z     @unittest.skipIf(TEST_NETWORK != 'regtest', 'elementsd doesnt yet support PSBT features we need')
2022-07-20T20:37:06.3809031Z     @pytest.mark.developer("uses dev-disconnect")
2022-07-20T20:37:06.3809547Z     @pytest.mark.openchannel('v2')
2022-07-20T20:37:06.3810058Z     def test_rbf_reconnect_tx_construct(node_factory, bitcoind, chainparams):
...
2022-07-20T20:37:06.3864163Z         # Now we finish off the completes failure check
2022-07-20T20:37:06.3864604Z         for d in disconnects[-2:]:
2022-07-20T20:37:06.3865162Z             l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
2022-07-20T20:37:06.3865711Z             bump = l1.rpc.openchannel_bump(chan_id, chan_amount, initpsbt['psbt'])
2022-07-20T20:37:06.3866169Z             with pytest.raises(RpcError):
2022-07-20T20:37:06.3866674Z                 update = l1.rpc.openchannel_update(chan_id, bump['psbt'])
2022-07-20T20:37:06.3867367Z >           wait_for(lambda: l1.rpc.getpeer(l2.info['id'])['connected'] is False)
...
2022-07-20T20:37:06.5215961Z lightningd-1 2022-07-20T20:21:49.691Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: dev_disconnect: -WIRE_TX_COMPLETE (WIRE_TX_COMPLETE)
2022-07-20T20:37:06.5216482Z lightningd-1 2022-07-20T20:21:49.691Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-dualopend-chan#1: peer_out WIRE_TX_COMPLETE
2022-07-20T20:37:06.5216756Z lightningd-1 2022-07-20T20:21:49.691Z DEBUG   connectd: drain_peer
2022-07-20T20:37:06.5217064Z lightningd-1 2022-07-20T20:21:49.692Z DEBUG   connectd: drain_peer draining subd!
2022-07-20T20:37:06.5217549Z lightningd-1 2022-07-20T20:21:49.692Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-lightningd: peer_disconnect_done
2022-07-20T20:37:06.5218482Z lightningd-1 2022-07-20T20:21:49.692Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-lightningd: Reconnecting in 1 seconds
2022-07-20T20:37:06.5219110Z lightningd-1 2022-07-20T20:21:49.692Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-lightningd: Will try reconnect in 1 seconds
2022-07-20T20:37:06.5219427Z lightningd-1 2022-07-20T20:21:49.692Z DEBUG   gossipd: REPLY WIRE_GOSSIPD_GET_ADDRS_REPLY with 0 fds
2022-07-20T20:37:06.5219994Z lightningd-1 2022-07-20T20:21:49.696Z DEBUG   plugin-funder: Cleaning up inflights for peer id 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59
2022-07-20T20:37:06.5220305Z lightningd-1 2022-07-20T20:21:49.696Z DEBUG   connectd: maybe_free_peer freeing peer!
2022-07-20T20:37:06.5220743Z lightningd-2 2022-07-20T20:21:49.699Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: seeker: chosen as startup peer
2022-07-20T20:37:06.5221136Z lightningd-2 2022-07-20T20:21:49.699Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-hsmd: Got WIRE_HSMD_ECDH_REQ
2022-07-20T20:37:06.5221380Z lightningd-2 2022-07-20T20:21:49.699Z DEBUG   connectd: drain_peer
2022-07-20T20:37:06.5221805Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-hsmd: Got WIRE_HSMD_READY_CHANNEL
2022-07-20T20:37:06.5223761Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-dualopend-chan#1: signature 30440220338460c0e75c08c21f4f4f96806d81426aee48e06ccc54e87892b332f55fe49e0220527573286f801a0d23317978a9aabbbbcb95c2ceb614596c0ab50bb72b96158b01 on tx 020000000105f2aa5f346f46007b9f8b128e8c6e4d40d0477aefd81250ba99adc9349aabe300000000009db0e280024a01000000000000220020be7935a77ca9ab70a4b8b1906825637767fed3c00824aa90c988983587d684881e6301000000000022002047021684129f8aa1c0b5b97dc99607f5f0850b548813a5da346fda22511284759a3ed620 using key 02324266de8403b3ab157a09f1f784d587af61831c998c151bcc21bb74c2b2314b
2022-07-20T20:37:06.5224194Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: Connected out, starting crypto
2022-07-20T20:37:06.5224723Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: seeker: chosen as startup peer
2022-07-20T20:37:06.5225006Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   hsmd: Client: Received message 31 from client
2022-07-20T20:37:06.5225461Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-dualopend-chan#1: peer_out WIRE_COMMITMENT_SIGNED
2022-07-20T20:37:06.5225852Z lightningd-1 2022-07-20T20:21:49.700Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: Connect OUT
```
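
A sketch of the change implied by the subject line, assuming the `dev-no-reconnect` option these tests commonly use to suppress automatic reconnects:

```
def test_rbf_reconnect_tx_construct(node_factory, bitcoind, chainparams):
    # Sketch only: stop the nodes reconnecting on their own so the
    # ['connected'] is False condition can actually be observed.
    l1, l2 = node_factory.get_nodes(2, opts={'dev-no-reconnect': None})
    ...
```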
2022-07-21 14:25:25 -05:00
Christian Decker bac322ccdb pytest: Move generated grpc bindings to pyln-testing
These may eventually end up in pyln-client, as they allow talking to
the GRPC interface exposed by cln-grpc; however, for now they are used
for testing only. Once we have sufficient API and test coverage we can
move them and leave imports in their place.
2022-07-21 14:19:06 +09:30
Rusty Russell e96eb07ef4 lightningd: test that hsm_secret is as expected, at startup.
If you get the wrong hsm_secret, your node_id will change: peers
won't know who you are, bitcoind will reject your transaction
signatures, and other madness will ensue.

Catch this as soon as it happens, by storing our node_id in the db.
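
A minimal Python sketch of the idea, with a hypothetical `node_id_from_hsm_secret()` helper and a hypothetical db schema (the real check lives in lightningd's C startup path):

```
import sqlite3


def check_node_id(db_path, hsm_secret, node_id_from_hsm_secret):
    """Refuse to start if hsm_secret no longer matches the node_id we stored."""
    node_id = node_id_from_hsm_secret(hsm_secret)  # hypothetical derivation helper
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS vars (name TEXT PRIMARY KEY, val BLOB)")
    row = db.execute("SELECT val FROM vars WHERE name = 'node_id'").fetchone()
    if row is None:
        # First run: remember who we are.
        db.execute("INSERT INTO vars (name, val) VALUES ('node_id', ?)", (node_id,))
        db.commit()
    elif row[0] != node_id:
        raise SystemExit("hsm_secret does not match the stored node_id; refusing to start")
```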

Suggested-by: @cdecker, @fiatjaf
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Config: `lightningd` will refuse to start with the wrong node_id (i.e. hsm_secret changes).
2022-07-20 19:28:33 +09:30
Vincenzo Palazzo ba4e870a1c test: disable schema check of `checkmessage` with deprecated API
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
2022-07-19 17:55:31 +02:00
Vincenzo Palazzo 7ae616ef60 rpc: improve error format
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
2022-07-19 17:55:31 +02:00
Vincenzo Palazzo 1d671a2380 rpc: checkmessage return an error if pubkey is not found
Return a warning message when the pubkey is not specified and there is no matching node in the graph.

This helps people who use Core Lightning as a signer and nothing else.
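
A hedged example of the new behaviour via pyln-client (call shape only; the exact error text is not shown here):

```
from pyln.client import LightningRpc, RpcError

rpc = LightningRpc("/path/to/lightning-rpc")  # socket path is illustrative
zbase = "d75qtmgijmy..."                      # placeholder zbase32 signature

try:
    # Without `pubkey`, the signer must be recoverable from the gossip graph;
    # on a signer-only node with no graph this now errors instead of guessing.
    rpc.checkmessage(message="hello world", zbase=zbase)
except RpcError as err:
    print("checkmessage failed:", err.error)
```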

Changelog-Deprecated: rpc: `checkmessage` now returns an error when the pubkey is not specified and it cannot be found in the network graph.
2022-07-19 17:55:31 +02:00
Rusty Russell acc9dc4852 pytest: fix flake in test_channel_lease_post_expiry
The channel closes because the payment fulfilment is not completely done
when we generate 6 blocks, so the HTLC hits its CLTV deadline.

```
        # send some payments, mine a block or two
        inv = l2.rpc.invoice(10**4, '1', 'no_1')
        l1.rpc.pay(inv['bolt11'])
    
        # l2 attempts to close a channel that it leased, should fail
        with pytest.raises(RpcError, match=r'Peer leased this channel from us'):
            l2.rpc.close(l1.get_channel_scid(l2))
    
        bitcoind.generate_block(6)
        sync_blockheight(bitcoind, [l1, l2])
        # make sure we're at the right place for the csv lock
>       l2.daemon.wait_for_log('Blockheight: SENT_ADD_ACK_COMMIT->RCVD_ADD_ACK_REVOCATION LOCAL now 115')

tests/test_closing.py:823: 
...
lightningd-2 2022-07-17T13:15:34.242Z DEBUG   lightningd: Adding block 115: 39d95061935e9fc42b04c86ae60d0cf157765aff4c040f3a8d0b7888db19e015
lightningd-2 2022-07-17T13:15:34.244Z UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Peer permanent failure in CHANNELD_NORMAL: Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv 115 hit deadline
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 099d149104 pytest: work around dualopend issue.
Dualopend is not listening to the peer fd when it hangs up, so it doesn't
notice it's gone.  We don't clean up the channel until it's done (usually
a good thing: it could be about to lock it in), but this harms us
here.

Fix the test failure and add a comment.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 02e169fd27 lightningd: drive all reconnections out of disconnections.
The only places which should call try_reconnect now are the "connect"
command, and the disconnect path when it decides there's still an
active channel.

This introduces one subtlety: if we disconnect when there's no active
channel, but then the subd makes one, we have to catch that case!

This temporarily reverts "slow" reconnections to fast ones: see next
patch.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell a3c4908f4a lightningd: don't explicitly tell connectd to disconnect, have it do it on sending error/warning.
Connectd already does this when we *receive* an error or warning, but
now we do it on send too.  This causes a slight behavior change: we don't
disconnect when we close a channel, for example (our behaviour here
has been inconsistent across versions, depending on the code).

When connectd is told to disconnect, it now does so immediately, and
doesn't wait for subds to drain etc.  That simplifies the manual
disconnect case, which now cleans up as it would from any other
disconnection when connectd says it's disconnected.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 2962b93199 pytest: don't assume disconnect finished atomically, and suppress interfering redirects.
In various places, we assumed that when `connected` is false,
everything is finished.  This is not true: we should wait for the
state we expect.

In addition, various places allows reconnections, which interfered
with the logic; suppress them.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 719d1384d1 connectd: give connections a chance to drain when lightningd says to disconnect, or peer disconnects.
We want to avoid lost messages in the common cases.

This generalizes our drain code: give each subd 5 seconds to close
itself, while continuing to allow it to send us traffic (if the peer
is still connected) and continuing to send it traffic.

We continue to send traffic *out* to the peer (if it's still
connected), until all subds are gone.  We still have a 5 second timer
to close the connection to peer.

We don't do this "drain period" on reconnects: we kill
immediately.
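
A rough Python sketch of the drain idea (hypothetical `subd` objects with `.closed` and `.force_close()`; the real logic lives in connectd's io loop, and is simplified here to one shared deadline):

```
import time

DRAIN_SECONDS = 5  # drain grace period from this change


def drain_then_close(subds, close_peer_conn):
    """Hypothetical sketch: let subdaemons finish before dropping the peer."""
    deadline = time.monotonic() + DRAIN_SECONDS
    # Keep shuttling traffic while any subd is alive and the deadline hasn't passed
    # (in the real daemon this is the io loop, not a polling sleep).
    while any(not s.closed for s in subds) and time.monotonic() < deadline:
        time.sleep(0.1)
    for s in subds:
        if not s.closed:
            s.force_close()
    # Only once all subds are gone (or timed out) do we close the peer side.
    close_peer_conn()
```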

We fix up one test which was looking for the "disconnect" message
explicitly.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 2daf461762 pytest: enable race test.
Now this passes.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 6a9a091234 pytest: add another connection stress test, using multiple channels (bug #5254)
This one actually triggers an assert() on my machine, so though it
wasn't what I was looking for, let's include it:

```
lightning_connectd: connectd/connectd.c:1905: peer_conn_closed: Assertion `tal_count(peer->subds) == 0' failed.
lightning_connectd: FATAL SIGNAL 6 (version v0.11.0.1-15-gc812595)
0x55b3e1e21302 send_backtrace
	common/daemon.c:33
0x55b3e1e213ac crashdump
	common/daemon.c:46
0x7f44292ff08f ???
	/build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0x7f44292ff00b __GI_raise
	../sysdeps/unix/sysv/linux/raise.c:51
0x7f44292de858 __GI_abort
	/build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79
0x7f44292de728 __assert_fail_base
	/build/glibc-SzIz7B/glibc-2.31/assert/assert.c:92
0x7f44292effd5 __GI___assert_fail
	/build/glibc-SzIz7B/glibc-2.31/assert/assert.c:101
0x55b3e1e125db peer_conn_closed
	connectd/connectd.c:1905
0x55b3e1e17b4f destroy_subd
	connectd/multiplex.c:1112
0x55b3e1e7fdf4 notify
	ccan/ccan/tal/tal.c:240
0x55b3e1e8030b del_tree
	ccan/ccan/tal/tal.c:402
0x55b3e1e8035d del_tree
	ccan/ccan/tal/tal.c:412
0x55b3e1e806a7 tal_free
	ccan/ccan/tal/tal.c:486
0x55b3e1e6ef59 io_close
	ccan/ccan/io/io.c:450
0x55b3e1e17429 write_to_subd
	connectd/multiplex.c:957
0x55b3e1e6e1a3 next_plan
	ccan/ccan/io/io.c:59
0x55b3e1e6eebc io_do_always
	ccan/ccan/io/io.c:435
0x55b3e1e70baa handle_always
	ccan/ccan/io/poll.c:304
0x55b3e1e70ea1 io_loop
	ccan/ccan/io/poll.c:385
0x55b3e1e12dd5 main
	connectd/connectd.c:2159
0x7f44292e0082 __libc_start_main
	../csu/libc-start.c:308
0x55b3e1e0885d ???
	???:0
0xffffffffffffffff ???
	???:0
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Rusty Russell 8e1d5c19d6 pytest: test to reproduce "channeld: sent ERROR bad reestablish revocation_number: 0 vs 3"
It's caused by a reconnection race: we hold the new incoming connection while we
ask lightningd to kill the old connection.  But under some circumstances we leave
the new incoming hanging (with, in this case, old reestablish messages unread!)
and another connection comes in.

Then, later, we service the long-gone "incoming" connection; channeld
reads the ancient reestablish message and gets upset.

This test used to hang, but now we've fixed reconnection races it is fine.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-18 20:50:04 -05:00
Michael Schmoock 65433de05f options: set DNS port to network default if not specified
- set port for a DNS announcement without port to network default
- remove x-fail

Changelog-Fixed: Port of a DNS announcement could be 0 if unspecified; it now defaults to the network's standard port.
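
The fix boils down to a default-port fallback; a minimal Python sketch (port numbers here are assumptions for illustration, the real defaults come from chainparams):

```
# Port values here are assumptions for illustration.
ASSUMED_DEFAULT_PORTS = {"bitcoin": 9735, "testnet": 19735, "regtest": 19846}


def announced_port(port, network):
    """A DNS announcement with port 0 (unspecified) falls back to the network default."""
    return port if port != 0 else ASSUMED_DEFAULT_PORTS[network]


assert announced_port(0, "regtest") == 19846
assert announced_port(9736, "bitcoin") == 9736
```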
2022-07-17 23:26:07 +02:00
Michael Schmoock 6d4285d7c4 pytest: add xfail test to show DNS w/o port issue
This adds an X-Fail testcase that demonstrates that currently
the port of a DNS announcement is not set to the corresponding
network port (in this case regtest), but is instead set to 0.

Changelog-None
2022-07-17 23:26:07 +02:00
Rusty Russell 8c48eda8c7 decode: support decoding runes.
This is a bit weird since it lives in the offers plugin, but it works
well.  This should make runes much more approachable for people!
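
A hedged usage sketch via pyln-client (the `string` parameter name and the output fields are assumptions; the rune value is a placeholder):

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")               # socket path is illustrative
rune = "tU-RLjMiDpY2U0o3W1oFowar36RFGpWloPbW9-RuZdo9MA=="  # placeholder rune string

# `decode` (which lives in the offers plugin) now recognises runes as well as
# bolt11/bolt12 strings.
decoded = rpc.call("decode", {"string": rune})
print(decoded.get("type"), decoded.get("valid"))
```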

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 468dff1723 commando: add rate for maximum successful rune use per minute.
I'm assuming that nobody wants a rate slower than 1 per minute; we can
introduce 'drate' if we want a per-day kind of limit.
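
A hedged sketch (the `rate=N` restriction string is my reading of this change; treat the exact name and restrictions format as assumptions):

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # socket path is illustrative

# Mint a rune that allows at most 60 successful uses per minute.
result = rpc.call("commando-rune", {"restrictions": ["rate=60"]})
print(result.get("rune"))
```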

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 4ab09f7cfb commando: add support for parameters by array, parameter count.
Awkward to filter, but they're really practical for many commands.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 8688daf937 commando: require runes for operation.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell ae4856df70 commando: don't look at messages *at all* unless they've created a rune.
This means we can leave commando on by default, without an explicit config flag.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 419cb60b1b commando: add commando-rune command.
Can both mint new runes, and add one or more restrictions to existing ones.
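
A hedged sketch of both modes via pyln-client (the restriction syntax shown is my reading of the rune format; treat it as an assumption):

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # socket path is illustrative

# Mint a fresh, unrestricted rune.
master = rpc.call("commando-rune").get("rune")

# Derive a narrower rune from it: restrict to read-only list*/get* commands.
readonly = rpc.call("commando-rune", {
    "rune": master,
    "restrictions": ["method^list|method^get"],
}).get("rune")
print(readonly)
```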

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell b49703e279 commando: correctly reflect error data field.
Some JSON errors include "data", and we should reflect that.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 49df89556b commando: support commands larger than 64k.
This is needed for invoice, which can be asked to commit to giant descriptions
(though that's antisocial!).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Rusty Russell 3fe246c2e7 plugins/commando: basic commando plugin (no runes yet).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `commando` a new builtin plugin to send/recv peer commands over the lightning network, using runes.
2022-07-17 08:51:02 +09:30
Rusty Russell 3e672b784d Makefile: use a library archive for CCAN
The linker discards whole files in an archive if it doesn't need them,
so this saves a bit of space (and time).  It also allows us to add more niche
things to CCAN (e.g. runes support!) without bloating all the binaries.

We also had many places which depended on $(CCAN_FILES), but that was
already a dependency of $(ALL_PROGRAMS) and $(ALL_TEST_PROGRAMS).

Before:

```
$ size lightningd/lightning*d
   text	   data	    bss	    dec	    hex	filename
2247683	   8696	  39008	2295387	 23065b	lightningd/lightning_channeld
2086607	   7432	  38880	2132919	 208bb7	lightningd/lightning_closingd
2227916	   8056	  39200	2275172	 22b764	lightningd/lightning_connectd
3369236	 119288	  39240	3527764	 35d454	lightningd/lightningd
2183551	   8352	  38880	2230783	 2209ff	lightningd/lightning_dualopend
2196389	   8024	  39136	2243549	 223bdd	lightningd/lightning_gossipd
2086216	   7488	  39264	2132968	 208be8	lightningd/lightning_hsmd
2134396	   8136	  39424	2181956	 214b44	lightningd/lightning_onchaind
2133391	   8352	  38880	2180623	 21460f	lightningd/lightning_openingd
1512168	   2136	  34384	1548688	 17a190	lightningd/lightning_websocketd
```

After:
```
text	   data	    bss	    dec	    hex	filename
2192065	   8488	  38912	2239465	 222be9	lightningd/lightning_channeld
2030957	   7224	  38816	2076997	 1fb145	lightningd/lightning_closingd
2179571	   7968	  39104	2226643	 21f9d3	lightningd/lightning_connectd
3354296	 119288	  39208	3512792	 3599d8	lightningd/lightningd
2127933	   8144	  38816	2174893	 212fad	lightningd/lightning_dualopend
2141699	   7856	  39072	2188627	 216553	lightningd/lightning_gossipd
2024482	   7288	   5240	2037010	 1f1512	lightningd/lightning_hsmd
2072074	   7920	   5400	2085394	 1fd212	lightningd/lightning_onchaind
2077773	   8144	  38816	2124733	 206bbd	lightningd/lightning_openingd
1408958	   1752	    344	1411054	 1587ee	lightningd/lightning_websocketd
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-17 08:51:02 +09:30
Simon Vrouwe ad3cbed7c2 plugin: autoclean fix double free when re-enabling, remove xfail mark from test_
Fixes a crash when enabling after a disable with cycle_seconds=0.
2022-07-16 14:19:05 +09:30
Simon Vrouwe 94f16f14b7 test: test_invoices add a stress test 2022-07-16 14:19:05 +09:30
Rusty Russell 401f1debc5 common: clean up json routine locations.
We have them split over common/param.c, common/json.c,
common/json_helpers.c, common/json_tok.c and common/json_stream.c.

Change that to:
* common/json_parse (all the json_to_xxx routines)
* common/json_parse_simple (the simplest json parsing routines, for cli too)
* common/json_stream (all the json_add_xxx routines)
* common/json_param (all the param and param_xxx routines)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-15 12:24:00 -05:00
Rusty Russell ec76ba3895 lightningd/connect_control: remove param_tok from connect.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-15 12:24:00 -05:00
Rusty Russell 9685c1adaf lightningd: remove getsharedsecret.
This was introduced to allow creating a shared secret, but it's better to use
makesecret which creates unique secrets.  getsharedsecret being a generic ECDH
function allows the caller to initiate conversations as if it was us; this
is generally OK, since we don't allow untrusted API access, but the commando
plugin had to blacklist this for read-only runes explicitly.

Since @ZmnSCPxj never ended up using this after introducing it, simply
remove it.
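
For example, the replacement is `makesecret` (shown with the `hex` parameter from the rename below); a hedged sketch:

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # socket path is illustrative

# Derive an application-specific secret from a label instead of using ECDH directly.
# ("6170702d6c6162656c" is just "app-label" in hex.)
secret = rpc.call("makesecret", {"hex": "6170702d6c6162656c"})
print(secret.get("secret"))
```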

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Removed: JSONRPC: `getsharedsecret` API: use `makesecret`
2022-07-15 22:17:58 +09:30
Rusty Russell c34a0a22ad makesecret: change info_hex arg to simply "hex" to match datastore command.
And fix schema: it wasn't tested as there was no test-by-parameter-name.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-15 22:17:58 +09:30
niftynei 769efe8d54 tests: massively speed up our wait for enormous feerates
We do smoothing, so waiting for this to hit the target (50k+) was taking
longer than my TIMEOUT=15; here we increase the speed at which it reaches
exit velocity, so to speak.
2022-07-15 22:16:27 +09:30
niftynei f2e7e9d919 coin-moves: only log htlc_timeout pair for penalty txs
We clean up our output tracking for timeout txs when the peer's
htlc_timeout self-expiry is hit; we'd also log its spend if we happen to
see it get spent.

This is a bit of a race as they can't spend it until the locktime is
available. Hence the flakiness in tests that expected the `htlc_timeout`
to *not* be spent.

Instead, we only log an external's `htlc_timeout` spend in the case
where we also immediately register another output to track for it
(which only happens when said htlc is stealable).

Fixes #5405
In-Collab-With: @ddustin
2022-07-15 22:16:27 +09:30
niftynei d284b98911 notify: `channel_state_changed` now receives notice when channel opens
Previously we wouldn't notify when a channel moved into state
"CHANNELD_AWAITING_LOCKIN", as this is the original state (so there's
no movement between states). This meant it was impossible to track when a
channel's commitment txs had been exchanged and we were waiting for
onchain confirmation.

It's useful to be notified of this initialization, though, so that the
`channel_state_changed` notification can track the entire lifecycle of
a channel, from inception to close, in one place.

Note that for v2 "dual-funded" channels, we already notify at the same place, at
"DUALOPEND_AWAITING_LOCKIN" (the initial state for a dualopend channel
is "DUALOPEND_OPEN_INIT" -- this is the only state we don't get notified
at now...)
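
A minimal pyln plugin sketch that now sees the whole lifecycle (the payload field names are my assumption of the existing notification format):

```
#!/usr/bin/env python3
from pyln.client import Plugin

plugin = Plugin()


@plugin.subscribe("channel_state_changed")
def on_state_change(plugin, channel_state_changed, **kwargs):
    # With this change, v1 opens also fire for the initial
    # unknown -> CHANNELD_AWAITING_LOCKIN transition.
    plugin.log("channel {} moved {} -> {} (cause: {})".format(
        channel_state_changed.get("short_channel_id"),
        channel_state_changed.get("old_state"),
        channel_state_changed.get("new_state"),
        channel_state_changed.get("cause")))


plugin.run()
```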

Changelog-Added: Plugins: `channel_state_changed` now triggers for a v1 channel's initial "CHANNELD_AWAITING_LOCKIN" state transition (from prior state "unknown")
2022-07-14 12:42:48 -05:00
Rusty Russell bdefbabbef lightningd: re-transmit all closing transactions on startup.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-14 12:40:57 -05:00
Rusty Russell d544ae075b pytest: test (failing) that we rexmit closing transactions on startup.
Usually bitcoind will have them, but it's best to re-xmit.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-14 12:40:57 -05:00
adi2011 7b160b203a tests: Add tests for the RPCs
Changelog-Added: Static channel backup, to enable smooth fund recovery in case of complete data loss
2022-07-14 12:24:48 -05:00
niftynei 95ec897ac0 dual-fund: Fail if you try to buy a liquidity ad w/o dualfunding on
Fixes #5271

In-Collaboration-With: Base58 'n Coding Seminar Participants

Changelog-Changed: `fundchannel` now errors if you try to buy a liquidity ad but don't have `experimental-dual-fund` enabled
2022-07-13 12:29:50 -05:00
Rusty Russell 1217f479df sendpay: allow route to contain both amount_msat and (deprecated) msatoshi.
Since it's a deprecation, we simply ignore one of them, rather than
properly checking that they match.
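
For illustration, a route element carrying both fields (values are made up) is now accepted:

```
route = [{
    "id": "022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59",
    "channel": "103x1x0",
    "direction": 1,
    "delay": 9,
    "amount_msat": "10000msat",  # the non-deprecated field
    "msatoshi": 10000,           # deprecated duplicate, now simply ignored
}]
# l1.rpc.sendpay(route, payment_hash)  # shape only; needs a live node and payment_hash
```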

Fixes: #5386
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-12 12:42:05 -05:00
Rusty Russell 1d78911b29 pytest: test that we allow both msatoshi and amount_msat in route for sendpay.
We deprecated msatoshi, but getroute() still returns both (unless
--allow-deprecated-apis=False), so we need to accept both.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-12 12:42:05 -05:00
Rusty Russell f98df63b75 Revert "pytest: fix test_gossip_no_backtalk flake."
This reverts commit 6cc4858847.

No longer needed now we don't flush gossip rcvd_filter as aggressively.
2022-07-12 21:41:19 +09:30
Rusty Russell 06e1e119aa pytest: fix test_gossip_no_empty_announcements flake.
This is a side-effect of fixing aging: sometimes, we age our
rcvd_filter cache too fast, and thus re-xmit.  This breaks
our test, since it used dev-disconnect on the channel_announce,
but that closes the connection to l3, not l1!

```
>       assert l1.rpc.listchannels()['channels'] == []
E       AssertionError: assert [{'active': T...ags': 1, ...}] == []
E         Left contains 2 more items, first extra item: {'active': True, 'amount_msat': 100000000msat, 'base_fee_millisatoshi': 1, 'channel_flags': 0, ...}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5403
2022-07-12 21:41:19 +09:30
Rusty Russell 312751075c lightningd: save outgoing information for more forwards.
There's one case where we can present more information, so do that, and
fix the documentation.
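
A hedged read-side example with pyln-client (field names as I understand the listforwards output; `out_channel` may still be absent for very old entries):

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # socket path is illustrative

for fwd in rpc.listforwards()["forwards"]:
    # `out_channel` is now present even for forwards we never managed to send out.
    print(fwd.get("in_channel"), fwd.get("out_channel"), fwd.get("status"))
```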

Changelog-Added: JSON-RPC: `listforwards` now shows `out_channel` in more cases: even if it couldn't actually send to it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5329
2022-07-12 06:38:11 +09:30
Rusty Russell 980f3bda1f pytest: test that listforwards gives as much outgoing information as possible.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-12 06:38:11 +09:30
Simon Vrouwe 7115611249 pytest: test plugin does not register same option "name" 2022-07-10 21:09:41 -05:00
Simon Vrouwe 981fd2326a lightningd: convert plugin->cmd to absolute path, fail plugin early when non-existent
Otherwise different relative paths (e.g. /dir/plugin and /dir/../dir/plugin) to same plugin
executable would not be recognized as the same plugin
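
A Python sketch of the normalization idea (the real code is in lightningd's plugin loader):

```
import os


def canonical_plugin_path(cmd):
    """Resolve a plugin command to one canonical absolute path, or fail early."""
    path = os.path.realpath(cmd)  # /dir/plugin and /dir/../dir/plugin collapse to one key
    if not os.path.exists(path):
        raise FileNotFoundError(f"plugin {cmd} does not exist")
    return path
```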
2022-07-10 21:09:41 -05:00
Rusty Russell 9d1b5a654e pytest: fix flake in test_node_reannounce
Fixes 25699994e5 (pytest: fix flake in test_node_reannounce).

Converting list -> set -> list does not give a deterministic order, so we can
end up with the two lists differing only in order:

```
 May send its own announcement *twice*, since it always spams us.
        msgs2 = list(set(msgs2))
>       assert msgs == msgs2
E       AssertionError: assert ['01012ff5580...000000000000'] == ['01014973d81...000000000000']
E         At index 0 diff: '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000' != '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000'
E         Full diff:
E           [
E         +  '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E            '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000',
E         -  '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E           ]
```
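
An order-independent comparison avoids the flake; a minimal Python illustration (not the literal test code):

```
msgs = ["0101aa", "0101bb"]
msgs2 = ["0101bb", "0101aa", "0101aa"]  # a node may send its own announcement twice

# Compare as sets (or sorted, de-duplicated lists) rather than ordered lists,
# since list(set(...)) does not preserve a deterministic order.
assert set(msgs) == set(msgs2)
```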

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-09 12:27:05 +09:30
Rusty Russell 7dd8e27862 connectd: don't insist on ping replies when other traffic is flowing.
Got complaints about us hanging up on some nodes because they don't respond
to pings in a timely manner (e.g. ACINQ?), but that turned out to be something
else.

Nonetheless, we've had reports in the past of LND badly prioritizing gossip
traffic, and thus important messages can get queued behind gossip dumps!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: connectd: give busy peers more time to respond to pings.
2022-07-09 12:27:05 +09:30
Rusty Russell 6cc4858847 pytest: fix test_gossip_no_backtalk flake.
Because we expire the cache on every flush, in DEVELOPER mode that can happen in
just over a second.  If gossipd takes a while to process the gossip,
this can mean we actually forget we received it from the peer.

Easiest fix is to run this test in non-DEVELOPER mode.

```
        # With DEVELOPER, this is long enough for gossip flush.
        time.sleep(2)
>       assert not l3.daemon.is_in_log(r'\[OUT\] 0100')
E       AssertionError: assert not '2022-06-30T06:00:31.031Z 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: [OUT] 0100294834e94aa2407ace65373c84c2a8056d67a6778c22bd43f03a9000d8a01f075c3996ab6c88951bd9805adbfde78292022d7514ffcec37d5e466ef5ebd235c87ffa8a8567904502cd65ae913dbb8014509f442a6e37b00fd59fa3a06811a4965f45df1612d3a32b2566001430cfec4346cc021bd0a1bf6a87417dbc29e0715f16c4f9dc5a01bff07368ba8f54cf9813c3fc8f9f8b8c8005da2a18021f322a9a168af8f82d9deba49ced51bb26a9abc001bbad314a15baccc87c685e9302533c2e86a7b9dfe90bd4cb6be5c2da375be8c4a535a63cacff0544957a34ca865fdb61f9198d19dfd990a0fcbf8de39fa2c0fc54435c16e74df2b49fe3b6e905d599000006226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f0000670000010001022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d590266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c0351802e3bd38009866c9da8ec4aa99cc4ea9c6c0dd46df15c61ef0ce1f271291714e5702324266de8403b3ab157a09f1f784d587af61831c998c151bcc21bb74c2b2314b'
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-09 09:59:52 +09:30
Rusty Russell 6aec374674 lightningd: make log-prefix actually prepend all log messages as expected.
It actually only sets the prefix for the lightningd core log messages;
the other logs have their own prefix.

Make it a real, process-wide prefix which actually goes in front of the timestamp.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: options: `log-prefix` now correctly prefixes *all* log messages.
2022-07-09 09:59:52 +09:30
Rusty Russell ae49545875 pytest: fix flake in test_option_upfront_shutdown_script
Looking at the CI logs, it seems like it took over 5 seconds, so
the unilateral close occurred instead of the expected rejection
of the WIRE_SHUTDOWN reply.  Make it bulletproof.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-09 09:59:52 +09:30
Rusty Russell f6f1844e15 options: let log-level subsystem filter also cover nodeid.
That's useful for "tell me everything about this node" debugging.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5348
Changelog-Added: lightningd: `log-level=debug:<partial-nodeid>` supported to get debug-level logs for everything about a peer.
2022-07-09 09:59:52 +09:30