If we expect further events for an onchain output (because we can steal
it away from the 'external'/rightful owner), we mark it as stealable.
This prevents us from marking a channel as 'onchain-resolved' before
all events that we're interested in have actually hit the chain.
A case where this matters:
Peer publishes a (cheating) unilateral close and a timeout htlc (which
we can steal).
We then steal the timeout htlc.
Without the stealable flag, we'd have marked the channel as resolved when
the peer published the timeout htlc, which is incorrect: we're still
waiting for the resolution of that timeout htlc (because we *can* steal it).
We were failing to mark channels as resolved because we weren't using
later events on 'external' outputs (which originated from this account)
as signals to run the channel resolution check.
This fixes that, and adds a test.
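To make the check concrete, here's a minimal sketch (hypothetical field names, not the bookkeeper's actual code) of a resolution check that refuses to call an account resolved while a stealable output is still unspent:

```python
def account_resolved(outputs):
    """Hypothetical sketch: 'owner', 'stealable' and 'spent' are assumed fields."""
    for out in outputs:
        if out['owner'] == 'external' and not out.get('stealable'):
            continue  # not ours and we can't steal it: don't wait for it
        if not out.get('spent'):
            # Still expecting an onchain event for this output, e.g. the
            # peer's timeout htlc that we can claim via a penalty.
            return False
    return True
```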
Adds schema definitions and manpages for the bkpr- commands; also renames
the commands to all start with 'bkpr-', so they're easier to identify
and to make runes about.
Anchor outputs are ignored by the clightning wallet, but we keep track
of them in the bookkeeper. This causes problems when we do the balance
checks on restart with the balance_snapshot -- it results in us printing
out a journal_entry to 'get rid of' the anchors that the clightning node
doesn't know about.
Instead, we mark some outputs as 'ignored' and exclude these from our
account balance sums when we're comparing to the clightning snapshot.
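As a rough sketch of that comparison (field names here are assumptions, not the actual snapshot format), the balance sum simply skips events marked ignored:

```python
def account_balance_msat(events):
    """Sum credits minus debits, skipping 'ignored' outputs such as anchors
    that the clightning wallet itself never tracks."""
    total = 0
    for ev in events:
        if ev.get('ignored'):
            continue
        total += ev.get('credit_msat', 0) - ev.get('debit_msat', 0)
    return total
```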
For income events, break out the amount paid in routing fees vs the
total amount of the *invoice* that is paid.
Also print out these fees, when available, in the listaccountevents output.
This is a rare case where we RBF the output of a penalty until it no
longer has an output value we can reclaim. We ignore the txid for these
events when closing a channel.
We issue events for external deposits (withdrawals) before the tx is
confirmed in a block. To avoid double counting these, we don't count
them as confirmed/included until after they're actually confirmed. We do
this by keeping the blockheight at zero until the withdrawal for the
input that funds them comes through.
Note that since we don't have any way to tell when RBF'd withdrawals are
no longer eligible for block inclusion, we don't really have a good
heuristic for trimming them. That's fine though: they *will* still show
up in account events.
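A hedged sketch of the confirmation rule described above (field names assumed): an event whose blockheight is still zero is treated as pending and excluded from confirmed totals:

```python
def confirmed_events(events):
    # blockheight == 0 means the spend that pays this external deposit
    # hasn't been seen in a block yet; only count it once confirmed.
    return [ev for ev in events if ev.get('blockheight', 0) > 0]
```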
Onchain fees are weird at channel close because:
- you may be missing a trimmed htlc (which went to fees)
- the balance from close may have been rounded (msats can't land on chain)
- the close might have been a past state, and you've actually ended up
  with more money onchain than you had in the channel. wut
This commit accounts for all of this appropriately, with some tests.
channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).
In the penalty case, channel_close.debit becomes channel_close.debit +
penalty_adj.debit, i.e.

    channel_close.debit + penalty_adj.debit =
        onchain_fee.credit
        + sum(chain_event.credit [wallet/channel_acct])
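As a rough illustration only (the tag and field names below are assumptions, not the bookkeeper's actual schema), that identity can be checked from a list of events for the closing txid:

```python
def check_close_identity(events, close_txid):
    """Illustrative sketch: tag/field names are assumed, not the real schema."""
    ev = [e for e in events if e.get('txid') == close_txid]
    debit = sum(e['debit_msat'] for e in ev if e['tag'] == 'channel_close')
    # In the penalty case, the penalty adjustment adds to the debit side.
    debit += sum(e['debit_msat'] for e in ev if e['tag'] == 'penalty_adj')
    fee_credit = sum(e['credit_msat'] for e in ev if e['tag'] == 'onchain_fee')
    chain_credit = sum(e['credit_msat'] for e in ev
                       if e.get('type') == 'chain'
                       and e.get('account') in ('wallet', 'channel_acct'))
    assert debit == fee_credit + chain_credit
```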
We might only have seen one side of the channel, as shown below. Wait
for both:
```
_____________________________ test_wumbo_channels ______________________________
[gw2] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python3
node_factory = <pyln.testing.utils.NodeFactory object at 0x7f5d51743b10>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f5d51699d10>
@pytest.mark.openchannel('v1')
@pytest.mark.openchannel('v2')
def test_wumbo_channels(node_factory, bitcoind):
l1, l2, l3 = node_factory.get_nodes(3,
opts=[{'large-channels': None},
{'large-channels': None},
{}])
conn = l1.rpc.connect(l2.info['id'], 'localhost', port=l2.port)
expected_features = expected_peer_features(wumbo_channels=True)
if l1.config('experimental-dual-fund'):
expected_features = expected_peer_features(wumbo_channels=True,
extra=[21, 29])
assert conn['features'] == expected_features
assert only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['features'] == expected_features
# Now, can we open a giant channel?
l1.fundwallet(1 << 26)
l1.rpc.fundchannel(l2.info['id'], 1 << 24)
# Get that mined, and announced.
bitcoind.generate_block(6, wait_for_mempool=1)
# Connect l3, get gossip.
l3.rpc.connect(l1.info['id'], 'localhost', port=l1.port)
wait_for(lambda: len(l3.rpc.listnodes(l1.info['id'])['nodes']) == 1)
wait_for(lambda: 'features' in only_one(l3.rpc.listnodes(l1.info['id'])['nodes']))
# Make sure channel capacity is what we expected.
> assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
== [Millisatoshi(str(1 << 24) + "sat")] * 2)
E assert [16777216000msat] == [16777216000m...777216000msat]
E Right contains one more item: 16777216000msat
E Full diff:
E - [16777216000msat, 16777216000msat]
E + [16777216000msat]
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a race between disconnecting and returning a successful RPC call
for openchannel_signed; if we disconnect quickly, we get an RPC error
back. If we're too slow, it returns without an error.
This needs to be cleaned up on a whole, work that I'm planning to get
into as part of a funder re-write. For now, let's just let the test
continue whether this call succeeds or fails.
We reset counters every minute, so ratelimit tests can flake since we
might hit that boundary.
Instead, wait for the reset then test explicitly, assuming that takes
less than 60 seconds.
```
for rune, cmd, params in failures:
print("{} {}".format(cmd, params))
with pytest.raises(RpcError, match='Not authorized:') as exc_info:
l2.rpc.call(method='commando',
payload={'peer_id': l1.info['id'],
'rune': rune['rune'],
'method': cmd,
> 'params': params})
E Failed: DID NOT RAISE <class 'pyln.client.lightning.RpcError'>
...
DEBUG:root:Calling commando with payload {'peer_id': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'rune': 'O8Zr-ULTBKO3_pKYz0QKE9xYl1vQ4Xx9PtlHuist9Rk9NCZwbnVtPTAmcmF0ZT0zJnJhdGU9MQ==', 'method': 'getinfo', 'params': {}}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As the comment in set_feerates says: "Technically, this waits until
it's called, not until it's processed."
And the wait_for() line doesn't work, since that condition is already true.
```
@unittest.skipIf(TEST_NETWORK == 'liquid-regtest', "Fees on elements are different")
@unittest.skipIf(
not DEVELOPER or DEPRECATED_APIS, "Without DEVELOPER=1 we snap to "
"FEERATE_FLOOR on testnets, and we test the new API."
)
def test_feerates(node_factory):
l1 = node_factory.get_node(options={'log-level': 'io',
'dev-no-fake-fees': True}, start=False)
l1.daemon.rpcproxy.mock_rpc('estimatesmartfee', {
'error': {"errors": ["Insufficient data or no feerate found"], "blocks": 0}
})
l1.start()
# All estimation types
types = ["opening", "mutual_close", "unilateral_close", "delayed_to_us",
"htlc_resolution", "penalty"]
# Try parsing the feerates, won't work because can't estimate
for t in types:
with pytest.raises(RpcError, match=r'Cannot estimate fees'):
feerate = l1.rpc.parsefeerate(t)
# Query feerates (shouldn't give any!)
wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
feerates = l1.rpc.feerates('perkw')
assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
assert 'perkb' not in feerates
assert feerates['perkw']['max_acceptable'] == 2**32 - 1
assert feerates['perkw']['min_acceptable'] == 253
for t in types:
assert t not in feerates['perkw']
wait_for(lambda: len(l1.rpc.feerates('perkb')['perkb']) == 2)
feerates = l1.rpc.feerates('perkb')
assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
assert 'perkw' not in feerates
assert feerates['perkb']['max_acceptable'] == (2**32 - 1)
assert feerates['perkb']['min_acceptable'] == 253 * 4
for t in types:
assert t not in feerates['perkb']
# Now try setting them, one at a time.
# Set CONSERVATIVE/2 feerate, for max
l1.set_feerates((15000, 0, 0, 0), True)
wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
feerates = l1.rpc.feerates('perkw')
assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
assert 'perkb' not in feerates
> assert feerates['perkw']['max_acceptable'] == 15000 * 10
E assert 4294967295 == (15000 * 10)
tests/test_misc.py:1392: AssertionError
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
On fast machines, we sometimes don't get failures on commando commands.
(*But* we still got "New cmd replacing old" messages, which is how I
realized we weren't freeing them promptly, hence the previous fix).
```
# Should have exactly one discard msg from each discard
> nodes[0].daemon.wait_for_logs([r"New cmd from .*, replacing old"] * discards)
tests/test_plugin.py:2839:
...
> raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E TimeoutError: Unable to find "[]" in logs.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was weird. Here is the message (with \n turned into real new lines):
```
2022-07-24T07:20:08.9144998Z Plugin '/home/runner/work/lightning/lightning/tests/plugins/dblog.py' returned an invalid response to the db_write hook: {"jsonrpc": "2.0", "id": 40, "error": {"code": -32600, "message": "Error while processing db_write: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos", "traceback": "Traceback (most recent call last):
File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 631, in _dispatch_request
result = self._exec_func(method.func, request)
File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 616, in _exec_func
return func(*ba.args, **ba.kwargs)
File \"/home/runner/work/lightning/lightning/tests/plugins/dblog.py\", line 45, in db_write
plugin.conn.execute(c)
sqlite3.IntegrityError: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos
"}}
```
Finally, I realized that we *kill* l2: this means it has updated the
plugin db but not the real db. This is expected: a real backup plugin
would handle this case.
Simply disable the test for this case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As we'll see in the next patch, this wasn't *supposed* to work without dblog-file,
but it did, creating a dblog called "null".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If a rune contains invalid UTF-8, offers (which implements decode) would
produce JSON with invalid UTF-8, which causes lightningd to complain
and kill it, and then die because it's an important plugin.
So don't decode invalid UTF-8!
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were waiting too long for the reconnect to happen (60s default),
which caused this test to time out.
When testing, let's speed up the reconnect.
L2 tried to reconnect but didn't have connection information in its
gossip -- is there a way to ask/save connection data from a node you're
making a channel with that doesn't rely on their node_announcement?
This gives a nice way to ensure your secret is the correct one.
Also, we don't need to suppress VALGRIND for this test, now that the
output races are fixed.
Changelog-Added: `hsmtool`: new command `checkhsm` to check BIP39 passphrase against hsm_secret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Some tests need to inspect it, but most don't, and I suspect I'm missing some
error messages due to this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The error routine returns a string literal in this case, which we can't take().
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These may eventually end up in pyln-client, as they allow talking to
the GRPC interface exposed by cln-grpc, however for now they are used
for testing only. Once we have sufficient API and test coverage we can
move them and leave imports in their place.
If you get the wrong hsm_secret, your node_id will change, and
peers won't know who you are, bitcoind will reject your transaction
signatures, and other madness.
Catch this as soon as it happens, by storing our node_id in the db.
Suggested-by: @cdecker, @fiatjaf
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Config: `lightningd` will refuse to start with the wrong node_id (i.e. hsm_secret changes).
Return a warning message when the pubkey is not specified and there is no matching node in the graph.
This helps people who use Core Lightning as a signer and nothing else.
Changelog-Deprecated: rpc: checkmessage returns an error when the pubkey is not specified and it is unknown in the network graph.
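For a signer-only node with no gossip graph, the caller can simply pass the expected pubkey; a hedged example from a test-style context (assuming the usual `checkmessage` parameters):

```python
# With no network graph available, supply the pubkey explicitly so
# checkmessage can still verify the signature (illustrative call).
res = l1.rpc.checkmessage(message="hello", zbase=sig_zbase,
                          pubkey=l2.info['id'])
assert res['verified']
```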
We close the channel because the payment fulfilment is not totally done, and
we generate 6 blocks, causing it to hit the CLTV deadline.
```
# send some payments, mine a block or two
inv = l2.rpc.invoice(10**4, '1', 'no_1')
l1.rpc.pay(inv['bolt11'])
# l2 attempts to close a channel that it leased, should fail
with pytest.raises(RpcError, match=r'Peer leased this channel from us'):
l2.rpc.close(l1.get_channel_scid(l2))
bitcoind.generate_block(6)
sync_blockheight(bitcoind, [l1, l2])
# make sure we're at the right place for the csv lock
> l2.daemon.wait_for_log('Blockheight: SENT_ADD_ACK_COMMIT->RCVD_ADD_ACK_REVOCATION LOCAL now 115')
tests/test_closing.py:823:
...
lightningd-2 2022-07-17T13:15:34.242Z DEBUG lightningd: Adding block 115: 39d95061935e9fc42b04c86ae60d0cf157765aff4c040f3a8d0b7888db19e015
lightningd-2 2022-07-17T13:15:34.244Z UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Peer permanent failure in CHANNELD_NORMAL: Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv 115 hit deadline
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Dualopend is not listening to the peer fd when it hangs up, so doesn't
notice it's gone. We don't clean up the channel until it's done (usually
a good thing: it could be about to lock it in), but this harms us
here.
Fix the test failure and add a comment.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The only places which should call try_reconnect now are the "connect"
command, and the disconnect path when it decides there's still an
active channel.
This introduces one subtlety: if we disconnect when there's no active
channel, but then the subd makes one, we have to catch that case!
This temporarily reverts "slow" reconnections to fast ones: see next
patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Connectd already does this when we *receive* an error or warning, but
now do it on send. This causes some slight behavior change: we don't
disconnect when we close a channel, for example (our behaviour here
has been inconsistent across versions, depending on the code).
When connectd is told to disconnect, it now does so immediately, and
doesn't wait for subds to drain etc. That simplifies the manual
disconnect case, which now cleans up as it would from any other
disconnection when connectd says it's disconnected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In various places, we assumed that when `connected` is false,
everything is finished. This is not true: we should wait for the
state we expect.
In addition, various places allow reconnections, which interfered
with the logic; suppress them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We want to avoid lost messages in the common cases.
This generalizes our drain code by giving the subds 5 seconds each to
close themselves, while continuing to allow them to send us traffic (if
the peer is still connected) and continuing to send them traffic.
We continue to send traffic *out* to the peer (if it's still
connected) until all subds are gone. We still have a 5 second timer
to close the connection to the peer.
We don't do this "drain period" on reconnects: we kill the old
connection immediately.
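A very rough sketch of the drain idea (Python for illustration only; the real logic lives in connectd's C code, and `forward_traffic`/`finished` are hypothetical helpers):

```python
import time

DRAIN_TIMEOUT = 5  # seconds each subd gets to close itself

def drain(peer, subds):
    deadline = time.monotonic() + DRAIN_TIMEOUT
    # Keep shuttling traffic in both directions until every subd has
    # exited, or the drain window runs out; then drop the peer connection.
    while subds and time.monotonic() < deadline:
        for subd in list(subds):
            forward_traffic(peer, subd)
            if subd.finished():
                subds.remove(subd)
    peer.close()
```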
We fix up one test which was looking for the "disconnect" message
explicitly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's caused by a reconnection race: we hold the new incoming connection while we
ask lightningd to kill the old connection. But under some circumstances we leave
the new incoming hanging (with, in this case, old reestablish messages unread!)
and another connection comes in.
Then, when we later service the long-gone "incoming" connection, channeld
reads the ancient reestablish message and gets upset.
This test used to hang, but now we've fixed reconnection races it is fine.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This adds an X-Fail testcase demonstrating that currently the port of
a DNS announcement is not set to the corresponding network port (in
this case regtest), but is instead set to 0.
Changelog-None
This is a bit weird since it lives in the offers plugin, but it works
well. This should make runes much more approachable for people!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I'm assuming that nobody wants a rate slower than 1 per minute; we can
introduce 'drate' if we want a per-day kind of limit.
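For example (hedged: the exact rune-creation parameters may differ), a rune restricted to one command per minute could be minted and used roughly like this:

```python
# Mint a rune that only allows 1 command per minute, then use it over
# commando (illustrative; the restrictions format is an assumption).
rune = l1.rpc.call('commando-rune', {'restrictions': [['rate=1']]})
l2.rpc.call('commando', {'peer_id': l1.info['id'],
                         'rune': rune['rune'],
                         'method': 'getinfo',
                         'params': {}})
```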
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is needed for invoice, which can be asked to commit to giant descriptions
(though that's antisocial!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `commando` a new builtin plugin to send/recv peer commands over the lightning network, using runes.
We have them split over common/param.c, common/json.c,
common/json_helpers.c, common/json_tok.c and common/json_stream.c.
Change that to:
* common/json_parse (all the json_to_xxx routines)
* common/json_parse_simple (the simplest json parsing routines, for the cli too)
* common/json_stream (all the json_add_xxx routines)
* common/json_param (all the param and param_xxx routines)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was introduced to allow creating a shared secret, but it's better to use
makesecret which creates unique secrets. getsharedsecret being a generic ECDH
function allows the caller to initiate conversations as if it was us; this
is generally OK, since we don't allow untrusted API access, but the commando
plugin had to blacklist this for read-only runes explicitly.
Since @ZmnSCPxj never ended up using this after introducing it, simply
remove it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Removed: JSONRPC: `getsharedsecret` API: use `makesecret`
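For callers that were using `getsharedsecret` just to derive a shared value, `makesecret` derives a unique, node-specific secret instead; a hedged example:

```python
# Derive a deterministic secret from some hex-encoded label bytes.
# Unlike getsharedsecret, this can't be abused to impersonate us in
# ECDH conversations (parameter name assumed to be 'hex').
secret = l1.rpc.makesecret(hex=b'my-label'.hex())['secret']
```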
We do smoothing, so waiting for this to hit the target (50k+) was taking
longer than my TIMEOUT=15; here we increase the speed at which it hits
exit velocity, so to speak.
We clean up our output tracking for timeout txs when the peer's
htlc_timeout self-expiry is hit; we'd also log its spend if we happen to
see it get spent.
This is a bit of a race as they can't spend it until the locktime is
available. Hence the flakiness in tests that expected the `htlc_timeout`
to *not* be spent.
Instead, we only log an external's `htlc_timeout` spend in the case
where we also immediately register another output to track for it
(which only happens when said htlc is stealable).
Fixes #5405
In-Collab-With: @ddustin
Previously we wouldn't notify when a channel moved into state
"CHANNELD_AWAITING_LOCKIN", as this is the original state (so there's
no movement between states). This meant it was impossible to track when
a channel's commitment txs have been exchanged and we're waiting for
onchain confirmation.
It's useful to have notice of this initialization though, all in one
place, so that the `channel_state_changed` notification can successfully
track the entire lifecycle of a channel, from inception to close.
Note that for v2 "dual-funded" channels, we already notify at the same place, at
"DUALOPEND_AWAITING_LOCKIN" (the initial state for a dualopend channel
is "DUALOPEND_OPEN_INIT" -- this is the only state we don't get notified
at now...)
Changelog-Added: Plugins: `channel_state_changed` now triggers for a v1 channel's initial "CHANNELD_AWAITING_LOCKIN" state transition (from prior state "unknown")
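A plugin can now watch the whole lifecycle in one place; a minimal pyln-client sketch (payload field names assumed to match the standard notification):

```python
from pyln.client import Plugin

plugin = Plugin()

@plugin.subscribe("channel_state_changed")
def on_state_changed(plugin, channel_state_changed, **kwargs):
    # Now also fires for the initial CHANNELD_AWAITING_LOCKIN transition.
    plugin.log("channel {} moved {} -> {}".format(
        channel_state_changed.get('channel_id'),
        channel_state_changed.get('old_state'),
        channel_state_changed.get('new_state')))

plugin.run()
```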
Fixes #5271
In-Collaboration-With: Base58 'n Coding Seminar Participants
Changelog-Changed: `fundchannel` now errors if you try to buy a liquidity ad but don't have `experimental-dual-fund` enabled
Since it's a deprecation, we simply ignore one, rather than properly
checking they match etc.
Fixes: #5386
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We deprecated msatoshi, but getroute() still returns both (unless
--allow-deprecated-apis=False), so we need to accept both.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a side-effect of fixing aging: sometimes, we age our
rcvd_filter cache too fast, and thus re-xmit. This breaks
our test, since it used dev-disconnect on the channel_announce,
but that closes to l3, not l1!
```
> assert l1.rpc.listchannels()['channels'] == []
E AssertionError: assert [{'active': T...ags': 1, ...}] == []
E Left contains 2 more items, first extra item: {'active': True, 'amount_msat': 100000000msat, 'base_fee_millisatoshi': 1, 'channel_flags': 0, ...}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5403
There's one case where we can present more information, so do that, and
fix documentation.
Changelog-Added: JSON-RPC: `listforwards` now shows `out_channel` in more cases: even if it couldn't actually send to it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5329
Fixes 25699994e5 (pytest: fix flake in test_node_reannounce).
Converting list->set->list does not give deterministic order, so we can
end up with the two lists differing only in order:
```
# May send its own announcement *twice*, since it always spams us.
msgs2 = list(set(msgs2))
> assert msgs == msgs2
E AssertionError: assert ['01012ff5580...000000000000'] == ['01014973d81...000000000000']
E At index 0 diff: '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000' != '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000'
E Full diff:
E [
E + '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000',
E - '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E ]
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Got complaints about us hanging up on some nodes because they don't respond
to pings in a timely manner (e.g. ACINQ?), but that turned out to be something
else.
Nonetheless, we've had reports in the past of LND badly prioritizing gossip
traffic, and thus important messages can get queued behind gossip dumps!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: connectd: give busy peers more time to respond to pings.
Because we expire the cache every flush, in DEVELOPER mode that can happen in
just over a second. If gossipd takes a while to process the gossip,
this can mean we actually forget we received it from the peer.
Easiest fix is to run this test in non-DEVELOPER mode.
```
# With DEVELOPER, this is long enough for gossip flush.
time.sleep(2)
> assert not l3.daemon.is_in_log(r'\[OUT\] 0100')
E AssertionError: assert not '2022-06-30T06:00:31.031Z 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: [OUT] 0100294834e94aa2407ace65373c84c2a8056d67a6778c22bd43f03a9000d8a01f075c3996ab6c88951bd9805adbfde78292022d7514ffcec37d5e466ef5ebd235c87ffa8a8567904502cd65ae913dbb8014509f442a6e37b00fd59fa3a06811a4965f45df1612d3a32b2566001430cfec4346cc021bd0a1bf6a87417dbc29e0715f16c4f9dc5a01bff07368ba8f54cf9813c3fc8f9f8b8c8005da2a18021f322a9a168af8f82d9deba49ced51bb26a9abc001bbad314a15baccc87c685e9302533c2e86a7b9dfe90bd4cb6be5c2da375be8c4a535a63cacff0544957a34ca865fdb61f9198d19dfd990a0fcbf8de39fa2c0fc54435c16e74df2b49fe3b6e905d599000006226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f0000670000010001022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d590266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c0351802e3bd38009866c9da8ec4aa99cc4ea9c6c0dd46df15c61ef0ce1f271291714e5702324266de8403b3ab157a09f1f784d587af61831c998c151bcc21bb74c2b2314b'
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It actually only sets the prefix for the lightningd core log messages;
the other logs have their own prefix.
Make it a real, process-wide prefix which actually goes in front of the timestamp.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: options: `log-prefix` now correctly prefixes *all* log messages.
Looking at the CI logs, it seems like it took over 5 seconds, so
the unilateral close occurred instead of the expected rejection
of the WIRE_SHUTDOWN reply. Make it bulletproof.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
That's useful for "tell me everything about this node" debugging.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5348
Changelog-Added: lightningd: `log-level=debug:<partial-nodeid>` supported to get debug-level logs for everything about a peer.
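For instance, to get debug logs for everything about a single peer, the option takes a partial node id (here the 022d22… node seen in the logs above):

```
log-level=debug:022d22
```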
If we can't broadcast the tx, confirm that it didn't end up in the
mempool or the utxo set before throwing an error.
Note that this doesn't protect us in the case where the funding
output has already been *spent*... but that's extremely rare, right?
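A hedged sketch of that check using standard bitcoind RPCs (not the actual lightningd code):

```python
def tx_made_it(bitcoind_rpc, txid, vout=0):
    """True if the funding tx landed in the mempool or the utxo set,
    even though sendrawtransaction reported a failure."""
    if txid in bitcoind_rpc.getrawmempool():
        return True
    # gettxout returns None if the output is unknown or already spent
    return bitcoind_rpc.gettxout(txid, vout) is not None
```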
Fixes #5296
Reported-By: @rustyrussell
Collab-With: @vincenzopalazzo
This grows the routing state in order to index both okay-to-broadcast
and rate-limited gossip. The gossip_store also logs the rate-limited
gossip if useful. This allows the broadcast of the last non-rate-limited
gossip.
This will be used to decouple internal use of gossip from what is
passed to gossip peers. Updates GOSSIP_STORE_VERSION to 10.
Changelog-Changed: gossip_store updated to version 10.
With zeroconf we have to duplicate the `local_channel_announcement`
since we locally announce the aliased version, and then on the first
confirmation we also add the funding scid version.