(There are plenty of strings that convert into 2 bytes of UTF-8
without being two ASCII characters, and there are plenty of
sequences of two ASCII characters that aren't printable.)
The functions that handle OpenSSH-formatted keys now no longer read or
write from disk. This commit updates their names and doc strings to stop
suggesting they do.
This moves the filesystem calls from the `ssh` module to
`ArtiNativeKeyStore`. While `ArtiNativeKeyStore` shouldn't be concerning
itself with filesystem operations either, that refactoring will be
tackled separately (see arti#899).
- This adds a new crate, `tor-geoip`, which can parse and perform
lookups in the GeoIP database C-tor already uses (generated by a
maintenance utility in the C-tor codebase).
- We embed a copy of C-tor's databases with the crate and use
`include_str!` to ship them with the binary, bloating its size
somewhat.
- This does, however, solve the problem of figuring out how to
distribute these.
- The plan is to gate this functionality behind a feature flag anyway,
so the cost should be nil unless explicitly opted into.
Part of tpo/core/onionmasq#47.
The `keymgr` module selects one of the key manager implementations
(dummy or "real") and exposes all the APIs we need, so we can remove
all of the cfgs related to the `keymgr` feature from `client.rs`.
Part of #897
This moves the key manager API selection (dummy vs "real" impl) into the
`keymgr` module. The module exports the dummy API if the `keymgr`
feature is disabled, and the impl from `tor-keymgr` otherwise.
Part of #897
`insert` and `remove` should return an error rather than `Ok(())`, as
`Ok(())` implies the key was stored/removed, which is impossible in the
no-op implementation.
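A minimal sketch of the feature-gated selection plus a fallible no-op; module and method names here (`dummy`, `real`, `insert`, `remove`) are illustrative stand-ins, not Arti's exact API:

```rust
mod dummy {
    /// No-op key manager used when the `keymgr` feature is disabled.
    pub struct KeyMgr;

    impl KeyMgr {
        /// Returns an error: claiming `Ok(())` would imply the key was stored.
        pub fn insert(&self, _key: &[u8]) -> Result<(), &'static str> {
            Err("key storage is not supported in this build")
        }

        /// Likewise, removal is impossible in the no-op implementation.
        pub fn remove(&self, _id: &str) -> Result<(), &'static str> {
            Err("key removal is not supported in this build")
        }
    }
}

// The module exports one implementation or the other; callers never
// mention the feature again. (`real` stands in for tor-keymgr's impl.)
#[cfg(feature = "keymgr")]
pub use real::KeyMgr;
#[cfg(not(feature = "keymgr"))]
pub use dummy::KeyMgr;

fn main() {
    let mgr = KeyMgr;
    assert!(mgr.insert(b"secret").is_err());
    assert!(mgr.remove("client/alice").is_err());
    println!("dummy key manager correctly refuses to store keys");
}
```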
We currently initialize the `ArtiNativeKeyStore` with a dummy root dir,
so when `ArtiNativeKeyStore` starts validating directories, this code
will start to fail. Let's preemptively ignore any errors coming from
`ArtiNativeKeyStore::new`. This is temporary and will be removed when we
introduce the key store config (and a real default value for the
keystore root dir).
This is mostly code movement; you may want to review it with
`--color-moved`.
I'm doing this so we can also use the function in netdoc for
looking up hsdesc authentication.
This could never be a full rustdoc test, since rustdoc never
runs tests for private items. Even if it were a rustdoc test,
it would still use a bunch of types that aren't exposed in the
right places, invoke nonexistent functions, and assume a `self`
that isn't there.
In lieu of writing a new untested test, I've added a note to refer
the developer to where they can find working example code.
This algorithm only looks at circuits until it finds one that
satisfies our needs. To get a random circuit, it just randomizes
the starting point within the pool.
This optimization may help if we let circuit pools grow large.
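The scan-from-a-random-offset idea can be sketched as follows; the predicate and the caller-supplied random start index are illustrative, not the actual pool code:

```rust
/// Return the first circuit (scanning circularly from `start`) that
/// satisfies `suitable`. With a uniformly random `start`, the choice is
/// roughly uniform without inspecting the whole pool in the common case.
fn pick_from<'a, T>(
    pool: &'a [T],
    start: usize,
    suitable: impl Fn(&T) -> bool,
) -> Option<&'a T> {
    (0..pool.len())
        .map(|i| &pool[(start + i) % pool.len()])
        .find(|&c| suitable(c))
}

fn main() {
    // Stand-in "circuits": a circuit is suitable if it is even.
    let pool = [1, 3, 4, 6, 7];
    // Starting at index 3, we check 6 first and take it immediately.
    assert_eq!(pick_from(&pool, 3, |c| c % 2 == 0), Some(&6));
    // No suitable circuit at all:
    assert_eq!(pick_from(&pool, 0, |c| *c > 100), None);
    println!("ok");
}
```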
Previously we'd always try to keep 8 circuits ready. That doesn't
make sense if we are super-busy. Instead, if we run out of
circuits, we double the amount that we try to keep ready, and if we
never go under 80% of our target number, we halve the number we try
to keep ready.
We limit the rate of change here, to make sure that we aren't
flapping too much or shrinking too aggressively.
This algorithm is still a mite arbitrary, and will need tuning in
the future.
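One adjustment step of that rule can be sketched like this; the clamp bounds and the once-per-interval rate limiting are assumptions, not Arti's tuned values:

```rust
const MIN_TARGET: usize = 2; // illustrative clamp values
const MAX_TARGET: usize = 512;

/// One adjustment step for the number of circuits to keep ready.
/// `ran_out`: we hit zero ready circuits since the last adjustment.
/// `lowest_seen`: the lowest ready-circuit count observed since then.
fn adjust_target(target: usize, ran_out: bool, lowest_seen: usize) -> usize {
    if ran_out {
        (target * 2).min(MAX_TARGET) // demand outstripped supply: grow
    } else if lowest_seen * 5 >= target * 4 {
        (target / 2).max(MIN_TARGET) // never dipped below 80%: shrink
    } else {
        target // dipped into the band but never ran dry: hold steady
    }
}

fn main() {
    assert_eq!(adjust_target(8, true, 0), 16); // ran dry: double
    assert_eq!(adjust_target(8, false, 8), 4); // always >= 80% full: halve
    assert_eq!(adjust_target(8, false, 5), 8); // dipped below 80%: unchanged
    println!("ok");
}
```

Calling such a step at most once per measurement interval is one way to limit flapping, as the commit describes.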
(Taken from tpo/core/arti!1113 and squashed by Ian Jackson, with
conflicting hunk in StreamPrefs struct skipped.
The setter name is wrong, the cfg feature is wrong, there are no
docs, and the TODO is still there. These will be fixed in a moment.)
We need to allow some lints in the dummy key manager because its
implementation needs to mirror that of `tor_keymgr::KeyMgr` (so we can't
apply the API changes suggested by clippy).
TorClient now only uses the tor_keymgr::KeyMgr implementation if the
keymgr experimental feature is enabled. If the feature is disabled, a
dummy key manager implementation is used.
The new `keymgr` feature depends on `onion-client`, because the key
manager is only used for HS client auth.
This simplifies usage quite a bit and will enable us to implement a
dummy `KeyMgr` that doesn't depend on the error types from tor-keymgr
(which will replace the "real" `KeyMgr` if the keymgr feature is
disabled).
The `HsClientSpecifier` and `HsClientSecretKeySpecifier` are moved to
`tor-hsclient`. The HS service secret key specifier stubs are moved to
`tor-hsservice`.
The `KeyMgr` is now initialized with an `ArtiNativeKeyStore` built from
an invalid key store root dir (this is alright for the purposes of this
proof-of-concept, since `ArtiNativeKeyStore::new` won't fail as it
doesn't currently validate the keystore root dir).
This means `KeyMgr` users don't need to specify the underlying key type
(e.g. `ed25519::Keypair`) when retrieving keys. Instead, they can just
specify the type required (as long as it implements `ToEncodableKey`),
e.g. `HsClientIntroAuthKeypair`.
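The generic-retrieval idea can be sketched as follows; the trait shape and type names are loose stand-ins for Arti's API, not its exact signatures:

```rust
trait ToEncodableKey {
    /// The underlying key type, e.g. an ed25519 keypair.
    type Key;
    fn from_encodable_key(key: Self::Key) -> Self;
}

/// Stand-in for a low-level key type like `ed25519::Keypair`.
struct RawKeypair(u8);

/// A high-level, role-specific key type.
struct HsClientIntroAuthKeypair(RawKeypair);

impl ToEncodableKey for HsClientIntroAuthKeypair {
    type Key = RawKeypair;
    fn from_encodable_key(key: RawKeypair) -> Self {
        HsClientIntroAuthKeypair(key)
    }
}

struct KeyMgr;

impl KeyMgr {
    /// The caller names only `K`; the underlying key type `K::Key` is
    /// implied by the impl and never appears at the call site.
    fn get<K: ToEncodableKey>(&self, raw: K::Key) -> K {
        K::from_encodable_key(raw)
    }
}

fn main() {
    let mgr = KeyMgr;
    let key: HsClientIntroAuthKeypair = mgr.get(RawKeypair(7));
    assert_eq!((key.0).0, 7);
    println!("ok");
}
```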
This adds a proof-of-concept `SshKeyType::read_ssh_format_erased`
implementation for `KeyType`. The implementation decodes an OpenSSH key
and converts it to one of the key types used internally by Arti. The
value returned is type-erased, and will be downcast later down the line
by the `KeyMgr` (note: `KeyMgr` doesn't exist yet).
The `SshKeyType::write_ssh_format` will be implemented once these APIs
are a bit more stable.
It's not perfectly clear what this error type should be, so let's use
`ConfigBuildError` for now (it makes things easier in `arti-client`, as
we already have an `ErrorDetail` for it).
Formerly, every time we wanted to launch a new connection, we had
to give the RpcMgr a TorClient. The connection would hold that
TorClient until a session was authenticated, and then would wrap
it in a Session and put it in the object map.
Now, the RpcMgr holds a Box<dyn Fn()...> that knows how to
create Sessions. When a connection is authenticated, it
asks the Mgr to make it a new session. This lets us make it
clearer that the TorClient simply can't be given out until the
connection is authenticated. Later, it will let us create
more types of Session objects under more complicated rules.
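The factory pattern described above can be sketched like this; the type names are stand-ins for the real `RpcMgr`, `Session`, and `TorClient`:

```rust
struct TorClient(&'static str);
struct Session {
    client: TorClient,
}

struct RpcMgr {
    /// Knows how to make a Session, so connections never hold a
    /// TorClient before they are authenticated.
    session_factory: Box<dyn Fn() -> Session + Send + Sync>,
}

impl RpcMgr {
    fn new<F>(factory: F) -> Self
    where
        F: Fn() -> Session + Send + Sync + 'static,
    {
        RpcMgr { session_factory: Box::new(factory) }
    }

    /// Called only once a connection has authenticated.
    fn create_session(&self) -> Session {
        (self.session_factory)()
    }
}

fn main() {
    let mgr = RpcMgr::new(|| Session { client: TorClient("client-0") });
    // ... authentication happens here ...
    let session = mgr.create_session();
    assert_eq!(session.client.0, "client-0");
    println!("ok");
}
```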
It's now not actually possible to write code that doesn't work, even
if `Tr` *isn't* 'static, because of the bounds on `CastTable::insert`.
I tried to produce a non-working setup with a non-static `Simple`, but
you can't implement `Object` for such a thing. Removing 'static from
`Object` would stop the downcasts from `Any` to `Object` from working.
Prior to the new typesafe insert, this change
- let f: fn(&dyn $crate::Object) -> &(dyn $traitname + 'static) = |self_| {
+ let f: fn(&dyn $crate::Object) -> &(dyn $traitname) = |self_| {
would result in a runtime crash. Now it results in a compiler error.
This was locally bound to `S` in one place. Bind and use it throughout.
Since this is an RPC object, `O` is a better name.
In each item, use the description once and thereafter just the name.
This adds a Weak reference from Connection to Mgr, makes DispatchTable
mutable, and makes a few other changes as discussed between me and
Diziet the other week.
I bet we are not done tweaking this, but I hope it's a step forwards.
* Return a more informative error type (instead of Option)
* Check that time periods are an integer number of seconds
* Decide not to change the semantics of an argument.
(I'm not removing it entirely since maybe we _should_ use it, and
maybe we _will_ as we do services. I've added a TODO HS for
removing it or using it, and removed the TODO HS at the head of
pk.rs about making sure that all the key types in the module really
belong there.)
I think it isn't actually a great idea for HsIdParseError to
implement ErrorKind, since the actual ErrorKind would depend
entirely on where the problematic ID came from.
The type is a bit odd but this is a result of the underlying protocol.
I don't feel like inventing `IntroduceAckSuccess` that contains only
the extensions.
Onion service hops (pointlessly) use SHA3-256 for their
authentication, but they truncate it to 20 bytes (assuming I'm
reading the C right.)
See torspec#204 for clarification here.
This is a rather tricky piece of functionality. It works as
follows.
We introduce a `CastTable` type. Each `CastTable` tells us how to
downcast `dyn Object` for objects of a single concrete type.
The `Object` trait now has a `get_casttable` method that returns
an empty `CastTable` by default.
`CastTable` is, internally, a map from the `TypeId` of the target
dyn Trait reference type to a function
`fn(&dyn Object) -> &dyn Trait`. These functions are stored as
`Box<dyn Any + ...>`. (They are Boxed because they may refer to
generic functions, which you can't get a static reference to,
and they're Any because the functions have different types.)
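The map described above can be sketched in miniature; `Object`, `Greet`, and `Simple` here are illustrative stand-ins, and the real table is of course generated by the macro rather than written by hand:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

/// Stand-ins for the real `Object` trait and a trait we downcast to.
trait Object: Any {
    fn as_any(&self) -> &dyn Any;
}
trait Greet {
    fn hello(&self) -> String;
}

struct Simple(String);
impl Object for Simple {
    fn as_any(&self) -> &dyn Any {
        self
    }
}
impl Greet for Simple {
    fn hello(&self) -> String {
        format!("hello, {}", self.0)
    }
}

/// One concrete type's table: the TypeId of the target `&dyn Trait`
/// type, mapped to a boxed `fn(&dyn Object) -> &dyn Trait`.
#[derive(Default)]
struct CastTable {
    table: HashMap<TypeId, Box<dyn Any>>,
}

impl CastTable {
    fn insert_greet(&mut self, f: fn(&dyn Object) -> &(dyn Greet + 'static)) {
        self.table
            .insert(TypeId::of::<&'static dyn Greet>(), Box::new(f));
    }
    fn cast_to_greet<'a>(&self, obj: &'a dyn Object) -> Option<&'a (dyn Greet + 'static)> {
        let f = *self
            .table
            .get(&TypeId::of::<&'static dyn Greet>())?
            .downcast_ref::<fn(&dyn Object) -> &(dyn Greet + 'static)>()?;
        Some(f(obj))
    }
}

/// Roughly what the macro would generate for `Simple`.
fn simple_as_greet(o: &dyn Object) -> &(dyn Greet + 'static) {
    o.as_any().downcast_ref::<Simple>().expect("wrong concrete type")
}

fn main() {
    let mut table = CastTable::default();
    table.insert_greet(simple_as_greet);
    let obj = Simple("world".into());
    let greet = table.cast_to_greet(&obj).expect("caster registered");
    assert_eq!(greet.hello(), "hello, world");
    println!("{}", greet.hello());
}
```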
The `decl_object!` macro now implements `get_casttable` as
appropriate. (The syntax is a bit janky, but that's what we get
for not using derive_adhoc.) For non-generic types, `get_casttable`
uses a `Lazy<CastTable>` to initialize a `CastTable` exactly once.
For generic types, it uses a `Lazy<RwLock<HashMap<..>>>` to
build one `CastTable` per instantiation of the generic type.
This could probably be optimized a bit more, the yaks could be
shaved in a more scintillating hairstyle, and the syntax for
generic `decl_object` could definitely be improved.
This code tries to fill in some TODO HS code, replacing it with a
lot more code with a bunch more TODO HS comments. Hopefully the
expansions of the new TODO HS comments should be simpler.
We want the ability to send the same handshake request in parallel
on multiple introduce circuits. This implies encoding the client
handshake more than once.
(Sadly we can't _actually_ do this in the protocol as it stands,
since the onion service can use a separate KP_hss_ntor for each
introduction point; I'll add a comment to that effect later.)
We need the fix from [82d69902], which first appeared in async-trait
version 1.54. (Technically we only need this fix in tor-hsclient,
but we may as well update our minimum async-trait version everywhere.)
[82d69902]: 82d6990253
When we break the 1:1 relationship of message and cell, we'll want
this API to take messages, not cells.
This API is experimental, so we don't need to call it a semver
break.
Closes #881.
Formerly we said that it would not return until the handler
was uninstalled. This is incorrect: it returns as soon as the
message is sent and the handler installed.
Closes #885.
The comment was entirely wrong. send_control_message returns as soon
as the message has been enqueued. So we actually *need* to wait for
the oneshot.
Also, given that a circuit collapse doesn't give us a real error, we
plumb the error through the oneshot. Introduce an IIFE to capture the
error from the decoding.
We were able to get as far as we have, merely because all the new code
uses just Arc<ClientCirc> rather than the mockable version.
We want to change that, so we need to mock this function too.
This contains code to:
* Iterate over introduction points
* Make multiple attempts to connect
* Apply timeouts to the various phases of each attempt
* Establish a rendezvous point
* Represent errors that occur during the above
It provides places to add:
* Implementation of the INTRODUCE1/INTRODUCE_ACK handshake
* Reception of RENDEZVOUS2 and actual end-to-end circuit establishment
* Recording of the outcome of connection attempts via particular IPTs
* Using previous IPT outcome information for selecting IPTs to try
* Tests of the new code (although more mocking will probably be needed)
Much of this code works with a fixed type ClientCirc rather than going via
the Mockable traits. That is wrong, and it will be fixed later.
It returns a borrow (so whatever is passed remains borrowed), and the
next phase will perhaps need to mutate other parts of the data, so
we must pass only what is needed.
Apropos a question that arose on IRC, to which I felt the answer
wasn't 100% unambiguous.
Also, reference the usual implementation (it can't be a link because
it's an upward reference).
The consensus includes a listing for clients and for relays,
saying which protocol versions are _required_ for participation on
the network, and which versions are _recommended_. We have been
parsing this, but not yet exposing it.
This commit adds accessors to expose it, since we'll need that in
order to create CircTargets for introduction points and rendezvous
points.
The actual decoding here is just a placeholder. The important part
is that we either get a (SessionId, StreamId) tuple out of the
request, or treat it as part of an isolation token.
This commit has a few TODOs for additional things that we'll need
in order to build out our design.
These identifiers are actually only "global" with respect to a given
`RpcMgr`, but they should not be forgeable or reusable across RpcMgr
objects. We're going to use them so that we have a kind of identifier
for `TorClient`s that we can expose to SOCKS.
I am hoping we can merge this as a "TODO (Diziet)", even though I
think it may be controversial. I.e., merging this doesn't represent a
decision to do as I suggest.
Also, no longer talk about handlers being "installed". That's not
something that's exposed by this API.
And, say that `send_control_message` can be called again only
after *`send_control_message`* returns, not when `handle_msg` has
returned `UninstallHandler`. IMO this makes more sense.
Explain that we can't maintain a continuous watch while holding a
conversation with the peer. (This is surely an API bug.)
Rationale: Our weak-vs-strong design is a bit confused at the moment
due to concerns about deduplication and capability semantics. It's
not clear that a general "change strong to weak" method is
compatible with what we want to provide.
I've made some design choices here:
* Reserving "rpc" as a prefix for post-authentication
functionality that is not arti-specific.
* Declaring these to be methods on the session rather than methods
on the objects themselves.
There's a problem with defining an API to drop a weak reference; see
comment in code.
This will make it easier to change the semantics of what exactly we
return, whether it has to be/contain a client, whether you can use
it to look up all the live objects, etc.
Previously, there was a bug in the way that our code used our SOCKS
implementations. If the buffer used for a SOCKS handshake became full
without completing the handshake, then rather than expanding the buffer
or closing the connection, our code would keep trying to read into the
zero-byte slice available in the full buffer forever, in a tight loop.
We're classifying this as a LOW-severity issue, since it is only
exploitable by pluggable transports (which are trusted) and by
local applications with access to the SOCKS port.
Closes #861.
Fixes TROVE-2023-001.
Reported-By: Jakob Lell <jakob AT srlabs DOT de>
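The shape of that loop can be shown with a toy reader; this illustrates the bug class with stand-in code, not Arti's actual SOCKS implementation:

```rust
use std::io::{Cursor, Read};

/// The shape of the bug: reading into the unfilled tail of a buffer
/// without noticing when that tail is empty.
fn read_step(r: &mut impl Read, buf: &mut [u8], filled: &mut usize) -> std::io::Result<usize> {
    // BUG (as described above): if *filled == buf.len(), this is a
    // zero-byte slice; read() returns Ok(0), and a caller that loops
    // "until the handshake completes" spins forever.
    let n = r.read(&mut buf[*filled..])?;
    *filled += n;
    Ok(n)
}

fn main() -> std::io::Result<()> {
    let mut input = Cursor::new(vec![1u8; 16]);
    let mut buf = [0u8; 4];
    let mut filled = 0;

    // Fill the buffer; the "handshake" is still incomplete.
    while filled < buf.len() {
        read_step(&mut input, &mut buf, &mut filled)?;
    }

    // Every further "progress" step now reads zero bytes: the fixed code
    // must treat a full buffer (or Ok(0)) as an error instead of retrying.
    let n = read_step(&mut input, &mut buf, &mut filled)?;
    assert_eq!(n, 0);
    assert_eq!(filled, buf.len());
    println!("full buffer yields Ok(0); retrying here would spin forever");
    Ok(())
}
```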
Method dispatch rules mean that if the receiver type of the actual
function changes, `self.call()` can turn into a purely-recursive call
which overflows the stack.
Async Rust doesn't have the usual warning for this situation :-(.
UFCS is clumsier but doesn't have that problem because it involves
much less magical dispatch. Instead of generating a recursive call
which overflows the stack, it fails to compile.
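The dispatch preference can be seen in miniature with illustrative types (this is a demonstration of the language rule, not the actual circuit code):

```rust
trait Call {
    fn call(&self) -> &'static str;
}

struct Circ;

impl Call for Circ {
    fn call(&self) -> &'static str {
        "trait impl"
    }
}

// An inherent method added later silently wins method-call dispatch.
impl Circ {
    fn call(&self) -> &'static str {
        "inherent impl"
    }
}

fn main() {
    let c = Circ;
    // `c.call()` resolves to whatever the receiver offers first; if the
    // receiver type changes, this can quietly become a different (or, in
    // the async wrapper case above, a recursive) function.
    assert_eq!(c.call(), "inherent impl");
    // UFCS pins the implementation; if that impl disappears, this fails
    // to compile instead of dispatching somewhere surprising.
    assert_eq!(Call::call(&c), "trait impl");
    println!("ok");
}
```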
The idea here is that we want to make DataStream visible to the
RPC system without requiring that the RPC session hold the
DataStream itself (or the Reader, or the Writer). We could solve
this problem by making _all_ the state in the DataStream shared,
but that would introduce unnecessary extra locking in our critical
path.
Instead we're creating the notion of a "control handle" that lets
you manage and observe a stream without actually owning the stream.
Right now the only supported functionality is asking for the
stream's circuit.
Part of #847
Fixes warning from
cargo doc --document-private-items --all-features --workspace
This was evidently overlooked during the recent replacement of unescorted
private keys in the code.
I could also have stopped using `::default()` to construct this
(testing-only) object, but I think it makes more sense to turn it
into a non-unit object.
- We make the tor-guardmgr "We have found that {} is usable" line
include the word "guard", otherwise it doesn't appear very useful to a
user in safe logging mode, since the guard gets replaced with
[scrubbed].
- The "Actually got an end cell..." message is downgraded to DEBUG.
Per #798, we want to make sure that we never pass around an
`ed25519::SecretKey`; only an `ed25519::Keypair` (or
`ExpandedKeypair`). This is because, when you're computing an
ed25519 signature, you have to use the public key as one of your
inputs, and if you ever use a mismatched public key you are
vulnerable to a nonce reuse attack.
(For more info see
https://moderncrypto.org/mail-archive/curves/2020/001012.html )
This is like an `ed25519::Keypair`, except that instead of a
`SecretKey` it contains an `ExpandedSecretKey`.
We'll be using this to implement #798, where we impose a rule that
there must be no "unescorted" ed25519 secret keys.
We never want to create one of these from its parts except when we
are testing it; we only want to forward an Introduce1 message with a
new command on it.
There were two bugs here that made the behavior unlike that of C
tor: we had swapped the MAC inputs, and we had forgotten to include
the public key X in the input.
I think that these Input structs had been defined so that we could
use hs_ntor interchangeably with other handshakes. The trouble is,
though, that it doesn't really work like any other handshakes we
have.
Note that some of the invocations for this function seem to put the
key and the message in a questionable order. But that's a thing to
figure out later, while debugging.
We'll want this because our hs_ntor handshake requires access to an
encoded version of the header independent from the actual encrypted
message.
Part of #866.
This commit changes certain log messages to debug for recoverable errors
and a warn if all such attempts fail, in order to not clutter up the
info messages that end users get to see.
This function will be used to look up a relay by a set of LinkSpecs
given from an incoming HsDesc or INTRODUCE2 message. It differs
from other "lookup relay by IDs" functions in that it needs to be
able to return "here's a relay", "couldn't find a relay", or
"learned that this relay is impossible."
Closes #855: This is the only new API needed for ChanTarget
validation, I think.
Now ClientCirc is no longer `Clone`, and the things that need it
to be `Clone` instead return and use an `Arc<ClientCirc>`.
We're doing this so that ClientCirc can participate in the RPC
system, and so that its semantics are more obvious.
Closes #846.
Thanks to the type system, this was a much simpler refactoring than
I had feared it would be.
We want each ID to have a unique form every time it is given out,
so that you can't use ID==ID to check whether Object==Object. (See
discussions leading to #848.)
We'd also like the form of object IDs to be a little annoying to
analyze, to discourage people from writing programs that depend on
their particular format. (We are reserving the right to change the
format whenever we want.)
We _don't_ want to use any cryptography here (yet), lest somebody
think that this is an actual security mechanism. (This isn't for
security; it's for encouraging developers to treat IDs as opaque.)
With that in mind, we now lightly obfuscate our generational indices
before returning them.
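One way to do such light obfuscation is an invertible mix of a multiply, a rotate, and an xor; the scheme and constants below are illustrative, not necessarily what Arti actually uses:

```rust
/// Lightly obfuscate an index. NOT a security mechanism: just a cheap
/// bijection so IDs don't visibly encode the underlying arena slot.
/// The constants here are illustrative.
const MUL: u64 = 0x9e37_79b9_7f4a_7c15; // odd => invertible mod 2^64
const XOR: u64 = 0x5851_f42d_4c95_7f2d;

fn obfuscate(index: u64) -> u64 {
    index.wrapping_mul(MUL).rotate_left(17) ^ XOR
}

fn deobfuscate(code: u64) -> u64 {
    (code ^ XOR).rotate_right(17).wrapping_mul(inverse(MUL))
}

/// Multiplicative inverse of an odd number mod 2^64, by Newton
/// iteration: each step doubles the number of correct low bits.
fn inverse(a: u64) -> u64 {
    let mut x = a; // odd `a` is its own inverse mod 8: 3 bits correct
    for _ in 0..5 {
        x = x.wrapping_mul(2u64.wrapping_sub(a.wrapping_mul(x)));
    }
    x
}

fn main() {
    for index in [0u64, 1, 2, 42, u64::MAX] {
        let code = obfuscate(index);
        assert_eq!(deobfuscate(code), index);
    }
    // Consecutive indices no longer look consecutive.
    assert_ne!(obfuscate(1).wrapping_sub(obfuscate(0)), 1);
    println!("ok");
}
```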
Per discussion referenced at #848, we want each operation that
returns a strong object ID to return a new, distinct strong ID.
Note that we no longer need to put strong and weak references in the
same arena; we can clean this code up a lot down the road.
Now we generate object IDs that we can parse. This is about to be
obsolete once we change how we generate objects and their IDs for #848,
but we may as well start from a working state.
This does the following:
- Gives every crate a `full`.
- Causes every `full` to depend on `full` from the lower-level
crates.
- Makes every feature listed _directly_ in `experimental` depend
on `__is_experimental`.
This upgrades us to 2.0.0-rc.2, which is the latest in the
not-quite-done-yet 2.0 series.
The only code change that's absolutely needed is opting into the
static_secrets feature.
By implementing `RangeBounds` for `TimerangeBound`, we get
`RangeBoundsExt` for free. This will enable `parse_decrypt_validate` to
easily compute the intersection of the `TimerangeBound`s of its layers.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
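The intersection computation can be sketched with plain closed intervals standing in for `TimerangeBound`'s time bounds (illustrative only):

```rust
/// Intersect two closed intervals; None if they are disjoint.
/// Plain u64 timestamps stand in for the real time bounds.
fn intersect(a: (u64, u64), b: (u64, u64)) -> Option<(u64, u64)> {
    let lo = a.0.max(b.0);
    let hi = a.1.min(b.1);
    (lo <= hi).then_some((lo, hi))
}

fn main() {
    // Outer layer valid 100..=200, inner layer valid 150..=300:
    // the descriptor as a whole is valid only 150..=200.
    assert_eq!(intersect((100, 200), (150, 300)), Some((150, 200)));
    // Disjoint validity windows mean the descriptor is never valid.
    assert_eq!(intersect((100, 200), (250, 300)), None);
    println!("ok");
}
```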
`descriptor_fetch_attempt` now returns a `TimerangeBound<HsDesc>` (and
so does `parse_decrypt_validate`).
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
`parse_decrypt_validate` will need to "peek" inside an encrypted
descriptor (before validating it) to extract the `TimerangeBound` of the
inner layer. This is needed to compute the intersection of the
`TimerangeBound`s of both layers.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This makes `descriptor_ensure` refetch the descriptor if either of its
layers (inner or outer) expires.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This will be used for computing the final `TimerangeBound` of a `HsDesc`
from the `TimerangeBound`s of its inner and outer layers.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This makes `descriptor_ensure` refetch the descriptor if it has been
cached for longer than `descriptor-lifetime` minutes.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
Relays and onion service clients/services will both need this.
I'm marking this experimental for now; we should stabilize it before
we release onion services.
If we didn't do this, we would need to transform
`EncodedLinkSpec`s into a `LinkSpec::Unrecognized`, which is not
semantically right. What's more, every user of this API wants to
consume encoded link specifiers, so encoding them early saves a
little effort.
This commit adds functions to convert between LinkSpec and
EncodedLinkSpec, and refactors their read/write implementations a
bit to avoid code duplication.
Previously, we only accepted an OwnedCircTarget, which would have
kept us from getting a circuit that was aimed at a specialized
CircTarget that gave us LinkSpecs in a raw order.
This change is necessary so that we can build Extend2 messages
that have their LinkSpecs appear in a verbatim order as provided
in an INTRODUCE2 message or in a HS descriptor.
`parse_decrypt_validate` was marked as experimental because it was
unclear if the newly added `BadTimeBound` error kind belongs in
`ParseErrorKind`. However, we have since renamed `ParseErrorKind` to
`NetdocErrorKind` and decided to keep the new variant, so this API
doesn't need to be experimental anymore.
Closes arti #852
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
`ParseErrorSource` was originally meant to represent a parsing error,
but this enum has since gained some variants that aren't really
parsing-related (`Signature`, `CertSignature`, `UntimelyDescriptor`).
Since this error type is now used for general-purpose netdoc errors,
let's rename `ParseError{Kind, Source}` to `NetdocError{Kind, Source}`.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
We'll probably need the hsdir list to be shuffled deterministically for
testing purposes (this might be desirable, for example, when we write a
test for HS descriptor download retries).
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
Now there can be one of each, since we want references and handles
to be conceptually separate.
(The alternative would be to say that an operation either "returns a
handle" or "returns a reference (which may become a handle) unless a
handle already exists.")
To me, "Session" suggests that we're authenticated, when we are
not necessarily authenticated. Also, we may eventually want to have
some kind of persistent session object; if we do, then we'll want
Connections to be separate.
(This is the correct capabilities-based behavior. For now it will only
work if the TorClient uses a PreferredRuntime, but with luck we will
find a solution for #837 soon.)
Now there's a module in `arti` that runs the loop for an RPC
listener. The part of the old `listener` module that made
the framed connections is now part of the `Session` object.
There is now yet another temporary location for the pipe; we
should pick something better. At least now it's configurable.
In the future, this will probably hold more data as well, like a
TorClient and some configuration info.
The TorClient will present an issue; I've made comments about that.
Closes #820
The `HsClientSecretKeys` stored in the HS client connection context only
have the secret keys. Certain APIs (such as `HsDesc::parse`) expect a
keypair (both `HsClientDescEncKey` and `HsClientDescEncSecretKey`). This
`From` impl makes it possible to get a `HsClientDescEncKey` out of
`HsClientDescEncSecretKey`.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
My goal here is to make sure that we can't confuse
one download operation with another, and that we actually know
what's going on. Previously, not all state transitions or
attempts to fetch information actually corresponded to a log message.
One of the tests run as part of `cargo test -p arti-client` fails
because the `pt-client` feature is not enabled by default:
```
failures:
---- src/config.rs - config::BridgesConfig (line 221) stdout ----
error[E0432]: unresolved import `arti_client::config::pt`
--> src/config.rs:225:26
|
7 | use arti_client::config::pt::ManagedTransportConfigBuilder;
| ^^ could not find `pt` in `config`
error[E0599]: no method named `bridges` found for mutable reference `&mut BridgesConfigBuilder` in the current scope
--> src/config.rs:233:19
|
15 | builder.bridges().bridges().push(bridge_1);
| ^^^^^^^ private field, not a method
error[E0599]: no function or associated item named `default` found for struct `BridgeConfigBuilder` in the current scope
--> src/config.rs:236:48
|
18 | let mut bridge2_builder = BridgeConfigBuilder::default();
| ^^^^^^^ function or associated item not found in `BridgeConfigBuilder`
error[E0599]: no method named `bridges` found for mutable reference `&mut BridgesConfigBuilder` in the current scope
--> src/config.rs:247:19
|
29 | builder.bridges().bridges().push(bridge2_builder);
| ^^^^^^^ private field, not a method
error[E0599]: no method named `transports` found for mutable reference `&mut BridgesConfigBuilder` in the current scope
--> src/config.rs:255:19
|
37 | builder.bridges().transports().push(transport);
| ^^^^^^^^^^ method not found in `&mut BridgesConfigBuilder`
error: aborting due to 5 previous errors
Some errors have detailed explanations: E0432, E0599.
For more information about an error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/config.rs - config::BridgesConfig (line 221)
test result: FAILED. 5 passed; 1 failed; 3 ignored; 0 measured; 0 filtered out; finished in 2.10s
```
This disables the `BridgesConfig` doc test if the `pt-client` feature is not enabled.
Closes #843
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This adds an example to the `HsDesc::parse` docs. The constants from the
example are lifted from the `parse_desc_good` test.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This adds the `HsDesc::parse_decrypt_validate` method, which parses,
decrypts, and validates HS descriptors.
Closes #809
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
Right now, this lets us say whether the method was unrecognized or
whether the parameter type was incorrect.
We'll use this to enforce correct method names later on.
(I have to add another `inventory` here, since the `typetag`
maintainer does not want to expose this functionality: see
https://github.com/dtolnay/typetag/issues/57#issuecomment-1506106290
)
Otherwise there is too much risk of accidentally adding in another
1<<12 when we meant to add a 1<<13.
(It would be neat to have an alternative to bitflags here that would
auto-number our bitflags for us.)
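Auto-numbering is achievable with a small declarative macro; this toy sketch is only an illustration of the idea, not a proposal for the actual code:

```rust
/// Toy auto-numbering flags: each name gets the next power of two, so a
/// duplicated or skipped shift can't creep in by hand.
macro_rules! autoflags {
    ($($name:ident),* $(,)?) => {
        autoflags!(@accum 0u32; $($name),*);
    };
    (@accum $shift:expr; $name:ident $(, $rest:ident)*) => {
        pub const $name: u32 = 1 << $shift;
        autoflags!(@accum $shift + 1; $($rest),*);
    };
    (@accum $shift:expr;) => {};
}

autoflags!(READ, WRITE, EXEC, APPEND);

fn main() {
    assert_eq!(READ, 1 << 0);
    assert_eq!(WRITE, 1 << 1);
    assert_eq!(EXEC, 1 << 2);
    assert_eq!(APPEND, 1 << 3);
    println!("ok");
}
```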
Previously the main loop received updates via a `mpsc::channel`, and
final responses via a `FuturesUnordered`. This could lead to
final responses being transmitted to the user before the updates
were all flushed.
Now all of the responses are sent to the main loop via the same channel,
and they can't get out-of-sequence.
Closes#817 and (IMO) simplifies the code a bit.
Now that the sink is not part of the context, RPC functions that are
able to send an update have to declare an `impl Sink` as their
fourth argument. This syntax is not final.
Part of #824.
Now instead of hoping that buggy clients will detect a magic `id`,
we can simply tell them that they will get no `id` at all. If they
can't handle that case, no major harm is done: the connection will
get closed anyway.
Since we're serializing everything in this format, let's enforce it.
With this change, we can no longer cram arbitrary junk into an
RPC error, so we have to clean up our handling of cancelled requests.
This is a bit big, but it's not that _complicated_.
The idea here is that we use serde's "untagged" enum facility
when parsing our `Request`s, such that if parsing as a `Request`
fails, we parse as an `InvalidRequest` and try to report
what the problem was exactly.
This lets us determine the ID of a request (if it had one),
so we can report that ID in our error message. We can also
recover from a much broader variety of errors.
We now also conform with the spec when reporting errors about
completely wrong JSON, requests without IDs, and so on.
Well, mostly correct. Our current serde implementation doesn't
tell us much about what went wrong with the object, so we can't
tell why we couldn't convert it into a Request.
Also, our output for the data field is not as the spec says;
we should bring it into conformance.
Part of #825.
Even though json-rpc uses "result" to mean "a successful return value
from a method", we can't: Rust's `Result` type is so pervasive
that confusion would be inevitable.
In one case, we use WeightRole::Exit on circuits that can't
actually be used to exit. This commit adds a comment to explain
why, so that we don't wonder about it in the future, and we have
some indication of whether it's still appropriate.
Closes #785
We now _use_ the function pointers rather than comparing them; this
lets us drop our Eq/PartialEq/Hash implementations for
`ConstTypeId_` and instead just use `TypeId`s once we're in run-time
code.