Otherwise there is too much risk of accidentally adding in another
`1<<12` when we meant to add a `1<<13`.
(It would be neat to have an alternative to bitflags here that would
auto-number our bitflags for us.)
Previously the main loop received updates via an `mpsc::channel`, and
final responses via a `FuturesUnordered`. This could lead to a
final response being transmitted to the user before all of its
updates were flushed.
Now all of the responses are sent to the main loop via the same channel,
and they can't get out-of-sequence.
Closes #817 and (IMO) simplifies the code a bit.
Now that the sink is not part of the context, RPC functions that are
able to send an update have to declare an `impl Sink` as their
fourth argument. This syntax is not final.
Part of #824.
Now instead of hoping that buggy clients will detect a magic `id`,
we can simply tell them that they will get no `id` at all. If they
can't handle that case, no major harm is done: the connection will
get closed anyway.
Since we're serializing everything in this format, let's enforce it.
With this change, we can no longer cram arbitrary junk into an
RPC error, so we have to clean up our handling of cancelled requests.
This is a bit big, but it's not that _complicated_.
The idea here is that we use serde's "untagged" enum facility
when parsing our `Request`s, such that if parsing as a `Request`
fails, we parse as an `InvalidRequest` and try to report
what the problem was exactly.
This lets us determine the ID of a request (if it had one),
so we can report that ID in our error message. We can also
recover from a much broader variety of errors.
We now also conform with the spec when reporting errors about
completely wrong JSON, requests without IDs, and so on.
Well, mostly conform. Our current serde implementation doesn't
tell us much about what went wrong with the object, so we can't
say why we couldn't convert it into a `Request`.
Also, our output for the `data` field is not what the spec says it
should be: we should bring it into conformance.
Part of #825.
Even though json-rpc uses "result" to mean "a successful return value
from a method", we can't: Rust's `Result` type is so pervasive
that confusion would be inevitable.
In one case, we use WeightRole::Exit on circuits that can't
actually be used to exit. This commit adds a comment to explain
why, so that we don't wonder about it in the future, and we have
some indication of whether it's still appropriate.
Closes #785.
We now _use_ the function pointers rather than comparing them; this
lets us drop our Eq/PartialEq/Hash implementations for
`ConstTypeId_` and instead just use `TypeId`s once we're in run-time
code.
It's experimental, and tokio-only. To enable it, build with
the "rpc" feature turned on, and connect to
`~/.arti-rpc-TESTING/PIPE`. (`nc -U` worked for me)
I'll add some instructions.
Per our design, every connection starts out unauthenticated, and
needs one authenticate command to become authenticated.
Right now the only authentication type is "This is a unix named
socket where everybody who can connect has permission."
This lets us avoid async_trait in tor-rpccmd, and makes us use a
Box<>. I think we might actually get an even smarter type later on,
but we will need to play with this for a while too.
I'm not sure about these APIs at all! They force us to use
`async_trait` for `tor_rpccmd::Context`, which bothers me. Should we
just have a function that returns
`Option<Box<dyn Sink<Item=X, Error=Y>>>` or something? If so,
what's the correct Y?
This code uses some kludges (discussed with Ian previously and
hopefully well documented here) to get a type-identifier for each
type in a const context. It then defines a macro to declare
type-erased versions of concrete implementation functions, and to
register those implementations to be called later.
We will probably want to tweak a bunch of this code as we move ahead.
Ordinarily you can cancel a future just by dropping it, but we'll
want the ability to cancel futures that we no longer own (because we
gave them to a `FuturesUnordered`).
This is a workaround for an issue that I'm about to encounter
somewhere in our pile of dependencies: adding arti-rpcserver
somehow makes serde_json visible in this test code, which makes
the `PartialEq` method resolution ambiguous.
!1121 renamed *ProtocolFailed to *ProtocolViolation.
!1118 introduced a new reference to a *ProtocolFailed.
I rebased !1118 onto main and enabled automerge. That tested the tip
of !1118. I assume a similar thing happened to !1121.
The possibility of such regressions is a property of our workflow.
It's rather surprising it doesn't happen more often.
1. Abbreviate the link text, and don't have it contain `crate`
which is not really great in docs.
2. Use `super::` for the link target, to find the right thing.
(`crate` doesn't seem to work in rustdoc, perhaps deliberately,
although the error messages are ridiculous and claim the
nonexistence of intermediate modules.)
3. Wrap the lines a bit more.
We have a local alias of `HsDesc = String`, which we need to get
rid of.
But, right now the alternative would be to implement all the code for
signature checking and decryption of an `HsDesc`, before we can make a
test case for the downloader part.
With this
cargo +stable clippy --locked --offline -p tor-netdir --features=hs-client --all-targets
I got this:
64 | use {hsdir_params::HsDirParams, hsdir_ring::HsDirRing, itertools::chain, std::iter};
| ^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
Previously, we'd only call `PendingNetDir::upgrade_if_necessary`
when adding a microdescriptor. But if it started out already having
all of its microdescriptors (because we found them in the cache),
we wouldn't actually upgrade it to a `PendingNetDir::Yielding`,
which would make it unusable, and would make us schedule its reset
time too far in the future.
Fixes #802.
We will use this for a lack of HS directories. (These aren't chosen
according to any local restrictions, so the problems with EK::NoPath
and EK::NoExit don't arise.)
Fixes
cargo +stable test --locked --offline F -p tor-netdoc
cargo +stable clippy -p tor-netdoc F --all-targets
for values of F including
--all-features
--features=hs-client
--features=hs-common
--features=hs-service
(nothing)
I couldn't find a test vector in C Tor. This test case was generated
from the code here.
I'm fairly sure it's right since I managed to get my descriptor
downloader to work. (That's not an MR yet, but uses this code.)
So far these are just the errors that occur during descriptor
fetch. There will be more later as we have more code in tor-hsconn.
This is very user-facing; use the "onion service" terminology.
Previously, to build descriptors for hidden services with client auth
enabled, in addition to the list of authorized clients, users of
`HsDescBuilder` were required to also provide a descriptor encryption
keypair and a descriptor cookie. This was potentially dangerous and/or
error-prone, because the ephemeral encryption key and the descriptor
cookie are expected to be randomly generated and unique for each
descriptor.
This change makes `ClientAuth` private to the `hsdesc::build` module and
updates `HsDescBuilder` to build `ClientAuth`s internally. Users now
only need to provide the list of authorized client public keys.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This adds a test for an `encode -> decode -> encode` flow for a hidden
service descriptor with client authorization enabled.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
`AuthClient`s were originally meant to represent parsed `auth-client`
lines. In !1070, this struct was repurposed for representing individual
authorized clients in the HS descriptor encoder. However, hidden
services will likely use a list of public keys to represent the
authorized clients rather than a list of `AuthClient`s, as the
information from an `AuthClient` (`client_id`, `iv`, `encrypted_cookie`)
likely won't be immediately available to the hidden service.
This change updates the HS descriptor encoder to represent authorized
clients as a list of `curve25519::PublicKey`s. As such, it is now the
responsibility of the encoder to create the `client_id`, `iv`, and
`encrypted_cookie` using the available keys, the unencrypted descriptor
cookie, and HS subcredential.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>