This reuses a lot of mechanism from the circuit code that sends END
cells when streams are dropped.
There is a problem here: circuits and channels won't actually get
dropped, because the reactor holds a strong reference to them. We
should be using a weak reference from the reactor instead.
Part of this "declaring milestone 2 done" business is a matter of
putting additional tests and documentation into milestone 3 where
they logically belong.
We already handled this case acceptably for _reading_ streams, since
the reactor's going away would drop the sender side of their mpsc
channels. But if the reactor went away, nothing would tell _writing_
streams that they needed to close.
Now we handle that case, as well as anybody who is waiting on
a meta-cell to get back to them.
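A minimal sketch of that asymmetry, using `std::sync::mpsc` rather than arti's actual channel types (the function names here are illustrative, not real API):

```rust
use std::sync::mpsc;

// A reading stream learns that the reactor is gone automatically:
// once the reactor's Sender is dropped, its Receiver returns Err.
fn reader_notices_reactor_gone() -> bool {
    let (tx, rx) = mpsc::channel::<u8>();
    drop(tx); // the reactor goes away
    rx.recv().is_err() // the reading stream sees the closure
}

// A writing stream only holds a Sender. With std mpsc it learns of
// the closure on its next send attempt, but an idle writer (or, in
// async code, one parked waiting for flow control) gets no wakeup --
// hence the need for an explicit close notification from the reactor.
fn writer_notices_only_on_send() -> bool {
    let (tx, rx) = mpsc::channel::<u8>();
    drop(rx); // the reactor goes away
    tx.send(1).is_err()
}
```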
When a stream is closed and we haven't adjusted its state in the
stream map yet, remember how many cells we've dropped so we can
decrement them from the window later on.
This is the first step along the line to handling Tor issue
tor#27557. We want to remember streams that we've ended and treat
them as distinct from streams that have never existed.
The problem is that we would count BEGIN and END cells towards
window totals when we are only supposed to count DATA cells, *and*
that we would send our SENDMEs one cell too early (or maybe late?).
Closes #1.
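A hypothetical sketch of that half-closed bookkeeping; the type and field names are invented for illustration and are not arti's actual stream-map types:

```rust
/// An entry in the stream map.
#[allow(dead_code)]
enum StreamEnt {
    /// A live stream.
    Open,
    /// A stream we've ended but haven't removed from the map yet:
    /// remember how many DATA cells we've dropped, so we can charge
    /// them against the window later on.
    EndSent { dropped: u16 },
}

/// Handle a cell arriving for a stream. Only DATA cells count toward
/// sendme windows; BEGIN and END cells do not.
fn handle_cell(ent: &mut StreamEnt, is_data_cell: bool) {
    if let StreamEnt::EndSent { dropped } = ent {
        if is_data_cell {
            // Drop the cell, but remember it for window accounting.
            *dropped += 1;
        }
    }
}
```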
Previously the circuit object owned not only the outbound crypto,
but also the inbound crypto and the stream maps. That's not so
great, since the reactor needs to use the inbound crypto and the
stream maps all the time, whereas the circuit doesn't need them much
(or at all).
Moving these objects to the reactor-owned structure should let us
fix the deadlock case in stream sendme handling, since the circuit
reactor no longer needs to lock the circuit in order to do crypto
and demultiplexing. It should also speed up the code a bit, since
it doesn't need to grab the circuit lock nearly so often as before.
This change forced me to add a couple of new reactor CtrlMsg values,
since the circuit can no longer add streams and layers directly. I
think it will still be a performance win, though.
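A rough sketch of what such control messages might look like; the variant names and fields below are invented for illustration and are not arti's actual `CtrlMsg` API:

```rust
use std::sync::mpsc;

/// Requests the circuit sends to its reactor, now that it can no
/// longer touch the stream map or inbound crypto directly.
enum CtrlMsg {
    /// Ask the reactor to register a new stream in its stream map.
    AddStream { stream_id: u16 },
    /// Ask the reactor to add an inbound crypto layer after an
    /// extend handshake completes.
    AddLayer { hop: u8 },
    /// Ask the reactor to shut down.
    Shutdown,
}

/// Toy reactor loop: drain control messages, counting what we added.
fn run_reactor(ctrl: mpsc::Receiver<CtrlMsg>) -> (usize, usize) {
    let (mut streams, mut layers) = (0, 0);
    for msg in ctrl {
        match msg {
            CtrlMsg::AddStream { .. } => streams += 1,
            CtrlMsg::AddLayer { .. } => layers += 1,
            CtrlMsg::Shutdown => break,
        }
    }
    (streams, layers)
}
```

The cost of this design is an extra message round-trip for stream and layer setup, which is why it matters that the hot path (crypto and demultiplexing on every cell) no longer takes the circuit lock.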
Previously our "read a bunch of this kind of document" functions had
a common problem, where they could get into an infinite loop if the
underlying "read this kind of document" function failed without
consuming any tokens.
I _think_ that this error case was unreachable (or else fuzzing
would have found it, right?), but proving that it was unreachable
was a bit fiddly, and I couldn't follow my own arguments about it.
Instead, we just store the position of the reader before we start
reading, and make sure that it has consumed at least some data. If
it hasn't, then we consume and drop a token before advancing to the
next document.
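The fix above can be sketched like this, with a toy token reader standing in for the actual netdoc parsing code:

```rust
/// A toy token reader (not the real parser).
struct Reader<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> Reader<'a> {
    fn next(&mut self) -> Option<&'a str> {
        let t = self.tokens.get(self.pos).copied();
        if t.is_some() {
            self.pos += 1;
        }
        t
    }
}

/// "Read one document": may fail *without* consuming any tokens,
/// which is exactly the case that could loop forever.
fn read_doc(r: &mut Reader) -> Result<String, ()> {
    match r.tokens.get(r.pos) {
        Some(&"doc") => {
            r.pos += 1;
            Ok("doc".to_string())
        }
        _ => Err(()),
    }
}

/// "Read a bunch of documents", with the fix: record the position
/// before each parse; if the parse consumed nothing, drop one token
/// so the loop is guaranteed to advance.
fn read_many(r: &mut Reader) -> Vec<String> {
    let mut out = Vec::new();
    while r.pos < r.tokens.len() {
        let pos_before = r.pos;
        if let Ok(d) = read_doc(r) {
            out.push(d);
        }
        if r.pos == pos_before {
            r.next(); // consume and drop a token before moving on
        }
    }
    out
}
```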