p9_conn_destroy() first kills all current requests by calling
p9_conn_cancel(), then waits on p9_conn->equeue for the request list
to be emptied. After that, polling is stopped and the trans is
destroyed (the old ordering is sketched after the list below). This
sequence has a few problems.
* Read and write works are never cancelled, so the p9_conn can be
  destroyed while the works are still running; the r/w works remove
  requests from the list and dereference the p9_conn from them.
* The list-emptiness wait on p9_conn->equeue never triggers because
  p9_conn_cancel() always clears all the lists; the only way the wait
  could fire is if another task issued a request in the slim window
  between p9_conn_cancel() and the wait, which isn't safe under the
  current implementation with or without the wait.
This patch fixes the problem by first stopping poll, which can
schedule r/w works, then cancelling the r/w works, which guarantees
that no r/w work is running or will run from that point on, and only
then calling p9_conn_cancel() and doing the rest of the destruction.
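A matching sketch of the fixed ordering; cancel_work_sync() is the
stock workqueue API, while p9_mux_poll_stop() and the rq/wq work
fields are again assumed names for illustration:

  static void p9_conn_destroy(struct p9_conn *m)
  {
          p9_mux_poll_stop(m);             /* poll can no longer schedule r/w works */
          cancel_work_sync(&m->rq);        /* read work is not running and won't run */
          cancel_work_sync(&m->wq);        /* write work is not running and won't run */

          p9_conn_cancel(m, -ECONNRESET);  /* now safe to kill pending requests */
          kfree(m);                        /* do the rest of the destruction */
  }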
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>