* Fix joins to many channels
If there are enough channels that the 512-byte limit on the JOIN
message is hit, Limnoria was dropping the channel that put it over the
limit and not including it in the next JOIN message. This resulted in
losing one channel for every JOIN message that pushed us over 512 bytes.
We fix this by generating the JOIN message immediately after resetting
the channels list, ensuring we include the channel that pushed us over
the limit. Then, the next time through the JOIN message construction, we
add subsequent channels without forgetting the one that pushed us over.
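Roughly, the corrected splitting looks like this (a minimal sketch; names like `join_messages` and `MAX_LINE` are illustrative, not Limnoria's actual internals):
```python
MAX_LINE = 512  # IRC line length limit, including the trailing CRLF

def join_messages(channels):
    """Split a channel list into JOIN lines that fit within MAX_LINE,
    without dropping the channel that pushes a line over the limit."""
    messages = []
    batch = []
    for channel in channels:
        candidate = batch + [channel]
        line = 'JOIN ' + ','.join(candidate) + '\r\n'
        if len(line) > MAX_LINE and batch:
            # Flush the current batch *before* appending the overflowing
            # channel, so it starts the next JOIN instead of being lost.
            messages.append('JOIN ' + ','.join(batch))
            batch = []
        batch.append(channel)
    if batch:
        messages.append('JOIN ' + ','.join(batch))
    return messages
```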
* Add test for channel join lists
This adds a test for the issue fixed in the previous commit. We
ensure that when JOINs are split over multiple messages, we JOIN all
channels that were part of the input list and don't forget any of them.
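A sketch of such a test, reusing the hypothetical `join_messages` helper from the sketch above (Limnoria's real test is written against its own test harness):
```python
def test_no_channel_lost_when_joins_are_split():
    channels = ['#channel%04d' % i for i in range(200)]  # enough to force a split
    messages = join_messages(channels)
    assert len(messages) > 1                 # the JOIN really was split
    joined = []
    for message in messages:
        joined.extend(message[len('JOIN '):].split(','))
    assert joined == channels                # nothing forgotten, nothing duplicated
```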
This is because the recommended method ('owner ircquote nickserv register mypassword bot@example.com')
does not work on Charybdis: Limnoria inserts a colon
before the trailing argument, and Charybdis' m_alias module
does not parse commands using the IRC syntax, so it
considers the leading colon to be part of the email address.
The alternative would be to change the recommended command to:
'owner ircquote PRIVMSG nickserv :register mypassword bot@example.com'
but it is prone to typos, so I think we should avoid it.
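For illustration, this is my reconstruction of the raw line that ends up being sent (not quoted from the commit): the trailing argument gets a colon prepended, and since m_alias forwards the text verbatim, NickServ receives ':bot@example.com' as the email address.
```
nickserv register mypassword :bot@example.com
```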
feedparser should always catch the error itself, but someone reported that it doesn't
catch this one on TLS certificate issues:
```
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/supybot/plugins/RSS/plugin.py", line 86, in newf
f(*args, **kwargs)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/supybot/plugins/RSS/plugin.py", line 351, in update_feeds
self.update_feed_if_needed(feed)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/supybot/plugins/RSS/plugin.py", line 337, in update_feed_if_needed
self.update_feed(feed)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/supybot/plugins/RSS/plugin.py", line 311, in update_feed
d = feedparser.parse(feed.url, etag=feed.etag,
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/feedparser/api.py", line 214, in parse
data = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/feedparser/api.py", line 114, in _open_resource
return http.get(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
File "/home/falso/virtualenv/limnoria/lib/python3.8/site-packages/feedparser/http.py", line 158, in get
f = opener.open(request)
File "/usr/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1393, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/usr/lib/python3.8/urllib/request.py", line 1354, in do_open
r = h.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.8/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
```
So let's catch the error ourselves just in case, so that one failing feed
doesn't block all the other feeds.
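A minimal sketch of the defensive catch, using a simplified wrapper rather than Limnoria's exact plugin code (the function name is illustrative):
```python
import socket

import feedparser

def parse_feed(url, etag=None):
    """Fetch and parse one feed, without letting a stray timeout escape
    and abort the update loop for every other feed."""
    try:
        return feedparser.parse(url, etag=etag)
    except socket.timeout:
        # feedparser is supposed to swallow network errors itself, but a
        # timeout during the TLS read has been observed to leak through.
        return None
```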
When the number of hostmasks exceeds 1000 (the hardcoded size of
_patternCache and _hostmaskPatternEqualCache), this triggers
a pathological case in the LRU caches that causes every call to be
a cache miss.
This means that on every IRC message received, ircdb.checkIgnored triggers
a recompilation of *all* user hostmasks, which is computationally very
expensive.
This commit stores the user hostmasks in their own cache to prevent them
from expiring.
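The idea, as a rough sketch (the cache and function names here are illustrative, and glob-to-regex translation via fnmatch stands in for whatever ircdb actually does):
```python
import re
from fnmatch import translate

# Separate, unbounded cache just for registered users' hostmasks, so that
# lookups for ad-hoc patterns can never evict them and force every user
# hostmask to be recompiled on the next incoming message.
_userHostmaskCache = {}

def compileUserHostmask(hostmask):
    pattern = _userHostmaskCache.get(hostmask)
    if pattern is None:
        # Hostmasks are glob-style (e.g. 'nick!*@*.example.com');
        # fnmatch.translate turns them into an equivalent regex.
        pattern = re.compile(translate(hostmask))
        _userHostmaskCache[hostmask] = pattern
    return pattern
```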
It mistakenly used the bot's nick as the target when the message was sent in private,
so 'more' after a private message always replied that the user had not sent
a command before (because said command was attributed to the bot).
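A minimal sketch of the target selection, with hypothetical names (the real plugin goes through Limnoria's reply machinery):
```python
def more_target(channel, sender_nick, bot_nick):
    """Decide whose pending replies 'more' should continue.

    In a channel, replies are keyed by the channel name; in a private
    message the 'channel' field is the bot's own nick, so keying on it
    would attribute the original command to the bot and 'more' would
    claim the user never sent one.
    """
    if channel == bot_nick:       # private message
        return sender_nick
    return channel
```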
This dict was filled with IrcString keys, which are hashed
as lowercase, so when queried with a non-lowercase string,
the key would not be found, leading to very confusing errors.
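For illustration only, a simplified model of why such lookups fail (this `IrcStr` is a stand-in, not supybot's actual `IrcString`):
```python
class IrcStr(str):
    """Simplified stand-in for supybot's IrcString: hashes and compares
    case-insensitively."""
    def __hash__(self):
        return hash(self.lower())
    def __eq__(self, other):
        return self.lower() == str(other).lower()

d = {IrcStr('#Chan'): 'value'}
print(d.get('#Chan'))   # None: the plain-str hash of '#Chan' does not match
                        # the stored key, which was hashed as '#chan'
print(d.get('#chan'))   # 'value': a lowercase query lands on the right key
```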
The default behavior was to announce feeds on all channels with the same name
across networks, which is rarely what was expected.
Instead, this limits announcements to the current network.
Until now, only `waitingJoins` was stored separately per network, while
`channels`, `sentGhost` and `identified` had one common value per plugin
instance. Instead of making everything a dictionary indexed by network
name like `waitingJoins`, let's bundle all the state together in a class
and store *its* instances in such a dictionary.
This fixes at least one race condition, for which a test case was added.
Even with `noJoinsUntilIdentified` set, the bot would let joins through
as long as *any* one network has already finished identifying.
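A rough sketch of the shape of the change; the attribute names mirror the commit message, but the class and method names are illustrative:
```python
class ServicesNetworkState:
    """All of the Services plugin's per-network state, bundled together."""
    def __init__(self):
        self.channels = []
        self.sentGhost = None
        self.identified = False
        self.waitingJoins = []

class Services:
    def __init__(self):
        # One state object per network, instead of single values shared
        # (and clobbered) by every network the bot is connected to.
        self._states = {}

    def _stateFor(self, network):
        return self._states.setdefault(network, ServicesNetworkState())
```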
I didn't observe any error with the current set of tests, but adding
another one that used "services password" caused one of these tests
to fail. Given that tests shouldn't leave traces in global state,
let's reset the configured passwords in finally blocks.
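A sketch of the pattern, with an illustrative registry path (the real tests restore whichever variables they actually set):
```python
from supybot import conf

def testSomethingNeedingAPassword(self):
    # Illustrative config variable; the exact path may differ in Limnoria.
    passwordVar = conf.supybot.plugins.Services.NickServ.password
    original = passwordVar()
    passwordVar.setValue('sekret')
    try:
        pass  # exercise the behavior that depends on the password
    finally:
        # Don't leave traces in global state: restore the original value
        # even if the assertions above fail.
        passwordVar.setValue(original)
```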