ui: show toggleable "Follow" status in repo groups' repo list
It makes sense to show Follow status next to repo names in the repo list, and toggling Follow status there is a meaningful and efficient bulk operation.
Clicking on the (Un)Follow 'heart' will toggle the caller's follow status for that repo.
The repo model already has layering violations - expand on them to compute the follow status of the current user.
(Changeset was cherry picked and modified by Mads Kiilerich.)
cleanup: run pyflakes as a part of scripts/run-all-cleanup
pyflakes has no usable configuration options, so create a small wrapper script. Instead of having two wrapper scripts (one being almost nothing and the other containing the configuration), just keep it simple and use one combined script.
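A minimal sketch of what such a wrapper could look like (hypothetical contents - the checked paths and the run-all-cleanup integration are assumptions), driving pyflakes through its Python API:

    #!/usr/bin/env python3
    """Sketch of a small pyflakes wrapper - the checked paths are an assumption."""
    import sys

    import pyflakes.api
    import pyflakes.reporter

    def main():
        paths = sys.argv[1:] or ['kallithea', 'scripts']
        reporter = pyflakes.reporter.Reporter(sys.stdout, sys.stderr)
        # checkRecursive returns the number of warnings found
        warnings = pyflakes.api.checkRecursive(paths, reporter)
        sys.exit(1 if warnings else 0)

    if __name__ == '__main__':
        main()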
It has "always" been wrong, and we have thus been using the default of "pickle" - not "json" as we thought.
I doubt this change has any immediate visible impact. I guess it only means that it will use json for results instead of pickle. That might be more stable and debuggable.
Note: celery_config will uppercase the config settings and replace '.' with '_', so 'celery.result.serializer' turns into 'CELERY_RESULT_SERIALIZER'.
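For illustration, the key mangling described above amounts to roughly this (a simplified sketch, not the actual celery_config code):

    def to_celery_key(ini_key):
        # 'celery.result.serializer' -> 'CELERY_RESULT_SERIALIZER'
        return ini_key.replace('.', '_').upper()

    assert to_celery_key('celery.result.serializer') == 'CELERY_RESULT_SERIALIZER'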
mails: make error reporting by mail work with secure mail servers
Even with Kallithea mails working, TurboGears / backlash error reporting would fail like:
Error while reporting exception with <backlash.tracing.reporters.mail.EmailReporter object at 0x7f8f986f8710>
Traceback (most recent call last):
  File ".../env/lib/python3.7/site-packages/backlash/tracing/reporters/mail.py", line 49, in report
    result = server.sendmail(self.from_address, self.error_email, msg.as_string())
  File "/usr/lib64/python3.7/smtplib.py", line 867, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)
smtplib.SMTPSenderRefused: (530, b'5.7.0 Must issue a STARTTLS command first.', 'kallithea@example.com')
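The error reporter has to do the same kind of SMTP negotiation as Kallithea's own mail sending. A minimal sketch of talking to a server that requires STARTTLS (host, port and credentials are placeholders, and this is not the actual backlash/Kallithea code):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg['From'] = 'kallithea@example.com'
    msg['To'] = 'admin@example.com'
    msg['Subject'] = 'Kallithea error report'
    msg.set_content('traceback ...')

    server = smtplib.SMTP('smtp.example.com', 587)
    server.starttls()  # without this, the server answers "530 Must issue a STARTTLS command first"
    server.login('kallithea@example.com', 'secret')
    server.send_message(msg)
    server.quit()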
The reasoning in hg _fix_path is no longer correct.
We only need it for stripping the trailing '/' ... but we would rather be explicit about that. (But it is also questionable if we actually want it at all - we should just pass the right data and fail on wrong data.)
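The explicit replacement is trivial - a sketch (the function name is made up for illustration):

    def strip_trailing_slash(path):
        # all we actually need from the old hg _fix_path: drop any trailing '/'
        return path.rstrip('/')

    assert strip_trailing_slash('docs/') == 'docs'
    assert strip_trailing_slash('docs') == 'docs'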
.../site-packages/_pytest/doctest.py:381: in _mock_aware_unwrap
    return real_unwrap(obj, stop=_is_mocked)
/usr/lib64/python3.7/inspect.py:511: in unwrap
    while _is_wrapper(func):
/usr/lib64/python3.7/inspect.py:505: in _is_wrapper
    return hasattr(f, '__wrapped__') and not stop(f)
.../site-packages/tg/support/objectproxy.py:19: in __getattr__
    return getattr(self._current_obj(), attr)
.../site-packages/tg/request_local.py:240: in _current_obj
    return getattr(context, self.name)
.../site-packages/tg/support/objectproxy.py:19: in __getattr__
    return getattr(self._current_obj(), attr)
.../site-packages/tg/support/registry.py:72: in _current_obj
    'thread' % self.____name__)
E   TypeError: No object (name: context) has been registered for this thread
pytest's doctest support (in _mock_aware_unwrap) uses the py3 inspect module.
Inside inspect, _is_wrapper will do an innocent-looking: hasattr(f, '__wrapped__')
But if the code under test has an (unused) import of a tg context (such as tg.request), it is no longer so innocent. tg will throw: TypeError: No object (name: context) has been registered for this thread (which in py2 would have been caught by hasattr, but not in py3).
pytest will thus fail already in the "collecting ..." phase.
To work around that, use the hack of pushing a tg context in the top level pytest_configure.
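A sketch of that hack, assuming the StackedObjectProxy _push_object API that tg's registry exposes; any dummy object will do, because the point is just that attribute probing then fails with AttributeError (which hasattr handles) instead of TypeError:

    # conftest.py (top level) - sketch of the workaround
    import types

    import tg

    def pytest_configure():
        # Push a dummy tg context so doctest collection can probe attributes
        # without hitting "No object (name: context) has been registered".
        tg.request_local.context._push_object(types.SimpleNamespace())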
py3: remove safe_unicode in places where it no longer is needed because all strings (except bytes) already *are* unicode strings
(The remaining safe_unicode calls are still needed and can't just be removed, generally because we in these cases still have to convert from bytes to unicode strings.)
py3: fix kallithea-cli ini parsing after ConfigParser got strict=True
ConfigParser in py3 defaults to strict=True and would thus reject our ssh logging hack of renaming config sections ... which causes duplicate section names.
Fortunately, fileConfig now also allows passing a ConfigParser, and we can avoid using io.StringIO.
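A sketch of the resulting pattern (paths are placeholders and the section renaming itself is elided): build a non-strict ConfigParser and hand it straight to fileConfig, which accepts a RawConfigParser instance since Python 3.4:

    import configparser
    import logging.config

    # strict=False tolerates the duplicate section names our renaming hack produces
    parser = configparser.ConfigParser(strict=False)
    parser.read('kallithea.ini')
    # ... apply the ssh logging section renaming to `parser` here ...
    logging.config.fileConfig(parser, disable_existing_loggers=False)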
py3: update ssh for base64.b64decode raising binascii.Error instead of TypeError
A command like:
    python -c 'import base64; base64.b64decode("QQ")'
would fail in Python 2 with:
    TypeError: Incorrect padding
but in Python 3 with:
    binascii.Error: Incorrect padding
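The ssh code thus has to catch the new exception type; a minimal sketch (function name and error handling are made up for illustration):

    import base64
    import binascii

    def decode_key_blob(data):
        try:
            return base64.b64decode(data)
        except binascii.Error as e:  # Python 2 raised TypeError here
            raise ValueError('invalid base64 key data: %s' % e)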
There is no point in creating dicts and then logging them as json. Also, json can't handle py3 bytes and it would fail on py3. (ext_json could perhaps handle bytes, but it seems better to keep it simple and explicit.)
If the default repr isn't good enough, it would be better to use pprint. But repr is good enough.
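For illustration, the kind of change this amounts to (hypothetical log call):

    import logging
    log = logging.getLogger(__name__)

    extras = {'action': b'push', 'repository': 'myrepo'}
    # before (sketch): log.info('%s', json.dumps(extras)) - fails on py3 bytes
    log.info('%r', extras)  # plain repr handles bytes and is good enough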
py3: make get_current_authuser handle missing tg context consistently and explicitly
tg context handling ends up using tg.support.registry.StackedObjectProxy._current_obj for attribute access ... which if no context has been pushed will end up in:
    raise TypeError(
        'No object (name: %s) has been registered for this '
        'thread' % self.____name__)
utils2.get_current_authuser used code like: if hasattr(tg.tmpl_context, 'authuser'):
Python 2 hasattr will call __getattr__ and return False if it throws any exception. (It would thus catch the TypeError and silently fall through to use the default user None.) This hasattr behavior is confusing and hard to use correctly. Here, it was used incorrectly. It has been common practice to work around it by using something like: getattr(x, y, None) is not None
Python 3 hasattr fixed this flaw and only catches AttributeError. The TypeError would thus (rightfully) be propagated. That is a change that must be handled when introducing py3 support.
The get_current_authuser code can be written more clearly, more simply, and py3-compatibly as: return getattr(tmpl_context, 'authuser', None) - but then we also have to handle the TypeError explicitly ... which we are happy to do.
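A sketch of what that explicit handling can look like (simplified - the real function may differ in details):

    from tg import tmpl_context

    def get_current_authuser():
        """Return the request's authuser, or None outside a request context."""
        try:
            return getattr(tmpl_context, 'authuser', None)
        except TypeError:  # 'No object (name: context) has been registered ...'
            return None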
celery: introduce make_app instead of creating app at import time
It is dirty to instantiate things at import time (unless they really are basic singletons).
In 0.5.1 (and earlier), such dirtiness made partial test execution fail when other modules had global side effects and the usual import order wasn't used:
$ py.test kallithea/lib/
collecting ...
――― kallithea/lib/celerypylons/__init__.py ―――
kallithea/lib/celerypylons/__init__.py:58: in <module>
    app.config_from_object(celery_config(tg.config))
kallithea/lib/celerypylons/__init__.py:28: in celery_config
    assert config['celery.imports'] == 'kallithea.lib.celerylib.tasks', 'Kallithea Celery configuration has not been loaded'
data/env/lib/python2.7/site-packages/tg/configuration/tgconfig.py:31: in __getitem__
    return self.config_proxy.current_conf()[key]
E   KeyError: 'celery.imports'
Avoid that by running a "factory" function when the celery app is actually needed.
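A minimal sketch of the factory approach (simplified from the real celerypylons module; celery_config is the existing helper shown in the traceback above and is assumed to be in scope):

    import celery
    import tg

    def make_app():
        """Create and configure the celery app when it is actually needed."""
        app = celery.Celery()
        # celery_config maps the 'celery.*' ini settings to CELERY_* settings
        app.config_from_object(celery_config(tg.config))
        return app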
config: fix pyflakes warning about unused tg import
app_cfg had an unused 'import tg'. tg.hooks was used, but through a separate import. Clean that up by consistently using tg (which always makes tg.hooks available) and dropping the separate hooks import.
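For illustration, the resulting pattern (hook name and callback are made up): a plain 'import tg' is enough to reach tg.hooks, so no separate 'from tg import hooks' is needed:

    import tg

    def on_startup():
        pass

    # tg.hooks comes along with the top-level tg import
    tg.hooks.register('startup', on_startup)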
(localrepo might already always be available in the mercurial namespace due to side effects from other imports, but it is still better to do it explicitly ... and also to please pytype.)
For a while now, the test suite has shown the following warning:
kallithea/tests/__init__.py:29
  /home/tdescham/repo/contrib/kallithea/kallithea-review/kallithea/tests/__init__.py:29: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: kallithea.tests
    pytest.register_assert_rewrite('kallithea.tests')
The problem can be fixed by moving the register_assert_rewrite call from kallithea/tests/__init__.py to the root-level conftest.py, outside of the 'kallithea' module.
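A sketch of the resulting root-level conftest.py snippet:

    # conftest.py at the repository root - runs before kallithea.tests is imported,
    # so pytest can still install assertion rewriting for it
    import pytest

    pytest.register_assert_rewrite('kallithea.tests')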
tests: remove race condition in test_forgot_password
Every so often, test_forgot_password failed with:
kallithea/tests/functional/test_login.py:427: in test_forgot_password
    assert '\n%s\n' % token in body
E   assert ('\n%s\n' % 'd71ad3ed3c6ca637ad00b7098828d33c56579201') in "Password Reset Request\n\nHello passwd reset,\n\nWe have received a request to reset the password for your account.\n\nTo s...7e89326ca372ade1d424dafb106d824cddb\n\nIf it weren't you who requested the password reset, just disregard this message.\n"
i.e. the expected token is not the one in the email.
The token is calculated based on a timestamp (among other things). And the token is calculated twice: once in the real code and once in the test, each time at a slightly different timestamp. Even though the timestamp is floored to one-second resolution, there will always be a race condition where the two timestamps floor to different seconds, e.g. 4.99 vs 5.01.
The problem can be reproduced reliably by adding a sleep of e.g. 2 seconds before generating the password reset mail (after the test has already calculated the expected token).
Solve this problem by mocking the time.time() used to generate the timestamp, so that the timestamp used for the real token is the same as the one used for the expected token in the test.
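A self-contained sketch of that kind of mocking (the token calculation here is a stand-in, not Kallithea's real one):

    import hashlib
    import time
    from unittest import mock

    def make_token(email, timestamp):
        # stand-in for the real token calculation, which also hashes a timestamp
        return hashlib.sha1(('%s:%d' % (email, timestamp)).encode()).hexdigest()

    timestamp = int(time.time())
    expected = make_token('user@example.com', timestamp)

    with mock.patch('time.time', return_value=timestamp):
        # code computing a token from time.time() inside this block sees the
        # very same timestamp, so the tokens are guaranteed to match
        assert make_token('user@example.com', int(time.time())) == expected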
scripts: handle "Python 2.7 reached the end of its life" message
The script failed with:
Error: pip detected following problems:
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
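The fix is to ignore that known, harmless message when scanning pip's output for real problems. A rough sketch of the filtering idea (the real script is driven differently; the pip invocation here is just an example):

    import subprocess

    result = subprocess.run(['pip', 'check'], capture_output=True, text=True)
    problems = [
        line for line in (result.stdout + result.stderr).splitlines()
        if line and not line.startswith('DEPRECATION: Python 2.7 reached the end of its life')
    ]
    if problems:
        print('Error: pip detected following problems:')
        print('\n'.join(problems))
        raise SystemExit(1)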