Performance and Profiling:
- Template caching:
return render('base/csubjects.html', cache_type='memory', cache_expire=60)
This keeps the rendered template output in an in-memory cache for 60 seconds, which has given a big performance improvement. In our case the cache_expire time could even be set higher than 60 seconds.
- Turn off debug in the .ini file and don't use --reload when starting the server.
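For reference, the relevant bits look roughly like this (production.ini and the [app:main] section are the usual Pylons defaults, not necessarily our exact file):

# production.ini -- the key setting:
[app:main]
debug = false

# and start the server without --reload:
#   paster serve production.ini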
- Turn off Mako’s template checking:
Another important flag on TemplateLookup is filesystem_checks. This defaults to True, and says that each time a template is returned by the get_template() method, the revision time of the original template file is checked against the last time the template was loaded; if the file is newer, its contents are reloaded and the template is recompiled. On a production system, setting filesystem_checks to False can afford a small to moderate performance increase (depending on the type of filesystem used).
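A minimal sketch of what this looks like (the directory paths are placeholders; in a standard Pylons project the lookup is created in config/environment.py):

from mako.lookup import TemplateLookup

# Disable per-request file checks in production; template changes then
# require a server restart to be picked up.
lookup = TemplateLookup(
    directories=['archiverui/templates'],    # placeholder path
    module_directory='/tmp/mako_modules',    # placeholder path
    filesystem_checks=False,
)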
Profiling using repoze.profile:
- Integrated repoze.profile with ArchiverUI to see which portions of the code take the longest. Right now, it seems that the hide_quoting component can be further optimized.
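The middleware just wraps the WSGI app; a rough sketch of the wiring (in a Pylons project this would go in config/middleware.py; the exact import path and keyword names depend on the repoze.profile version, and the log filename is a placeholder):

from repoze.profile.profiler import AccumulatingProfileMiddleware

# Wrap the Pylons WSGI app; accumulated profiling results are then
# browsable at the configured path (here /__profile__).
app = AccumulatingProfileMiddleware(
    app,
    log_filename='archiverui.profile',   # placeholder
    discard_first_request=True,
    flush_at_shutdown=True,
    path='/__profile__',
)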
- Functional Testing with Nose:
Added some tests to check the basic responses of different ArchiverUI pages: for example, that the response is 404 for any invalid URL and that every valid URL returns the correct page.
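A sketch of what such a test looks like with the Pylons test fixture (the test package, class names, URLs and the content check are illustrative only):

from archiverui.tests import *    # assumed test package; exports TestController

class TestPages(TestController):

    def test_invalid_url_returns_404(self):
        # status=404 makes the test fixture assert the response code for us
        self.app.get('/no/such/page', status=404)

    def test_valid_url_returns_page(self):
        response = self.app.get('/', status=200)   # illustrative URL
        assert 'Archives' in response              # illustrative content check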
- Made a few modifications to make the code more readable and started adding docstrings:
from storm.locals import Int, Unicode

class Mlist(object):
    """Class to represent the mlist table in a sqlite database."""
    __storm_table__ = "mlist"
    id = Int(primary=True)
    list_name = Unicode()
    # Path to the archives database corresponding to this mlist (list_name)
    db_path = Unicode()
- Whenever a new list is created, update the mlist table (see the sketch after this list).
- Whenever a message is archived, update the database file corresponding to that mlist.
- Generalize for more than one mailing list
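A rough sketch of the first two points with Storm; the module path, database URI and helper names below are hypothetical, not the actual code:

from storm.locals import create_database, Store
from archiverui.model import Mlist    # assumed module path for the class above

store = Store(create_database("sqlite:mlist.db"))   # placeholder path

def register_list(list_name, db_path):
    """Hypothetical helper: add a row to mlist when a new list is created."""
    mlist = Mlist()
    mlist.list_name = list_name
    mlist.db_path = db_path
    store.add(mlist)
    store.commit()

def archives_db_for(list_name):
    """Hypothetical helper: find which per-list database to update on archiving."""
    mlist = store.find(Mlist, Mlist.list_name == list_name).one()
    return mlist.db_path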
I have added support for searching in archiverUI. Most of the logic/code that was written by Priya as part of last year’s GSOC has been reused with some changes.
- Indexing: The first time, all the messages stored in the sqlite database file are indexed. After that, each call to index_archives() indexes only the new messages added since the previous call.
- Index Schema:
fields.Schema(msgid=fields.ID(stored=True, unique=True), ...)
Here we only need to store msgid (stored=True), since we can query the database with the msgid to get the other fields. The downside is the extra overhead of querying the database to show metadata for search results. For now let's leave it in this state; if it turns out to be a cause of poor performance, then I'll store author, subject and body as well.
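A sketch of the incremental idea behind index_archives() using Whoosh; the extra field names and the new_messages argument are assumptions, not the actual signature:

from whoosh import fields, index

# Only msgid is stored; the other fields are searchable but not stored.
schema = fields.Schema(
    msgid=fields.ID(stored=True, unique=True),
    author=fields.TEXT(),
    subject=fields.TEXT(),
    body=fields.TEXT(),
)

def index_archives(index_dir, new_messages):
    """Index only the messages added since the previous call."""
    # index_dir is assumed to already exist on disk
    if index.exists_in(index_dir):
        ix = index.open_dir(index_dir)
    else:
        ix = index.create_in(index_dir, schema)
    writer = ix.writer()
    for msg in new_messages:
        writer.add_document(msgid=msg.msgid, author=msg.author,
                            subject=msg.subject, body=msg.body)
    writer.commit()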
As Barry had suggested, I have started looking at writing tests and documentation. I have been following this excellent guide to testing and documentation for Pylons.
I have finished the basic implementation of the archiver UI using the Pylons framework and Storm ORM. Though it still requires some minor fixes, it covers:
1. a conversation-list view with a quick view of conversation messages (using Ajax)
2. a conversation page with indentation and a quote-hiding feature.
Code can be found at: http://code.launchpad.net/~dushyant37/+junk/ArchiverUI
Egg package: http://bazaar.launchpad.net/~dushyant37/+junk/ArchiverUI/files/head:/ArchiverUI/dist/
Other info: README.txt
The ArchiverUI package just requires a sqlite database file. But currently, the test database file that I am using has not been generated from any real list archives (an mbox file), so it is not good for demo purposes.
Initially, I wasn't sure whether I should focus on generating a database for demo purposes. But then I realised it would be good to get feedback from the Mailman community, so I started working on this and am now about to finish it.
Overall, generating pages dynamically seems to be a better way to view archives. I would like to discuss some other issues before proceeding further.
1. Performance: We need to look at the performance of the new archiver in order to improve it as well as to compare this approach of generating pages on the fly with the static one. Is there any specific way to go about it?
After that, a performance gain can be achieved by proper use of the caching offered by Pylons and Storm/sqlite.
2. Interaction with Mailman: Right now, the interaction with Mailman (the archiver part) is through a sqlite database file, so we just need to update the relevant database files when a message is archived through Mailman.
If we want to further separate the archiver out from Mailman, we can use the methods used by the mhonarc and mailarchive interfaces.
3. Search: I also plan to integrate the search functionality for archives (work done by Priya) into this Pylons project.