These are chat archives for openworm/ChannelWorm

13th
May 2015
Gopal Sarma
@gsarma
May 13 2015 03:49
@travs, @slarson, @vahid: I've gone through your exchange from yesterday. A few thoughts:
@travs and I had a meeting about testing the other day. I've read through the sciunit code and it is not very complicated.
There are basic tests for comparing inputs and outputs, like you would have in any testing framework, as well as statistical tests for comparing models to data.
The latter case, comparing models to data, is no different from what any scientist does when deciding to publish a result involving collected data. In our case, we just need to figure out (either by looking at papers or by talking to people) what kinds of statistical tests we need, and then add them to sciunit or take advantage of the existing statistical libraries Python has.
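For concreteness, here is a minimal sketch of that kind of model-vs-data comparison, using scipy.stats as one of the existing Python statistical libraries mentioned above. All data values and the tolerance are made up for illustration; this is not ChannelWorm code.

```python
import numpy as np
from scipy import stats

# Hypothetical digitized experimental currents, and the model's
# predicted currents at the same voltage points.
observed = np.array([-2.1, -0.5, 1.8, 4.2, 6.9])
predicted = np.array([-1.9, -0.7, 2.0, 4.0, 7.3])

# One option: a paired t-test on the two samples. A large p-value means
# the model is statistically indistinguishable from the data at this
# sample size.
t_stat, p_value = stats.ttest_rel(observed, predicted)
print("t = %.3f, p = %.3f" % (t_stat, p_value))

# A simpler alternative: root-mean-square error against a tolerance.
rmse = np.sqrt(np.mean((observed - predicted) ** 2))
assert rmse < 0.5, "model deviates from the data beyond tolerance"
```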
I've been doing a fair bit of testing at work recently, so I have some thoughts on the process as a whole.
A few observations from what I've learned at work:
1) I really believe in the idea of test-driven development and am moving towards that in my own work.
2) Testing is very straightforward and testing frameworks are really not very complicated. But many people are not motivated to do it.
3) It's easy to write huge numbers of useless tests. It's important to understand how the primary code base is evolving, so you can anticipate what kinds of tests will be useful as future additions are made, or when large amounts of code are refactored.
4) It's worth having tests for run-time or memory usage. That way you can see if something suddenly gets a lot slower, or uses up more memory than it used to (a minimal sketch follows this list).
5) If you have large numbers of tests, you need good ways to display the results so you can easily see which ones are failing. IPython notebooks might be a good way for us to do this.
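A minimal sketch of the run-time test suggested in point 4, written with the standard unittest module. Here simulate_channel and the one-second budget are hypothetical placeholders, not actual ChannelWorm code:

```python
import time
import unittest

def simulate_channel():
    # Stand-in for a real ChannelWorm simulation run.
    time.sleep(0.01)

class PerformanceTest(unittest.TestCase):
    def test_simulation_runtime(self):
        start = time.time()
        simulate_channel()
        elapsed = time.time() - start
        # Fails loudly if the code suddenly gets a lot slower.
        self.assertLess(elapsed, 1.0, "simulation exceeded its time budget")

if __name__ == "__main__":
    unittest.main()
```

A memory-usage test could follow the same pattern, for example by checking resource.getrusage on platforms that support it.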
Gopal Sarma
@gsarma
May 13 2015 04:06
Two ideas specific to us:
1) I think we should be working on documentation hand in hand with testing. One of my interests in being involved with testing is to familiarize myself with the big picture of the code base, so helping to document our internal functions will be an important way to gain that high-level knowledge.
2) I am also interested in the larger set of issues related to code quality. In particular, I am interested in the idea of having "lints" that identify not just problematic code but also stylistic problems, or that enforce or encourage certain coding practices across the board (a sketch of running a style check as a test follows below). I think this will be a great way to deal with the particular difficulties of a large, heterogeneous, open-source project.
@travs if we can jump in and start writing some tests this weekend, that would be great. Even if they are completely trivial, we just need to get the ball rolling and will learn from our mistakes.
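As one illustration of the lint idea in point 2, a style check can be run as an ordinary test using the pycodestyle package (the renamed pep8 tool). The target path "channelworm" and the line-length limit are hypothetical choices here:

```python
import unittest
import pycodestyle

class StyleTest(unittest.TestCase):
    def test_pep8_conformance(self):
        # Check every Python file under the given directory against
        # PEP 8, with one project-specific override.
        style = pycodestyle.StyleGuide(max_line_length=100)
        result = style.check_files(["channelworm"])
        self.assertEqual(result.total_errors, 0,
                         "found %d style errors" % result.total_errors)

if __name__ == "__main__":
    unittest.main()
```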
Vahid Ghayoomie
@VahidGh
May 13 2015 07:13
@gsarma, good points.
One thing to note is that the short-term objective of our testing framework is what is discussed in openworm/muscle_model#30. Once we have that, it could be extended to cover the other curves and parameters that usually exist in similar studies (such as I/t, V/t, activation/inactivation rates, etc.). I'm just saying this so we keep the big picture in mind: it's not going to cover just I/V curves, as you pointed out.
The same generalization applies to the digitization and fitting processes, whose outputs would ultimately be the inputs to the testing framework.
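To make that generalization concrete, a curve-agnostic comparison helper might look something like the sketch below. All names and values are invented for illustration:

```python
import numpy as np

def compare_curve(curve_name, digitized, fitted, tolerance):
    """Compare a fitted curve against digitized data for any curve type
    (I/V, I/t, V/t, activation/inactivation, ...), both sampled at the
    same x points, as produced by the digitization and fitting steps."""
    residuals = np.asarray(digitized) - np.asarray(fitted)
    rmse = np.sqrt(np.mean(residuals ** 2))
    print("%s curve RMSE: %.4f" % (curve_name, rmse))
    return rmse <= tolerance

# Example usage with made-up I/V points:
assert compare_curve("I/V", [0.0, 1.2, 3.4], [0.1, 1.1, 3.6], tolerance=0.5)
```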
Travis Jacobs
@travs
May 13 2015 14:10
@gsarma Awesome stuff!
I can see documenting this being easier if we use IPython notebooks as well, so let's do that.
Travis Jacobs
@travs
May 13 2015 14:18
As for writing tests this weekend, I'm all for it. I'll come up with some Capabilities for our model over the next few days to give us some direction.
@gsarma you should get a poll from me shortly.
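For anyone unfamiliar with sciunit's Capabilities, here is a sketch of what one might look like for this model. The class and method names are invented for illustration, not actual ChannelWorm code:

```python
import sciunit

class ProducesIVCurve(sciunit.Capability):
    """Declares that a model can produce an I/V curve."""

    def produce_iv_curve(self):
        """Return (voltages, currents) for the channel."""
        raise NotImplementedError("must be implemented by the model")

class ToyChannelModel(sciunit.Model, ProducesIVCurve):
    """A trivial model claiming the capability, with made-up numbers."""

    def produce_iv_curve(self):
        return [-40, -20, 0, 20], [-1.0, 0.5, 2.0, 3.5]
```

A sciunit Test would then require ProducesIVCurve and call produce_iv_curve on any model that claims it.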
Gopal Sarma
@gsarma
May 13 2015 23:35
@VahidGh thank you for the link and the pointers; I didn't know about this ticket.