CCP4 5.0 Release Review - 5th October 2004

Attendance: Chris Morris (chairing), Charles Ballard, Martyn Winn, Peter Briggs.

1. Agenda

Chris proposed the agenda, which was agreed.

2. Review of the facts

Peter Briggs prepared a factual review and timeline of the release process as a PowerPoint presentation (PPT file available for download).

Slide 2 (personnel): Maria Turkenburg (YSBL) contributed documentation, but is not managed as part of the core staff.

Slide 3: CVS was also used from the beginning, with web access to a copy of the tree.

Slide 4 indicates the sections in the subsequent presentation.

Slide 8: a list of the intended core components. Updates to Refmac and Molrep were not core. Harry Powell insisted that the release include an update to Mosflm, or else the right to distribute it would be withdrawn.

Slide 9: Should read "most, except for the core libraries."

Slides 8-11 are a record of a meeting on 5th-6th Dec 2002.

Slide 12: The third date should be 27/03/03.

Slide 13: There had been complaints by commercial users about the licence, and Paul Emsley argued for use of LGPL for the Clipper libraries.

Slide 14: the "missing" components were in some cases ones that were not available at all, and in other cases ones for which revisions were expected. In the case of FFTW, licence negotiations were required.

Slide 15: reasons for decisions are not always recorded.

Slide 16: It was decided that WHAT_CHECK was too big.

Slide 18: Eleanor had been testing BULK, and in September it was accepted into the release. By then the activity was fixing rather than putting the suite together.

Slide 19: "reverse project" should read as "reverse drift of project"

Slide 21: In April, the Sales/Contract department in RAL said that we could not release under the academic licence. A compromise was reached later.

Slide 23: Week-by-week figures are not available, and some FTP logs are not available. Bugzilla results are similarly not available weekly. Entry of problems into Bugzilla is uneven. It would be useful to have a graph of emails per week, but not if it includes spam.

3. Lessons for the future

Slide 5: Lessons learnt from 4.2.

1, 3, 4. Discussion and communication: SHARP and SOLVE were not tested against the revised CCP4 libraries in good time. However, it is not clear that the problem was communication.

2. Code freeze. There was no real agreement to coordinate development with the CCP4 release, especially after the delays in the release.

Pryank commented by email: "Is there a clear definition as to what counts as a developmental change, and what is classified as a bug fix?"

Slide 6: Pryank commented: "was the release plan unrealistic with regards to the amount of time reserved for ... testing?" Allocating this much time was controversial at the time.

Slide 11: Refmac needed patches to run with the new release; these had to be reapplied each time we received new code, since Refmac is not kept in CVS.

We learnt from York that ARP/wARP 6.0 was not compatible with the new release. The ARP/wARP team was working on 6.1, which was delayed. Eventually, for a patch release, Martyn posted a fix to ARPwaters. (ARP/wARP is not part of the CCP4 suite.)

Martyn pointed out that if we had had test scripts for these programs in-house, we might have been more conscious of the problems and managed them more actively.

We published a guide to converting from the old libraries to the new ones. Future releases should recognise the need for that earlier. It takes time for external developers to make the changes.

See also slide 21.

Slide 12: targets were "deadlines" to meet external goals, not driven by estimates. In future, it would be useful to record estimates. Version 5 meant big changes which made it harder to estimate, and future releases may not resemble it.

Slide 13: licensing was a major effort that had not been planned. A release could probably have been shipped in September-October 2003 if a licence had been available. It was a false start to begin with the existing licence. We do not know how this could have gone better.

"rolling releases" did not seem to work - the rate of defect reports was low, and increased only when a beta was issued. This included the summer period. Rolling releases may have increased the risk that users have mixed versions.

Should we look for new ways to test the suite more thoroughly? Our test suite has some obvious gaps where it does not exercise new facilities. Some of the defect reports to the mailing lists are specific enough that it is possible to make a regression test from them.
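
As an illustration of the kind of regression test meant here, the sketch below (in Python, with hypothetical program, file and value names) turns a specific defect report into an automated check: run the program on the reported input and verify the summary figure that was wrong before the fix.

    # Minimal regression-test sketch (hypothetical names throughout).
    # Runs a suite program on a fixed input and checks a key value in its
    # log output against a stored reference, so a fixed defect stays fixed.
    import re
    import subprocess

    def test_overall_r_factor():
        # "some_program" and "known_case.mtz" stand in for whichever
        # program and test data the original defect report described.
        result = subprocess.run(
            ["some_program", "HKLIN", "known_case.mtz"],
            capture_output=True, text=True, check=True,
        )
        # Pull a summary statistic out of the log and compare it with the
        # value recorded when the defect was first fixed.
        match = re.search(r"Overall R factor\s*=\s*([0-9.]+)", result.stdout)
        assert match is not None, "expected summary line missing from log"
        assert abs(float(match.group(1)) - 0.213) < 0.005  # reference value

    if __name__ == "__main__":
        test_overall_r_factor()
        print("regression test passed")

Such tests could be collected and run against each candidate release, so that defects already reported on the mailing lists cannot silently reappear.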

Slide 16: there were still defects in the libraries. These were coming to light slowly as external groups began to use them. We did not have unit tests for them. Valgrind found some errors; perhaps static checkers would have found some as well.

Slide 18: There is a lesson that deliverables other than software can also take substantial time, e.g. documentation. We also need to recognise that there are other calls on the team's time.

Slide 21: It was necessary to release urgently, while there was agreement on a licence. Some collaborators were not ready. We lack some credibility in setting dates.

Slide 22: The total number of defects was acceptably low, but there were two very serious defects. This underlines the point about needing a stronger test suite.

Notes provided by Chris Morris (7th October 2004)
Html-ised by Peter Briggs (14th October 2004)