Case Study: Documenting machine learning models in a Julia ML framework

Julia is a relatively new general-purpose programming language. MLJ (Machine Learning in Julia) is a toolbox written in Julia providing a common interface and meta-algorithms for selecting, tuning, evaluating, composing and comparing a variety of machine learning models implemented in Julia and other languages.
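
For a flavor of what that common interface looks like, here is a minimal sketch (assuming MLJ and the DecisionTree interface package are installed; the model, hyperparameter, and dataset are chosen only for illustration):

    using MLJ

    X, y = @load_iris                                     # a small built-in dataset
    Tree = @load DecisionTreeClassifier pkg=DecisionTree  # load a third-party model
    tree = Tree(max_depth=3)                              # set a model hyperparameter

    # The same generic machinery (evaluation, tuning, composition, ...) works
    # for any model exposed through the common interface:
    evaluate(tree, X, y; resampling=CV(nfolds=5), measure=accuracy)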

Authors: Anthony Blaom, Logan Kilpatrick and David Josephs
Problem Statement


While MLJ provides detailed documentation for its model-generic functionality (e.g., hyperparameter optimization), users previously relied on third-party package providers for model-specific documentation. That documentation was physically scattered, occasionally terse, and not in any standard format. This was viewed as a barrier to adoption, especially by users new to machine learning, which is a large demographic.

Proposal Abstract

With a standard for model document strings already decided, this project’s goal was to roll out document strings for individual models. For a suitably identified technical writer, this was to involve:

  • Learning to use MLJ for data science projects
  • Understanding the document string specification
  • Reading and understanding third-party model documentation
  • Boosting machine learning knowledge where appropriate to inform accurate document strings
  • Collaborating through code reviews in the writing of new document strings

Details of the proposal are on the Julia website.
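
Concretely, the aim is for users to be able to read model-specific documentation from within a Julia session, rather than hunting it down in third-party sources. A minimal sketch of what that looks like (again assuming MLJ and the DecisionTree interface package are installed; the model is chosen only for illustration):

    using MLJ

    # Load the model type; `Tree` becomes an alias for it:
    Tree = @load DecisionTreeClassifier pkg=DecisionTree

    # The model's document string can then be read in the usual Julia ways,
    # for example via help mode (`?Tree`) or the @doc macro:
    @doc Tree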


Project Description

Creating the proposal

Our Google Season of Docs process always starts with an open solicitation to the community for project ideas. These are generally crowd-sourced and added to the Julia website. From there, the core Julia team evaluates each possible proposal based on the level of contributor interest, impact on the community, and enthusiasm of the mentor. As we have learned from Google Summer of Code over the last 10 years, the contributor experience is profoundly shaped by the mentor, so we work hard to make sure there is someone with expertise and adequate time to support each project if selected.

This year, we were lucky enough to have a project that checked all three boxes. MLJ’s usage in the Julia ecosystem has expanded significantly over time, so it seemed like a worthwhile investment to support the project with documentation help, especially around something as critical as model information.

Once we officially announced that the MLJ project was the one selected, we shared this widely with the community for input. Generally, unless they are close to the proposed project itself, people don’t have much to say. Nonetheless, this process is still critical for transparency in the open-source community.

Budget

Our budget was estimated based on previous years of supporting technical writers in similar domains and with similar scopes of work. Estimating is always more of an art than a science, which is why we tend to add a buffer of time and budget to absorb unexpected hiccups.

Initially, we intended to have two main mentors, but due to mentor availability we ended up with one person (Anthony), who did most of the mentoring work. We spent the full amount allocated for the project, per our expectations (except for ordering our wrap-up t-shirts, which is still in progress).

Participants

MLJ’s co-creator and lead developer Anthony Blaom managed the project, reviewed contributions, and provided mentorship to the technical writer, David Josephs. Several third-party model package developers/authors were also involved in documentation review, including GitHub users @ExpandingMan, @sylvaticus, @davnn, @tlienart, @okonsamuel. Logan Kilpatrick co-wrote the proposal, helped with recruitment, and took care of project administration.

When we knew we would be getting funding, we immediately shared the hiring details with the community on Slack and Discourse, and posted a job listing on LinkedIn to cast the widest possible net. Prospective candidates were asked to write a little about their background and describe previous technical writing experience and open-source contributions. This information, together with published examples of their technical writing, was evaluated. Two candidates were invited to one-on-one Zoom interviews, which followed up on the written application and gave candidates an opportunity to demonstrate oral communication skills, which were deemed essential.

Did anyone drop out? No.

Since familiarity with Julia was strongly preferred, and some data science proficiency essential, it was challenging to find a large pool of candidates. In the end we selected a candidate who was strong in data science but less experienced with Julia. That said, our writer David had just started working for a company that codes in Julia, which worked out nicely for us: David quickly came up to speed with the Julia proficiency we needed. Our experience reaffirmed the importance, in work like ours, of scientific domain knowledge (machine learning) and good communication skills over specific technical skills, such as proficiency with a particular tool.

Timeline

Our original proposal details a timeline. Our initial ambition included documentation for all models, with the exception of the scikit-learn models, and time was divided equally among model-providing packages. In hindsight, this was a poor distribution, as some packages provide many more models than others. Gauging progress was further complicated by the fact that some models had vastly more hyperparameters to document.


Results

A tracking issue nicely summarizes the results of the project and its status going forward, beyond Google Season of Docs 2022. Documentation additions were made in the following packages, linked to the relevant pull requests:

The technical writer also made code additions to synthesize multi-target supervised learning datasets, improving some docstring examples:

Were there any deliverables in the proposal that did not get created? The following packages did not get new docstrings, but were included in the original proposal:

Did this project result in any new or updated processes or procedures in your organization? No.

Metrics

What metrics did you choose to measure the success of the project? Were you able to collect those metrics? Did the metrics correlate well or poorly with the behaviors or outcomes you wanted for the project? Did your metrics change since your proposal? Did you add or remove any metrics? How often do you intend to collect metrics going forward?

Initially, progress was measured by the number of third-party packages documented but, as described above, a better measure was the proportion of individual models documented. As the project is quite close to being finished, I don’t imagine we need to rethink our metrics for this project.

Analysis

What went well? What was unexpected? What hurdles or setbacks did you face? Do you consider your project successful? Why or why not? (If it's too early to tell, explain when you expect to be able to judge the success of your project.)

This documentation project was always going to have some tedium associated with it, and it was fantastic to have help. Our technical writer was super enthusiastic and eager to learn things beyond the project remit. This enthusiasm did a lot to boost my own (Anthony’s) engagement. All in all, the communication side of things went very well.

I think having our writer David work at a Julia shop (a startup using Julia) was an unexpected benefit, as it increased the exposure of the MLJ project; we had a few volunteer contributions from a co-worker, for example. Of course, our project and David’s company shared the goal of boosting David’s Julia proficiency quickly. I believe David’s new expertise in MLJ is a definite benefit for his company, which currently builds Julia deep learning models.

Another benefit of the project was that the process of documentation occasionally highlighted issues or improvements with the software, which were then addressed or tagged for later projects. Moreover, David provided valuable feedback on his own experience with the software, as a new user.

As manager of the project, I did not anticipate how much time pull-request reviews would take. I’ve learned that reviewing documentation is at least as intensive as code review. In documentation review there is no test suite to provide extra reassurance; you really need to check every word carefully.

Fortunately, there were no big setbacks. I would definitely rate the project a success: we were able to achieve most of our goals, and this is certain to smooth out the on-ramp for new MLJ users. The final analysis will come over time, as we track engagement levels and user feedback. A survey has been prepared and is to be rolled out soon.

Summary


In this project, a Google Season of Docs technical writer added document strings to models provided by most of the machine learning packages interfacing with the MLJ machine learning framework. The writing was primarily supervised and reviewed by one other contributor, the framework’s lead author and co-creator.

The main lesson for the MLJ team has been that creating good docstrings is a lot of work, with the review process as intensive as code review. It is easy to underestimate the resources needed for good documentation. Recruiting for short-term Julia-related development is challenging, given the language’s young age.

In recruitment, it pays to value domain knowledge and good oral and written communication skills over specific skills, like proficiency in a particular language, assuming you have more than a few months of engagement. Doing so in this case led to a satisfying outcome. (By contrast, we have found a lack of Julia proficiency in GSoC projects more challenging.)


Appendix

A blog post describes our technical writer’s experience working on the project.

Acknowledgements

Anthony Blaom acknowledges the support of a New Zealand Strategic Science Investment awarded to the University of Auckland, which funded his work on MLJ during the project.

Top comments (5)

Fortune Walla

A very well explained account of the intricate process of finding the right person for a job with the overwhelming responsibility of communicating the workings of the Julia ML code to a scientific audience. 👏

1) When you say sufficient scientific domain knowledge in ML, do you mean people who are already working in ML, or would ML knowledge also be enough?

2) For a beginner, could completing basic courses from this list be sufficient to be considered as a technical writer in MLJ?

ml.mit.edu/classes2.html

Patrick Altmeyer

Very interesting, thanks Anthony, David and @logankilpatrick.

MLJ is such an ambitious and important project for ML in Julia so it's great to see it being promoted. Anthony is also very proactive and willing to offer help, which I've much appreciated when working on interfacing ConformalPrediction.jl to MLJ. The MLJModellingInterface makes this process quite straight-forward and I hope that more and more developers will recognise its value.

Out of curiosity, what has come out of this related project on interpretable ML?

Fortune Walla

Thanks for the last link. It shows the magnitude of the effort needed to transform MLJ into a computational software powerhouse. I like that name "Speed demons only need apply"😄
