Each article is assessed in two categories: 'significance of findings' (5 grades, from 'useful' to 'landmark') and 'strength of support' (6 grades, from 'inadequate' to 'exceptional').
Will SciPost be left behind in the quest for best practices in academic publishing? Of course, it is very hard to make important changes to the way a journal operates: usually, new practices require new journals. However, eLife has now shown that radical change can at least be attempted. It will take some time to know whether they succeed.
Yes, that is an interesting development, and I do think SciPost could go in that direction. The more urgent problem SciPost is facing is scalability: our way of operating (with a fully volunteer set of editors) does not scale well with the number of submissions. Will the eLife model help in that respect?
I am not sure this new model would help with the scalability issue: editors would still have to be found, and (substantial) referee reports would still have to be procured in their model.
For comparison, some data on eLife are available at the end of their 2021 report. We can also get an idea of their number of editors from this page.
In the current SciPost model, reviews of rejected articles do not generally remain publicly available. Not rejecting papers means that all reviews remain available. This means less wasted work, which is surely good for the system in general, although maybe not for SciPost's scalability in particular.
Emulating this system would be an improvement for SciPost. The main change would be to put editors in charge of assessment, so that every article receives one. (Reviewers could still make suggestions.) Adopting the same grades as eLife would also be good.
A more ambitious change would be for the assessment to supersede the tiering system (top 10%, etc.) and the distinction between SciPost Physics and SciPost Physics Core.