Reviews for "Scientometrics"

Journal | 1st rev. rnd. | Tot. handling | Im. rejection | Reports | Quality | Overall rating | Outcome | Year
Scientometrics | n/a | n/a | 62.0 days | n/a | n/a | n/a | Rejected (im.) | 2022
Motivation: To justify the manuscript's rejection (without external review), the only comment was as follows: "Comment by the editor: We have again decided against publication in our journal for the following reasons. Firstly, the theoretical embedding of the model into the scientometric context seems to be problematic. The same questions regarding quantification and measurement of the notion of quality (both peer review and journal) can be quantified and measured. Again, we do not see any attempt of giving empirical examples to show concrete implications for quantitative science studies. Finally, the grandiosely introduced free online tool proved a formula for just calculating sensitivity/specificity values based on partially obscure parameters without any other features."

We sent an email to the EiC of Scientometrics (which received no response). In our email, we argued that such an editorial comment reveals a content-based bias against our research. The purpose of peer review is not to “kill the paper,” but to evaluate it in terms of its strengths and limitations. The editor's comment, however, shows partiality against our submission by virtue of the content (i.e., the methods and theoretical orientation) of the work. This is a confirmation bias that undermines the impartiality of peer review, because an editor should evaluate a submission on the basis of its content and its relationship to the literature, independently of his or her own theoretical and methodological preferences and commitments. Furthermore, this is a disciplinary editor who prefers mainstream research and is biased against our interdisciplinary research, which used Bayesian inference and information theory to define the optimal values of the sensitivity and specificity of the peer-review process. In addition, he or she treated us, the authors, in a contemptuous and inconsiderate manner.

Note the repetitive use of “again” and the incoherence of some of their sentences. Especially disrespectful, and showing a complete lack of rigor and expertise, is the comment: “the grandiosely introduced free online tool proved a formula for just calculating sensitivity/specificity values based on partially obscure parameters without any other features.” It is also telling that this comment (issued without external review), amounting to seven lines of text, took a total of two full months. In our opinion, this is another clear attempt to hinder our work.

Mathematics and computer science are, by definition, quantitative fields, and we have published more than 30 papers in this same journal using a similar theoretical orientation.

In the last year, however, this same theoretical and mathematical approach apparently is no longer acceptable to this editor, who is repeatedly assigned to our submissions. He or she is clearly opposed to our approach and our work, and does not send the manuscripts out for external review, even though his or her comments show no particular expertise and lack (in our opinion) the minimum rigor necessary to evaluate the work of scientists, all the more so after we have published around thirty papers in Scientometrics.
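As a generic illustration of how sensitivity and specificity of peer review combine with Bayes' rule (this is a minimal sketch with hypothetical parameter values, not the actual tool or model from the rejected submission):

```python
# Illustrative sketch: peer review treated as a binary classifier over
# manuscript quality. All parameter values below are hypothetical.

def review_metrics(sensitivity, specificity, prevalence):
    """Bayesian post-review probabilities for a peer-review 'test'.

    sensitivity: P(accept | high-quality manuscript)
    specificity: P(reject | low-quality manuscript)
    prevalence:  P(high-quality manuscript) among submissions
    """
    # Total probability of acceptance (law of total probability)
    p_accept = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    # Bayes' rule: probability that an accepted paper is high quality (PPV)
    ppv = sensitivity * prevalence / p_accept
    # Probability that a rejected paper is truly low quality (NPV)
    npv = specificity * (1 - prevalence) / (1 - p_accept)
    return ppv, npv

ppv, npv = review_metrics(sensitivity=0.8, specificity=0.7, prevalence=0.3)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

With these illustrative numbers, only about half of accepted papers would be high quality, which shows why the prevalence of quality among submissions matters as much as the reviewers' accuracy.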
Scientometrics | 12.3 weeks | 18.6 weeks | n/a | 2 | 3 (good) | 4 (very good) | Accepted | 2022
Scientometrics | 6.0 weeks | 10.6 weeks | n/a | 2 | 4 (very good) | 5 (excellent) | Accepted | 2020
Scientometrics | n/a | n/a | 9.0 days | n/a | n/a | n/a | Rejected (im.) | 2021
Motivation: Reviewers are human and may be affected by cognitive biases when information overload comes into play. In fact, no amount of scientific training will completely suppress the human impulse toward partisanship. The consequence is that authors may receive incorrect editorial decisions on their submissions to peer-reviewed journals. For instance, the journal editor may request a substantial revision when in fact a moderate one would suffice; this is over-revision in peer review. In this situation, the journal editor incurs a fraud cost when requesting a substantial revision from the author even though a moderate one would be sufficient.

Thus, in our research paper, we identified a set of conditions under which the peer-review process involves equilibrium fraud and over-revision. An equilibrium in peer review is efficient if the first peer-reviewed journal to which the author submits their research paper makes a truthful editorial decision, which the author accepts. When the fraud cost is sufficiently high, an efficient equilibrium exists. Otherwise, when the fraud cost cannot sustain an efficient equilibrium, a specialization equilibrium may arise in which the author first submits the manuscript to a top journal, which makes a truthful editorial decision. This specialization equilibrium may explain why academic journals with higher quality standards more often attract authors who write articles of higher quality. Finally, when the fraud cost is not too large, we show that a new type of equilibrium emerges in our model: equilibria involving costly fraud, in which the first peer-reviewed journal to which the research paper is submitted always requests substantial revisions. If the review time and the probability of very serious concerns from reviewers were large, the author would prefer to send the research paper to a single peer-reviewed journal even if that involved over-revision. In the fraud equilibrium, the author's revision cost is high and independent of the true quality of the manuscript.
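The regime structure described above can be sketched as a simple decision rule over the fraud cost; the thresholds `c_low` and `c_high` below are hypothetical placeholders, not the paper's actual equilibrium conditions:

```python
# Hypothetical sketch of the three equilibrium regimes by fraud cost c.
# The threshold values are illustrative, not derived from the model.

def equilibrium_regime(fraud_cost, c_low, c_high):
    """Classify the peer-review equilibrium by the fraud cost.

    c >= c_high         -> efficient equilibrium (truthful decisions)
    c_low <= c < c_high -> specialization equilibrium (top journal truthful)
    c < c_low           -> fraud equilibrium (always substantial revisions)
    """
    if fraud_cost >= c_high:
        return "efficient"
    if fraud_cost >= c_low:
        return "specialization"
    return "fraud"

for c in (0.2, 0.6, 1.5):
    print(c, equilibrium_regime(c, c_low=0.5, c_high=1.0))
```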

In summary, drawing on game theory, the paper proposed a simple model of authors submitting manuscripts to peer-reviewed journals in order to investigate whether a competitive set of academic journals performs efficiently. We identified a set of conditions under which the peer-review process involves equilibrium fraud and over-revision. To this end, we analyzed Perfect Bayesian Equilibria under the assumption that there exist top journals that only publish high-quality research.

To support these results, our submission included an appendix with four mathematical propositions and their proofs.
Scientometrics | 9.6 weeks | 11.4 weeks | n/a | 2 | 4 (very good) | 5 (excellent) | Accepted | 2020
Motivation: The editor handled the paper swiftly and the comments of reviewers were very helpful in streamlining and improving the paper.
Scientometrics | n/a | n/a | 11.0 days | n/a | n/a | n/a | Rejected (im.) | 2019
Scientometrics | 9.4 weeks | 9.4 weeks | n/a | 2 | 4 (very good) | 4 (very good) | Rejected | 2016
Motivation: The review quality was good, and the reviewers offered useful suggestions for improving the manuscript.
Scientometrics | 7.0 weeks | 9.4 weeks | n/a | 1 | 5 (excellent) | 5 (excellent) | Accepted | 2017
Motivation: The timing for the review process and the quality of the review itself exceeded all expectations.
Scientometrics | 5.9 weeks | 7.7 weeks | n/a | 1 | 4 (very good) | 4 (very good) | Accepted | 2016
Motivation: In general, relying on a single reviewer may not be enough to ensure the quality of a paper. However, in this particular case the review received was very good and considerably improved the manuscript.
Scientometrics | 21.7 weeks | 21.7 weeks | n/a | 2 | 3 (good) | 3 (good) | Accepted | 2014
Scientometrics | 3.0 weeks | 13.4 weeks | n/a | 1 | 3 (good) | 4 (very good) | Accepted | 2015
Motivation: Generally very good handling of the manuscript; the first review report was very good and instructive. The duration of the second review round was surprisingly long (10 weeks), given that the editor's decision after the first review round was "accept conditional upon minor revisions".
Scientometrics | 11.0 weeks | 15.0 weeks | n/a | 2 | 4 (very good) | 4 (very good) | Accepted | 2013
Scientometrics | 8.7 weeks | 17.4 weeks | n/a | 2 | 2 (moderate) | 4 (very good) | Accepted | 2013
Motivation: Swift review process. Both editor and reviewers focused on improving the manuscript. Generally positive experience.
Scientometrics | 6.0 weeks | 6.0 weeks | n/a | 1 | 3 (good) | 3 (good) | Accepted | 2012
Motivation: The review process was speedy and adequate. However, some editorial details had to be fixed (switching decimal commas to decimal points in figures, to comply with journal style), which proved very slow due to misunderstandings and technical problems. This significantly delayed the final acceptance of the article.